Dataset fields: id (int64, 39 to 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items)
1,521,015
https://en.wikipedia.org/wiki/HomeRF
HomeRF was a wireless networking specification for home devices. It was developed in 1998 by the Home Radio Frequency Working Group, a consortium of mobile wireless companies that included Proxim Wireless, Intel, Siemens AG, Motorola, Philips and more than 100 other companies. The group was disbanded in January 2003, after other wireless networks became accessible to home users and Microsoft began including support for them in its Windows operating systems. As a result, HomeRF fell into obsolescence. Description Initially called Shared Wireless Access Protocol (SWAP) and later just HomeRF, this open specification allowed PCs, peripherals, cordless phones and other consumer devices to share and communicate voice and data in and around the home without the complication and expense of running new wires. HomeRF combined several wireless technologies in the 2.4 GHz ISM band, including IEEE 802.11 FH (the frequency-hopping version of wireless data networking) and DECT (the most prevalent digital cordless telephony standard in the world) to meet the unique home networking requirements for security, quality of service (QoS) and interference immunity—issues that still plagued Wi-Fi (802.11b and g). HomeRF used frequency hopping spread spectrum (FHSS) in the 2.4 GHz frequency band and in theory could achieve a maximum of 10 Mbit/s throughput; its nodes could travel within a 50-meter range of a wireless access point while remaining connected to the personal area network (PAN). Several standards and working groups focused on wireless networking technology in radio frequency (RF). Other standards include the popular IEEE 802.11 family, IEEE 802.16, and Bluetooth. Proxim Wireless was the only supplier of HomeRF chipsets, and since Proxim also made end products, other manufacturers complained that they had to buy components from their competitor. The fact that HomeRF was developed by a consortium and not an official standards body also put it at a disadvantage against Wi-Fi and its IEEE 802.11 standard. AT&T joined the group because HomeRF was designed for high-speed broadband services and the need to support PCs, phones, stereos and televisions; but last-mile deployment occurred more slowly than expected and with slower speeds. So it was natural that the home networking market focused more on multi-PC households sharing Internet connections for email and browsing than on integrating phone and entertainment services into a broadband service bundle. As a result, the original promoter companies gradually started pulling out of the group rather than supporting multiple standards. They included IBM, Hewlett-Packard, Compaq, Microsoft, and lastly Intel. That left only companies like Motorola, National Semiconductor, Proxim, and Siemens. Even Proxim started pulling away when negative media surrounding HomeRF started affecting its core data networking business, and that left Siemens to do the work of integrating voice, data and video. Siemens was willing to do it alone with HomeRF technology but was concerned by growing uncertainties in the cordless phone market, including mobile phone as home phone, VoIP over Wi-Fi, and 5 GHz vs. 2.4 GHz. When Siemens eventually got out of the cordless phone market, it was the final nail in the HomeRF coffin. HomeRF achieved some success because of its low cost and ease of installation. By September 2000, however, the "home" in the name was causing confusion: it led some to associate HomeRF strictly with home networks and to turn to other technologies, such as IEEE 802.11b, for business use. 
A digital media receiver for audio that used HomeRF was marketed under the name "Motorola SimpleFi". In March 2001, Intel announced it would not support further development of HomeRF technology for its Anypoint line. The group promoting 802.11 technology, the Wireless Ethernet Compatibility Alliance (WECA), changed its name to the Wi-Fi Alliance in 2002, as the Wi-Fi brand became popular. WECA members also lobbied the FCC for two years, effectively delaying approval of wideband frequency hopping; this helped 802.11b catch up and gain an insurmountable lead in the market, a lead that was then extended with 802.11g. The use of OFDM in 802.11a and .11g solved many of the RF interference problems of .11b. WPA and 802.1X also improved security over WEP encryption, which was especially important in the corporate world. By January 2003 the Home Radio Frequency Working Group had disbanded. Archives of the HomeRF Working Group are maintained by Palo Wireless and Wayne Caswell. See also HomePlug - powerline home networking HomePNA - phoneline home networking ITU-T G.hn, a standard that provides a way to create a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). Wireless LAN Interoperability Forum References External links White Papers Home Networking Technologies - This basic (May 2001) white paper introduces the home networking market and applications and compares various technologies, including wired, no-new-wires, and wireless. Wireless Networking Choices for the Broadband Internet Home - This (2001) technical white paper examines three candidate wireless networking standards, HomeRF, Bluetooth and IEEE 802.11, against the needs of service providers and consumers for the Broadband Internet home. The clear choice for this specific application based upon technical merit is shown here to be HomeRF. Only HomeRF provides simultaneous support for up to 8 toll-quality voice connections, 8 prioritized streaming media sessions and multiple Internet and network resource connections at Broadband speeds. And HomeRF accomplishes this with excellent comparative ratings for low cost, small size, low power consumption, interference immunity, security and support for high network density. HomeRF Overview & Market Positioning - (white paper) In an effort to preserve HomeRF information, this paper from Eamon Myers was extracted from PaloWireless.com just before the domain name was sold and its contents removed. A Comparison of Security in HomeRF versus IEEE 802.11b (2001) - Though the possibility of attacks similar to those leveled at 802.11b systems exists in theory for HomeRF systems, the relative level of difficulty is very different. HomeRF is stronger in preventing unauthorized access due to its frequency hopping technology and since attempts are not enabled by commercially available equipment. Interference Immunity of 2.4 GHz Wireless LANs (2001) - Of the three major technologies available for this band, only HomeRF is designed with a frequency agile physical layer and robust upper layer protocols to combat 2.4 GHz interference. This is what makes HomeRF the ideal wireless LAN technology for the home environment. Quality of Service in the Home Networking Model (2001) - The market for home networking will soon see rapid growth. In addition to traditional data networking, this market will be driven by the desire of consumers to have access to multimedia audio, video, and gaming services. 
The Quality of Service (QoS) requirements these demands have put on home networking technologies have led to new standardization activities designed to deliver the QoS consumers will demand. In this paper we discuss the many ways in which QoS can be delivered, and then focus on the specific attributes of the HomeRF standard that enable it to deliver high QoS voice and multimedia services over a wireless home networking infrastructure. A Vision of Next Generation Home Phone Systems (2001) - This paper describes the role of HomeRF, mobile phones, and VoIP over broadband and portrays a vision that combines them all. HomeRF: Designed for Homes & Ideal for Teleworkers (2001) - This article from NetworkWorld compares HomeRF and Wi-Fi for home and remote work applications. HomeRF: Wireless Networking for the Connected Home (2000) - This technical article from IEEE Personal Communications describes how the Working Group was formed and the "vision" for the SWAP protocol, which includes the ability to add new functionality by blending previously separate applications for voice, data, and entertainment. Home automation
HomeRF
[ "Technology" ]
1,632
[ "Home automation" ]
1,521,032
https://en.wikipedia.org/wiki/Fetoscopy
Fetoscopy is an endoscopic procedure during pregnancy to allow surgical access to the fetus, the amniotic cavity, the umbilical cord, and the fetal side of the placenta. A small (3–4 mm) incision is made in the abdomen, and an endoscope is inserted through the abdominal wall and uterus into the amniotic cavity. Fetoscopy allows for medical interventions such as a biopsy (tissue sample) or a laser occlusion of abnormal blood vessels (such as chorioangioma) or the treatment of spina bifida. Fetoscopy is usually performed in the second or third trimester of pregnancy. The procedure can place the fetus at increased risk of adverse outcomes, including fetal loss or preterm delivery, so the risks and benefits must be carefully weighed in order to protect the health of the mother and fetus(es). The procedure is typically performed in an operating room by an obstetrician-gynecologist. History In 1945, Björn Westin published a study which documented his use of a panendoscope to directly observe embryos. In 1966, Agüero et al. published a study which used hysteroscopy to observe various features of the fetus, cervix, and uterus. In 1972, Carlo Valenti of the SUNY Downstate Medical Center recorded a technique which he called "endoamnioscopy", which allowed for direct visualization of the developing fetus. Gallinat made the first attempt to standardize these techniques in 1978. Because of the invasiveness of these procedures and the high risk they posed to the fetus, they were largely discarded in favor of transvaginal sonography until the 1990s. By that time, smaller instruments had been developed which reduced the risk to the fetus and provided a better visual for the physician. This in turn allowed for the development of techniques for surgical interventions such as biopsy. By 1993, authors such as Cullen, Ghirardini, and Reece had referred to this technique as "fetoscopy". The field of minimally-invasive surgical fetoscopy has continued to develop since the 2000s. Physicians such as Michael Belfort and Ruben Quintero have used the technique to remove tumors and correct spina bifida on fetuses within the uterus. Non-surgical fetoscopes Fetoscopy is a surgical procedure which may involve the use of a fibreoptic device called a fetoscope. Some confusion may arise from the use of specialized forms of stethoscopes, including Pinard horns and Doppler wands, to audibly monitor fetal heart rate (FHR). These audio diagnostic tools are also called "fetoscopes" but are not related to visual fetoscopy. See also Diaphragmatic hernia Fetal intervention Minimally invasive surgery Myelomeningocele Twin-to-twin transfusion syndrome References External links What is fetal therapy? Obstetric surgery Medical equipment Endoscopy Tests during pregnancy
Fetoscopy
[ "Biology" ]
637
[ "Medical equipment", "Medical technology" ]
1,521,283
https://en.wikipedia.org/wiki/Hardy%E2%80%93Ramanujan%E2%80%93Littlewood%20circle%20method
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem. History The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and the method still yields results. The method is the subject of a monograph by R. C. Vaughan. Outline The goal is to prove asymptotic behavior of a series: to show that $a_n \sim F(n)$ for some function $F$. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle. The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals. Setup The circle in question was initially the unit circle in the complex plane. Assuming the problem had first been formulated in the terms that for a sequence of complex numbers $a_n$, $n = 0, 1, 2, 3, \ldots$, we want some asymptotic information of the type $a_n \sim F(n)$, where we have some heuristic reason to guess the form taken by $F$ (an ansatz), we write a power series generating function $f(z) = \sum_{n=0}^{\infty} a_n z^n.$ The interesting cases are where $f$ is then of radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation. Residues From that formulation, it follows directly from the residue theorem that $a_n = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z^{n+1}}\, dz$ for integers $n \ge 0$, where $\gamma$ is a circle of radius $r$ and centred at 0, for any $r$ with $0 < r < 1$; in other words, $a_n$ is a contour integral, integrated over the circle described traversed once anticlockwise. We would like to take $r = 1$ directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of $f$ may not be defined there. Singularities on unit circle The problem addressed by the circle method is to force the issue of taking $r = 1$, by a good understanding of the nature of the singularities $f$ exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity $\zeta = e^{2\pi i p/q}.$ Here the denominator $q$, assuming that $p/q$ is in lowest terms, turns out to determine the relative importance of the singular behaviour of a typical $f$ near $\zeta$. Method The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be thus expressed. The contributions to the evaluation of $a_n$, as $r \to 1$, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity $\zeta$ into two classes, according to whether $q \le N$ or $q > N$, where $N$ is a function of $n$ that is ours to choose conveniently. The integral is divided up into integrals each on some arc of the circle that is adjacent to $\zeta$, of length a function of $q$ (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up $2\pi i F(n)$ (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than $F(n)$. 
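For concreteness, here is a sketch of the decomposition in the exponential-sum notation that has become standard since Vinogradov, written for Waring's problem; the symbols $P$, $N$, $\mathfrak{M}$ and $\mathfrak{m}$ and the precise shape of the arcs are conventional choices on our part rather than anything fixed by the method. Writing $r_{k,s}(n)$ for the number of representations of $n$ as a sum of $s$ $k$-th powers,
\[
f(\alpha) = \sum_{1 \le m \le P} e^{2\pi i \alpha m^{k}}, \qquad P = \lfloor n^{1/k} \rfloor, \qquad
r_{k,s}(n) = \int_0^1 f(\alpha)^{s}\, e^{-2\pi i \alpha n}\, d\alpha ,
\]
and one splits $[0,1]$ into major arcs $\mathfrak{M}$ (small intervals around the rationals $a/q$ with $q \le N$) and the complementary minor arcs $\mathfrak{m}$:
\[
r_{k,s}(n) = \int_{\mathfrak{M}} f(\alpha)^{s} e^{-2\pi i \alpha n}\, d\alpha + \int_{\mathfrak{m}} f(\alpha)^{s} e^{-2\pi i \alpha n}\, d\alpha .
\]
For $s$ sufficiently large in terms of $k$, the major arcs yield the main term $\mathfrak{S}(n)\, \Gamma\!\left(1+\tfrac1k\right)^{s} \Gamma\!\left(\tfrac{s}{k}\right)^{-1} n^{s/k-1}$, where $\mathfrak{S}(n)$ is the singular series, while the minor-arc contribution is bounded by a quantity of smaller order.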
Discussion Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions. Waring's problem In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at $z = 1$; followed by $z = -1$, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity $i$ and $-i$ that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity. In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled. Vinogradov trigonometric sums Later, I. M. Vinogradov extended the technique, replacing the generating function $f(z)$ with a finite Fourier series (an exponential sum), so that the relevant integral $a_n$ is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of $r \to 1$ in the limiting operation to be set directly to the value 1. Applications Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables $k$ is large relative to the degree $d$ (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If $d$ is fixed and $k$ is small, other methods are required, and indeed the Hasse principle tends to fail. Rademacher's contour In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution $z = \exp(2\pi i \tau)$, so that the contour integral becomes an integral from $\tau = i$ to $\tau = i + 1$. (The number $i$ could be replaced by any number on the upper half-plane, but $i$ is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1, as shown in the diagram. 
The replacement of the line from $i$ to $i+1$ by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions). Notes References Further reading External links Terence Tao, Heuristic limitations of the circle method, a blog post in 2012 Analytic number theory
Hardy–Ramanujan–Littlewood circle method
[ "Mathematics" ]
1,586
[ "Analytic number theory", "Number theory" ]
1,521,379
https://en.wikipedia.org/wiki/Oecus
Oecus is the Latinized form of Greek oikos, used by Vitruvius for the principal hall or salon in a Roman house, which was used occasionally as a triclinium for banquets. When of great size it became necessary to support its ceiling with columns; thus, according to Vitruvius, the tetrastyle oecus had four columns; in the Corinthian oecus there was a row of columns on each side, virtually therefore dividing the room into nave and aisles, the former being covered over with a barrel vault. The Egyptian oecus had a similar plan, but the aisles were of less height, so that clerestory windows were introduced to light the room, which, as Vitruvius states, presents more the appearance of a basilica than of a triclinium. Vitruvius distinguishes four types of oecus: Tetrastylos: with four columns; Corinthian: with a row of columns supporting an architrave topped with a cornice and a vaulted ceiling; Egyptian: particularly magnificent form of the oecus, with columns running all around, which support a gallery also provided with columns; Cyzicene (κυζίκηνοι from Cyzicus, an ancient city in Mysia): a very spacious, north-facing garden oecus common among the Greeks. See also House of the Faun House of the Vettii References Ancient Roman architecture Rooms
Oecus
[ "Engineering" ]
305
[ "Rooms", "Architecture" ]
1,521,654
https://en.wikipedia.org/wiki/New%20Frontier%20Hotel%20and%20Casino
The New Frontier (formerly Hotel Last Frontier and The Frontier) was a hotel and casino on the Las Vegas Strip in Paradise, Nevada. The property began as a casino and dance club known as Pair O' Dice, opened in 1931. It was sold in 1941, and incorporated into the Hotel Last Frontier, which began construction at the end of the year. The Hotel Last Frontier opened on October 30, 1942, as the second resort on the Las Vegas Strip. The western-themed property included 105 rooms, as well as the Little Church of the West. The resort was devised by R.E. Griffith and designed by his nephew, William J. Moore. Following Griffith's death in 1943, Moore took over ownership and added a western village in 1948. The village consisted of authentic Old West buildings from a collector and would also feature the newly built Silver Slipper casino, added in 1950. Resort ownership changed several times between different groups, beginning in 1951. A modernized expansion opened on April 4, 1955, as the New Frontier. It operated concurrently with the Last Frontier. Both were closed in 1965 and demolished a year later to make way for a new resort, which opened as the Frontier on July 29, 1967. Future casino mogul Steve Wynn was among investors in the ownership group, marking his entry into the Las Vegas gaming industry. The ownership group also included several individuals who had difficulty gaining approval from Nevada gaming regulators. Businessman Howard Hughes bought out the group at the end of 1967. Like his other casino properties, he owned the Frontier through Hughes Tool Company, and later through Summa Corporation. In 1988, Summa sold the Frontier to Margaret Elardi, and her two sons became co-owners a year later. A 16-story hotel tower was added in 1990. The Elardi family declined to renew a contract with the Culinary Workers Union, and 550 workers went on strike on September 21, 1991. It became one of the longest strikes in U.S. history. Businessman Phil Ruffin eventually purchased the Frontier for $167 million. The sale was finalized on February 1, 1998, when Ruffin renamed the property back to the New Frontier. The strike ended on the same day, as Ruffin agreed to a union contract. Ruffin launched a $20 million renovation to update the aging property. His changes included the addition of a new restaurant, Gilley's Saloon. Over the next decade, Ruffin considered several redevelopment projects for the site, but lack of financing hindered these plans. In May 2007, he agreed to sell the New Frontier to El Ad Properties for more than $1.2 billion. The resort closed on July 16, 2007, and demolition began later that year. The 16-story tower was imploded on November 13, 2007. It was the last of the Hughes-era casinos to be demolished. The 984-room property had been popular as a low-budget alternative to the larger resorts on the Strip. El Ad owned the Plaza Hotel in New York City and planned to replace the New Frontier with a Plaza-branded resort, but the project was canceled due to the Great Recession. Crown Resorts also scrapped plans to build the Alon Las Vegas resort. The site was purchased by Wynn Resorts in 2018, although plans to build the Wynn West resort were also shelved, and the land remains vacant. The property hosted numerous entertainers throughout its operation, including Wayne Newton and Robert Goulet. It hosted the Las Vegas debuts of Liberace in 1944, and Elvis Presley in 1956, and also hosted the final performance of Diana Ross & The Supremes in 1970. 
History A portion of the property began as a casino and dance club known as Pair O' Dice. It opened on July 4, 1931, and was remodeled and enlarged during its first year. It was originally owned by casino dealer Frank Detra. Businessman Guy McAfee took over club operations in 1939. He remodeled the property and renamed it the 91 Club, after its location on Highway 91, which would later become the Las Vegas Strip. He purchased the club later in 1939, for $10,000. Hotel Last Frontier (1942–65) McAfee sold the 91 Club in late 1941, to a group based in Arizona. R.E. “Griff” Griffith, the brother of film director D.W. Griffith, and owner of a movie theater chain in the southwestern U.S., paid $1,000 per acre for the 35-acre site. In addition to theaters, Griffith also owned the El Rancho Hotel & Motel in Gallup, New Mexico, and planned to expand it into a hotel chain. Griffith had originally planned to build his next hotel in Deming, New Mexico, before traveling to Las Vegas and realizing that it presented better opportunities. He intended to construct a western-themed hotel-casino resort on the newly purchased land. However, his initial name for the project was already in use by the El Rancho Vegas, which opened in 1941 as the first resort on the Las Vegas Strip. Instead, Griffith named his property the Hotel Last Frontier, while maintaining the western theme. Griffith hired architect William J. Moore, his nephew, to design the project, with emphasis on an authentic recreation of the Old West. Construction began on December 8, 1941, taking place around the 91 Club, which was incorporated into the new project as the Leo Carrillo Bar. It was named after Griffith's friend, entertainer Leo Carrillo. Building materials were difficult to acquire, due to a supply shortage caused by World War II. Moore purchased one or two abandoned mines in Pioche, Nevada, and sent crews to strip the sites of any usable materials. Moore also purchased two ranches in Moapa, Nevada, to supply meat and dairy for the resort. The Hotel Last Frontier opened on October 30, 1942. It was the second hotel-casino resort to open on the Las Vegas Strip. The motel was mostly two stories, with some rooms on a third floor. It included 105 rooms at its opening, and an additional 100 would be added later. To maintain cool temperatures, cold water was carried through pipes in the walls of each room, originating from tunnels beneath the property. Because Griffith and Moore were inexperienced in the gaming industry, they had the casino built at the rear of the property, not realizing that it should have been presented as the main attraction. The property included the Gay Nineties Bar, which had sat in the Arizona Club in Las Vegas, before being reassembled at the Last Frontier. The Frontier added the Little Church of the West in May 1943. The resort also included the El Corral Arena, used for rodeo events. Griffith died of a heart attack in November 1943, and Moore took over the property. Moore conceived an idea to add the western-themed Last Frontier Village. It opened in November 1948, initially with three buildings while others would be added later. The village ultimately included restaurants, bars, and shops. The Little Church of the West was also incorporated into the village. Located at the property's northern end, the village included authentic Old West buildings saved by Doby Doc, a collector in Elko, Nevada. He served as curator of the attraction. 
The village also featured some newly built replicas created by the resort, including a Texaco gas station designed by Zick & Sharp. It offered free showers and restrooms to attract motorists to the resort. The Silver Slipper casino was added to the village in 1950. The Last Frontier was sold in 1951, to a group led by McAfee. The new ownership included Jake Kozloff and Beldon Katleman, the latter of whom also owned the El Rancho Vegas. By 1954, Kozloff was the primary stockholder, and the ownership group now included Murray Randolph. New Frontier (1955–65) In June 1954, construction began on a $2 million expansion known as the New Frontier. The project included more rooms, new restaurants, and additional casino space. The Little Church of the West was relocated elsewhere on the property to make room for the new facilities. Later that year, Katleman sued several resort executives, including Kozloff, his brother William Kozloff, and Randolph. Katleman alleged that the trio had undisclosed partners invested in the resort, going against state law. He also alleged that the men began expansion of the resort without first obtaining a loan to cover the costs. The Nevada Tax Commission launched an investigation into the resort's hidden ownership. An opening celebration for the New Frontier was held on April 4, 1955. It served as a modernized expansion of the Hotel Last Frontier, which continued to operate under its original name. Singer Mario Lanza was scheduled to perform for the opening, but canceled at the last minute due to laryngitis, forcing the property to refund $20,000 in tickets. Jake Kozloff resigned as president and general manager a few weeks after the opening. He and Randolph sold their interest to a new investor group, which finalized their purchase in May 1955, after paying more than $1 million to creditors. Katleman had sought to prevent the sale, as the resort was heavily mortgaged under the new group's financial setup. Katleman had also gotten into a fist fight with Maury Friedman, a member of the group who was denied ownership by the tax commission. Friedman was approved for an ownership stake later in 1955, along with seven other new partners in the group. Katleman's 1954 suit against Kozloff and Randolph was settled a few months later. An expansion project was announced later in 1955. The adjacent Royal Nevada hotel-casino, located north of the Frontier, was taken over by the latter's ownership group in 1956. The Royal Nevada then briefly served as an annex to the New Frontier. Later that year, a new group took over operations and invested $301,000 into the New Frontier, which was struggling financially. The group included Vera Krupp, the estranged wife of Alfried Krupp von Bohlen und Halbach. Krupp oversaw operations with Louis Manchon, a swimming pool contractor. The previous group, including Friedman, returned to take over operations in early March 1957, after Krupp declined to invest any further in the struggling resort. Krupp alleged that stockholders had misled her on the monetary potential of the New Frontier. The property owed approximately $100,000 to creditors, not including back taxes sought by the U.S. government. Federal agents seized more than $1 million in assets from the property, which closed its facilities on March 18, 1957, with the exception of the hotel. The New Frontier later went into bankruptcy. Restaurant and bar operations eventually resumed. 
In mid-1958, a new operating group – led by Los Angeles shirt manufacturer Jack Barenfield – proposed a $400,000 investment to reopen the casino and operate it on a limited basis. The Nevada Gaming Control Board was skeptical that the group would have enough funds to keep the casino operational for long. Warren Bayley, one of the primary owners of the Hacienda resort, reached a deal to take over the New Frontier from Katleman and Friedman. The $6.5 million deal was finalized on October 1, 1958. The property was leased to Bayley, who agreed to pay off its debts. Actor Preston Foster served as vice president for Frontier Properties, Inc. The casino area reopened in April 1959. Two years later, Idaho banker and construction company owner Frank Wester sought to take over the property. Wester was approved by state gaming regulators, but failed to follow through on the deal. The Frontier (1967–98) Bayley became the primary owner of the New Frontier Hotel in November 1964. He died a month later, and the casino was closed on New Year's Eve, in preparation for an expansion. The hotel and other facilities closed a few days later, and the property never reopened. Bankers Life purchased Frontier Properties Inc. in August 1965, and leased it to a new company, Vegas Frontier Inc., overseen by Friedman. Six months later, Friedman announced plans to demolish the existing facilities entirely for a larger Frontier resort to be built on the site. The demolition process reached its final stage in May 1966. The western village was included in the demolition, although the Little Church of the West and the Silver Slipper casino were kept. Groundbreaking for a new Frontier hotel-casino took place on September 26, 1966, with Friedman set to oversee casino operations. The new project had more than a dozen investors, including future casino mogul Steve Wynn, who purchased a three-percent stake. The Frontier marked Wynn's entry into the Las Vegas gaming industry. It was later discovered that the Frontier project was financed with Detroit mob money, from a group led by Anthony Joseph Zerilli. The $25 million Frontier opened on July 29, 1967, with a four-day celebration. It included 650 hotel rooms, entertainment venues, several restaurants, and convention space. The project was designed by Rissman & Rissman. The Frontier's roadside sign had a height of 184 feet, making it the tallest in Las Vegas. The sign, along with the Frontier's new "F" logo, was designed by Bill Clark of Ad Art. The sign featured 16-foot-tall letters, with the giant "F" logo resting at the top. Several individuals in the new property, including Friedman, had difficulty gaining approval of state gaming regulators. Businessman Howard Hughes bought out the group in December 1967, paying $23 million for the Frontier. Like his other casino properties, it was originally operated through Hughes Tool Company, until Hughes' Summa Corporation took over in 1973. Hughes died three years later. A $5 million renovation concluded in 1978. Later that year, the Little Church of the West was relocated to the Hacienda resort, making room for the Fashion Show Mall to be built just south of the Frontier. In December 1987, Summa agreed to sell the Frontier and Silver Slipper – the last of Hughes' Las Vegas gaming properties – to casino owner Margaret Elardi. She took over ownership of the Frontier on June 30, 1988, and acquired the Silver Slipper later that year, demolishing the latter to add a Frontier parking lot. 
In December 1989, Elardi's two sons, John and Tom, became part-owners with her in the Frontier. The 16-story Atrium Tower, consisting of 400 suites, was opened a month later. Under the Elardis' ownership, the Frontier focused primarily on a low-budget clientele of slot players. It offered few amenities, at a time when new megaresorts were becoming popular on the Las Vegas Strip. Strike The Frontier had a labor agreement with the Culinary Workers Union that expired on July 1, 1989. Upon its expiration, general manager Tom Elardi said that the union presented the Frontier with two contract renewal choices, with no option to negotiate; he said the family would not have purchased the Frontier if they had known this would happen. Citing a reduction in salaries and worker benefits, 550 workers went on strike on September 21, 1991. Politicians such as Jesse Jackson expressed support for the strikers, who represented four unions, including Culinary. The strike ran continuously on the sidewalk in front of the resort, and striking workers were occasionally violent towards patrons who crossed the picket line. In April 1993, California tourist Sean White and his family were verbally and physically assaulted by the strikers. Seven union workers were charged in the incident, and the union itself settled with the Whites after they filed a lawsuit. Sean White also sued the Frontier, seeking damages for his injuries and alleging inadequate security at the resort. He claimed that the property was aware of the strikers being particularly agitated on the night of the incident, yet did nothing to resolve the situation. The Frontier countered that the Whites provoked the strikers. Furthermore, Tom Elardi said that guests were always warned about possible verbal abuse from the strikers when making hotel reservations. He also said that, according to the National Labor Relations Board (NLRB), it would be illegal to label the strikers as "violent". In addition, Elardi said that Frontier security did not have the authority to help guests on public property, where the incident took place. A jury eventually ruled in the Frontier's favor, finding it not liable for events that take place on public property. In late 1991, the Frontier ran controversial ads in the Los Angeles Times implying that the entire Strip was being targeted by the strike. The property eventually stopped running the ads after protests from other resorts. Business at the Frontier saw a 40-percent decrease during the first year of the strike. In 1993, Nevada governor Bob Miller appointed a fact finder to help resolve the strike, although these efforts failed after 28 meetings. Miller later called the Frontier an embarrassment to the state for its refusal to end the strike. Margaret Elardi wanted to settle with the union and end the strike, but her sons opposed the idea. Numerous complaints against the Frontier were filed with the NLRB. In 1995, a federal court ruled that the resort had to pay back work-related benefits that it had cut off to striking workers. The NLRB later ruled in favor of the union, agreeing with the 1995 ruling and calling the dispute an unfair labor practice strike. Negotiations between the Culinary union and the Elardis took place in July 1996, but ended without a resolution, in part because Tom Elardi refused a Culinary mandate to rehire all of the striking workers: "I believe the ones who've been violent or who participated in major picket line misconduct shouldn't come back. 
The union says that's the only way they will settle, but I absolutely refuse to take them back". Arthur Goldberg, chairman of Bally Entertainment, announced in July 1996 that there was interest in purchasing the Frontier and ending the strike. At the time, Hilton Hotels Corporation was in the process of acquiring Bally. Goldberg was willing to purchase the Frontier himself if Hilton should pass on it. His plan would potentially include demolishing all or part of the Frontier to make way for a 3,000-room resort. Wynn and casino rival Donald Trump were also rumored to have an interest in buying the Frontier. Trump passed on the property, as he found Elardi's $208 million asking price too high. Hilton and Goldberg also did not proceed with a purchase, and the strike continued. Allegations In late 1996, a former Frontier worker alleged that the Elardis ran a technologically advanced spy operation to monitor the strike. It was also used to monitor Frontier security guards, as well as officers of the Las Vegas Metropolitan Police Department whenever they came to view video footage of the strike. The operation allegedly included security cameras and listening devices, operated from a second-floor headquarters known as the 900 Room that was overseen by 15 people. The worker also said that the resort routinely sabotaged the strike, for instance by turning on nearby sprinklers or placing manure bags near a catering truck. Tom Elardi called the worker disgruntled. He said the 900 Room functioned only to monitor and maintain the exterior during the strike, denying that any sabotage had taken place. Other former workers came forward to confirm the spying allegation, stating that there was a high level of paranoia relating to the strike. Some workers said that the Frontier had tapped its office phones to monitor conversations, allegations which led to an FBI investigation. Concerned that strikers might stay at the hotel to gain information, Frontier officials also had recording devices planted in certain guest rooms which were to be occupied only by confirmed members of the strike, allowing the hotel to spy on them. The spying operation allegedly went beyond the resort, as some workers said they were tasked with following strikers around. Others collected garbage from the Culinary headquarters in hopes of gaining incriminating information. After the allegations came to light, strikers filed 75 criminal complaints against the Frontier, and the Nevada Gaming Control Board opened an investigation. Meanwhile, the AFL–CIO launched a campaign to raise awareness about the strike, with president John Sweeney calling the Frontier "one of the biggest corporate criminals" in American history. The AFL-CIO also opened a committee investigation into the strike. John Elardi later admitted that the 900 Room was used for spying, stating that he created it in 1992, without first consulting Margaret or Tom Elardi. He also acknowledged using sprinklers on the strikers, after police stopped responding to the resort's calls about trespassing picketers. Resolution In October 1997, businessman Phil Ruffin reached an agreement to buy the Frontier from the Elardis for $167 million. He also agreed to sign a contract with the union, putting an end to the strike. Ruffin's application for a gaming license was fast-tracked to expedite the sale and end the strike sooner. 
Prior to the announcement of Ruffin's purchase, the Nevada Gaming Control Board was prepared to file a complaint revoking the Frontier's gaming license, due to the property's conduct during the strike. Ruffin completed his purchase on February 1, 1998, ending the 2,325-day strike. It was among the longest strikes in U.S. history, and the Culinary union had spent $26 million on it. Approximately 300 of the 550 striking workers returned to their jobs. Striking employees received a total of nearly $5 million in back-pay and trust fund contributions. On the day of the purchase, a celebration event was held at the resort, and was attended by 3,000 people. New Frontier (1998–2007) Upon taking ownership, Ruffin renamed the property back to the New Frontier. It had 986 rooms and a casino, and catered to a middle-class clientele. The resort had become outdated during the strike, and lacked basic features such as fulltime room service and a 24-hour coffee shop. Profits improved following a $20 million renovation project, which included new restaurants and a remodeled sportsbook. Gilley's Saloon, a country western restaurant, was among the additions. It included a mechanical bull, a dance hall, and live music. The saloon opened in December 1998. Ruffin got the idea for the restaurant after seeing the 1980 film Urban Cowboy, which had featured the Gilley's Club in Texas, along with its mechanical bull. Ruffin subsequently partnered with country singer Mickey Gilley to open the saloon, inspired by the original club. Gilley's later offered bikini bull-riding and mud wrestling. Ruffin intended to rebrand the hotel as a Radisson, and renovated the guest rooms to bring them up to standard. However, in 1999, he decided against this idea as he now had other plans for the property. In January 2000, Ruffin announced plans to demolish the New Frontier in five or six months to make way for a new casino resort, scheduled to open in 2002. The new project, known as City by the Bay, would include a San Francisco theme and more than 2,500 rooms. Ruffin said the new resort was necessary to stay competitive on the Las Vegas Strip. The project would cost up to $700 million. He put his redevelopment plans on hold in May 2000, because of difficulty raising the necessary funds. Ruffin said the project would eventually proceed. The New Frontier continued operations in the meantime, and remained profitable. In 2002, Ruffin partnered with Trump to build Trump International Hotel Las Vegas. It was constructed on the Frontier property's southwest corner, taking up part of a rear parking lot. Meanwhile, Ruffin still had difficulty acquiring funds to build City by the Bay, and his plans evolved several times over the years. At one point, Ruffin considered a Trump-branded resort to replace the New Frontier. In 2003, Ruffin was in discussions with several casino operators about a possible joint venture for a new resort on the Frontier site. At the end of 2004, he said he would redevelop the New Frontier site on his own, stating that he had turned down a dozen offers from potential partners. By 2006, Ruffin's unnamed resort project was planned to include a 485-foot Ferris wheel. Later that year, Ruffin announced that the new casino resort would be named Montreux, after the Swiss town of the same name. The $2 billion resort would include 2,750 rooms. However, by March 2007, Ruffin was in negotiations to sell the New Frontier to El Ad Properties, which owned the Plaza Hotel in New York City. 
A sale agreement was announced two months later, with El Ad paying approximately $35 million per acre for the 35-acre site. At more than $1.2 billion, it was the most expensive real estate transaction on the Strip. El Ad planned to demolish the New Frontier and build a $5 billion Plaza-branded resort in its place. The New Frontier closed on July 16, 2007, at 12:01 a.m. The closing was a low-key event. At the time, the New Frontier operated the last remaining bingo room on the Strip, and was one of the few remaining casinos to still use coin-operated slot machines. El Ad completed its purchase three weeks after the closure. The 984-room New Frontier had remained popular as a low-budget alternative to larger resorts nearby. However, it lacked the same popularity as previous resorts such as the Sands, Stardust, and Desert Inn. In 2006, readers of the Las Vegas Review-Journal voted it "Hotel Most Deserving of Being Imploded". Wynn, who now owned the Wynn Las Vegas resort across the street, called the aging Frontier "the single biggest toilet in Las Vegas". The New Frontier was the last of the Hughes-era casinos to be demolished. After a five-minute fireworks show, the 16-story Atrium Tower was imploded on November 13, 2007, at 2:37 a.m., before thousands of spectators who turned out to view the demolition. The tower was imploded by Controlled Demolition, Inc., which had worked on other Las Vegas hotel implosions. The interior was stripped down to allow for the insertion of dynamite, totaling 1,040 pounds spread across 6,200 different areas of the tower. The implosion left a four-story pile of concrete, glass and steel remains. Two low-rise hotel wings were demolished with the use of an excavator, although the discovery of asbestos slowed the process down. The roadside sign was left up until December 2008, when Wynn requested that it be taken down ahead of the opening for Encore Las Vegas, an addition to his Wynn property. The city's Neon Museum sought to save portions of the sign. Redevelopment proposals Following the closure of the New Frontier, there were multiple redevelopment proposals. The Plaza project failed to materialize, due to financial problems brought on by the Great Recession. Wynn offered to beautify the vacant site with landscaping, and was also approached by El Ad several times to take over the land and develop it. However, he declined as he considered such a project too much of a financial risk. Wynn blamed what he saw as anti-business policies of U.S. president Barack Obama, and a challenging level of debt as a consequence of El Ad having paid what proved too high a price for the property. In 2014, Crown Resorts purchased the property for $280 million and partnered with Oaktree Capital Management. A year later, they announced plans to build a casino resort known as Alon Las Vegas. However, Crown Resorts pulled out of the project in 2016, and it was eventually canceled. Wynn Resorts bought the land and four adjacent acres in early 2018, for $336 million. The company announced plans to build Wynn West, a new casino resort to complement the existing Wynn and Encore properties. Steve Wynn, amid sexual assault allegations against him, resigned from his company shortly after the announcement. Matt Maddox took over as CEO, and plans for Wynn West were shelved. In 2024, the county extended permits for the site, giving Wynn until April 2026 to begin construction on an unnamed resort expansion. 
The project would include additional casino space and a hotel tower with 1,100 rooms. Entertainment The Hotel Last Frontier opened with an entertainment venue known as the Ramona Room. Liberace made his Las Vegas debut at the showroom in 1944. The Mary Kaye Trio performed at the Hotel Last Frontier for approximately three years, starting in 1950. The Ramona Room had already been booked by other acts over the next six months, so a stage was added to a bar area for the trio to perform. They became the first lounge act to perform in Las Vegas, popularizing the concept. The New Frontier addition in 1955 included a restaurant and showroom known as the Venus Room. A new Venus Room, with seating for 800, opened with the rebuilt Frontier in 1967. The new resort also included the 400-seat Post Time Theater. Elvis Presley made his Las Vegas debut at the New Frontier in 1956, but was poorly received. In the late 1950s, the New Frontier offered Holiday in Japan, a variety show featuring 60 performers from Tokyo. Ronald Reagan entertained at the resort in the 1950s, as did Wayne Newton in the 1960s and 1970s. Other entertainers included Robert Goulet, Jimmy Durante, George Carlin, Ray Anthony, and Phil Harris. Diana Ross & The Supremes gave their final performance in 1970, at the Frontier. Their performance was recorded for the album Farewell. In the early 1970s, the Frontier hosted the Miss Rodeo America pageant. Siegfried & Roy performed in Beyond Belief, a magic show that opened in 1981. It ran for 3,538 performances over a period of nearly seven years. When the Elardi family took over ownership in the late 1980s, they closed the showroom. After years without live entertainment, Ruffin added a 284-seat venue in 2000. One new show, Legends of Comedy, featured entertainers who impersonated comedians such as Rodney Dangerfield, Jay Leno, and Roseanne Barr. In 2001, the New Frontier launched Rock 'n' Roll Legends, featuring impersonator singers. Numerous other shows ran at the resort in the 2000s, including a magic act, the Thunder From Down Under male revue, and a Frank Sinatra tribute show. Female impersonator Kenny Kerr also had a musical dance show at the property. References External links Official website, archived via the Wayback Machine New Frontier Implosion Video—the implosion starts at 1:50 New Frontier photo from November 3, 2007 Las Vegas Casino Demolition: Blowdown Documentary 1942 establishments in Nevada 1967 establishments in Nevada 2007 disestablishments in Nevada Buildings and structures demolished by controlled implosion Buildings and structures demolished in 2007 Casino hotels Casinos completed in 1942 Casinos completed in 1967 Casinos in Paradise, Nevada Defunct casinos in the Las Vegas Valley Defunct hotels in the Las Vegas Valley Demolished hotels in Clark County, Nevada Hotel buildings completed in 1942 Hotel buildings completed in 1967 Hotels established in 1942 Hotels established in 1967 Landmarks in Nevada Las Vegas Strip
New Frontier Hotel and Casino
[ "Engineering" ]
6,322
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
1,521,726
https://en.wikipedia.org/wiki/Superquadrics
In mathematics, the superquadrics or super-quadrics (also superquadratics) are a family of geometric shapes defined by formulas that resemble those of ellipsoids and other quadrics, except that the squaring operations are replaced by arbitrary powers. They can be seen as the three-dimensional relatives of the superellipses. The term may refer to the solid object or to its surface, depending on the context. The equations below specify the surface; the solid is specified by replacing the equality signs by less-than-or-equal signs. The superquadrics include many shapes that resemble cubes, octahedra, cylinders, lozenges and spindles, with rounded or sharp corners. Because of their flexibility and relative simplicity, they are popular geometric modeling tools, especially in computer graphics. They have become an important geometric primitive, widely used in computer vision, robotics, and physical simulation. Some authors, such as Alan Barr, define "superquadrics" as including both the superellipsoids and the supertoroids. In the modern computer vision literature, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics. Comprehensive coverage of the geometrical properties of superquadrics, and of methods for recovering them from range images and point clouds, can be found in the computer vision literature. Formulas Implicit equation The surface of the basic superquadric is given by $|x|^r + |y|^s + |z|^t = 1,$ where r, s, and t are positive real numbers that determine the main features of the superquadric. Namely, when all three exponents are: less than 1: a pointy octahedron modified to have concave faces and sharp edges. exactly 1: a regular octahedron. between 1 and 2: an octahedron modified to have convex faces, blunt edges and blunt corners. exactly 2: a sphere greater than 2: a cube modified to have rounded edges and corners. infinite (in the limit): a cube Each exponent can be varied independently to obtain combined shapes. For example, if r=s=2, and t=4, one obtains a solid of revolution which resembles an ellipsoid with round cross-section but flattened ends. This formula is a special case of the superellipsoid's formula if (and only if) r = s. If any exponent is allowed to be negative, the shape extends to infinity. Such shapes are sometimes called super-hyperboloids. The basic shape above spans from -1 to +1 along each coordinate axis. The general superquadric is the result of scaling this basic shape by different amounts A, B, C along each axis. Its general equation is $\left|\frac{x}{A}\right|^r + \left|\frac{y}{B}\right|^s + \left|\frac{z}{C}\right|^t = 1.$ Parametric description Parametric equations in terms of surface parameters u and v (equivalent to longitude and latitude if m equals 2) are $x(u,v) = A\, c\!\left(v, \tfrac{2}{r}\right) c\!\left(u, \tfrac{2}{r}\right), \quad y(u,v) = B\, c\!\left(v, \tfrac{2}{s}\right) s\!\left(u, \tfrac{2}{s}\right), \quad z(u,v) = C\, s\!\left(v, \tfrac{2}{t}\right),$ for $-\pi/2 \le v \le \pi/2$ and $-\pi \le u < \pi$, where the auxiliary functions are $c(\omega, m) = \operatorname{sgn}(\cos\omega)\,|\cos\omega|^{m}$ and $s(\omega, m) = \operatorname{sgn}(\sin\omega)\,|\sin\omega|^{m},$ and the sign function sgn(x) is $-1$ for $x < 0$, $0$ for $x = 0$, and $+1$ for $x > 0$. Spherical product Barr introduces the spherical product which given two plane curves produces a 3D surface. If $f(\mu) = \big(f_1(\mu), f_2(\mu)\big)$ and $g(\nu) = \big(g_1(\nu), g_2(\nu)\big)$ are two plane curves then the spherical product is $f(\mu) \otimes g(\nu) = \big(f_1(\mu)\,g_1(\nu),\; f_1(\mu)\,g_2(\nu),\; f_2(\mu)\big).$ This is similar to the typical parametric equation of a sphere, $x = \cos\mu\cos\nu,\; y = \cos\mu\sin\nu,\; z = \sin\mu,$ which gives rise to the name spherical product. Barr uses the spherical product to define quadric surfaces, like ellipsoids, and hyperboloids as well as the torus, superellipsoid, superquadric hyperboloids of one and two sheets, and supertoroids. 
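As an illustration of the spherical product (a sketch in the notation of the auxiliary functions c and s defined above, not Barr's original symbols), the superellipsoid with a north–south exponent $\varepsilon_1$ and an east–west exponent $\varepsilon_2$ arises as the spherical product of two superellipses:
\[
f(\mu) = \begin{pmatrix} c(\mu, \varepsilon_1) \\ s(\mu, \varepsilon_1) \end{pmatrix}, \qquad
g(\nu) = \begin{pmatrix} c(\nu, \varepsilon_2) \\ s(\nu, \varepsilon_2) \end{pmatrix}, \qquad
f(\mu) \otimes g(\nu) = \begin{pmatrix} c(\mu,\varepsilon_1)\, c(\nu,\varepsilon_2) \\ c(\mu,\varepsilon_1)\, s(\nu,\varepsilon_2) \\ s(\mu,\varepsilon_1) \end{pmatrix},
\]
with $-\pi/2 \le \mu \le \pi/2$ and $-\pi \le \nu < \pi$. Setting $\varepsilon_1 = \varepsilon_2 = 1$ recovers the ordinary sphere, which is what motivates the name.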
Plotting code The following GNU Octave code generates a mesh approximation of a superquadric:

function superquadric(epsilon, a)
  % epsilon: three exponents of the parametric form; a: three semi-axis lengths
  n = 50;                              % number of grid intervals in each parameter
  etamax = pi/2;  etamin = -pi/2;      % latitude-like parameter eta
  wmax = pi;      wmin = -pi;          % longitude-like parameter w
  deta = (etamax - etamin)/n;
  dw = (wmax - wmin)/n;
  [i, j] = meshgrid(1:n+1, 1:n+1);
  eta = etamin + (i-1) * deta;
  w = wmin + (j-1) * dw;
  % signed-power (superquadric) parametrization of the surface
  x = a(1) .* sign(cos(eta)) .* abs(cos(eta)).^epsilon(1) .* sign(cos(w)) .* abs(cos(w)).^epsilon(1);
  y = a(2) .* sign(cos(eta)) .* abs(cos(eta)).^epsilon(2) .* sign(sin(w)) .* abs(sin(w)).^epsilon(2);
  z = a(3) .* sign(sin(eta)) .* abs(sin(eta)).^epsilon(3);
  mesh(x, y, z);
end

See also Superegg Superellipsoid Ellipsoid References External links Bibliography: SuperQuadric Representations Superquadric Tensor Glyphs SuperQuadric Ellipsoids and Toroids, OpenGL Lighting, and Timing Superquadrics by Robert Kragler, The Wolfram Demonstrations Project. Superquadrics in Python Superquadrics recovery algorithm in Python and MATLAB Computer graphics Computer vision Geometry Geometry in computer vision Robotics engineering
Superquadrics
[ "Mathematics", "Technology", "Engineering" ]
1,087
[ "Computer engineering", "Robotics engineering", "Packaging machinery", "Geometry", "Artificial intelligence engineering", "Geometry in computer vision", "Computer vision" ]
1,521,879
https://en.wikipedia.org/wiki/Caraval
The caraval (also called a cara-serval) is the hybrid cross between a male caracal and a female serval. They have a spotted pattern similar to the serval, but on a darker background. A servical is the cross between a male serval and a female caracal. A litter of servicals occurred by accident when the two animals were kept in the same enclosure at the Los Angeles Zoo. The hybrids were given to an animal shelter. The only photos show them as tawny kittens. Several serval–caracal offspring have been born in recent years through a breeding program in order to create F1 hybrids. So far, no F1 hybrids of this crossbreed are in existence. References Felid hybrids Intergeneric hybrids
Caraval
[ "Biology" ]
154
[ "Intergeneric hybrids", "Hybrid organisms" ]
1,521,971
https://en.wikipedia.org/wiki/Ptolemy%27s%20theorem
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy. If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that: $AC \cdot BD = AB \cdot CD + BC \cdot AD.$ This relation may be verbally expressed as follows: If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides. Moreover, the converse of Ptolemy's theorem is also true: In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle i.e. it is a cyclic quadrilateral. Corollaries on inscribed polygons Equilateral triangle Ptolemy's Theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle. Given An equilateral triangle inscribed on a circle and a point on the circle. The distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices. Proof: Follows immediately from Ptolemy's theorem: for a point P on the arc BC of the circumscribed circle of equilateral triangle ABC, $PA \cdot BC = PB \cdot CA + PC \cdot AB$, and since $AB = BC = CA$ this reduces to $PA = PB + PC$. Square Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to $a$ then the length of the diagonal is equal to $a\sqrt{2}$ according to the Pythagorean theorem, and Ptolemy's relation obviously holds. Rectangle More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of the diagonals is then $d^2$, the right hand side of Ptolemy's relation is the sum $a^2 + b^2$. Copernicus – who used Ptolemy's theorem extensively in his trigonometrical work – refers to this result as a 'Porism' or self-evident corollary: Furthermore it is clear (manifestum est) that when the chord subtending an arc has been given, that chord too can be found which subtends the rest of the semicircle. Pentagon A more interesting example is the relation between the length a of the side and the (common) length b of the 5 chords in a regular pentagon. Applying Ptolemy's theorem to the quadrilateral formed by four of the pentagon's vertices gives $b^2 = a^2 + ab$; by completing the square, the relation yields the golden ratio: $\frac{b}{a} = \frac{1+\sqrt{5}}{2} = \varphi.$ Side of decagon If now diameter AF is drawn bisecting DC so that DF and CF are sides c of an inscribed decagon, Ptolemy's Theorem can again be applied – this time to cyclic quadrilateral ADFC with diameter d as one of its diagonals: $ad = 2bc$, so that, using $b = \varphi a$, $c = \frac{d}{2\varphi},$ where $\varphi$ is the golden ratio, whence the side of the inscribed decagon is obtained in terms of the circle diameter. Pythagoras's theorem applied to right triangle AFD then yields $b^2 = d^2 - c^2$, i.e. "b" in terms of the diameter, and "a" the side of the pentagon is thereafter calculated as $a = \frac{b}{\varphi}$. As Copernicus (following Ptolemy) wrote, "The diameter of a circle being given, the sides of the triangle, tetragon, pentagon, hexagon and decagon, which the same circle circumscribes, are also given." 
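As a quick numerical check (an illustration of the above, not part of the original derivations), take a circle of unit diameter, $d = 1$. Then the side of the inscribed decagon is
\[
c = \frac{d}{2\varphi} = \frac{1}{1+\sqrt{5}} \approx 0.309 = \sin 18^\circ,
\]
the diagonal of the pentagon is $b = \sqrt{d^2 - c^2} \approx 0.951 = \sin 72^\circ$, and the side of the pentagon is $a = b/\varphi \approx 0.588 = \sin 36^\circ$ — precisely the chord lengths (the sines of half the corresponding central angles) that Ptolemy needed for his table of chords.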
On the chord BC, the inscribed angles ∠BAC = ∠BDC, and on AB, ∠ADB = ∠ACB. Construct K on AC such that ∠ABK = ∠CBD; since ∠ABK + ∠CBK = ∠ABC = ∠CBD + ∠ABD, ∠CBK = ∠ABD. Now, by common angles △ABK is similar to △DBC, and likewise △ABD is similar to △KBC. Thus AK/AB = CD/BD, and CK/BC = DA/BD; equivalently, AK⋅BD = AB⋅CD, and CK⋅BD = BC⋅DA. By adding two equalities we have AK⋅BD + CK⋅BD = AB⋅CD + BC⋅DA, and factorizing this gives (AK+CK)·BD = AB⋅CD + BC⋅DA. But AK+CK = AC, so AC⋅BD = AB⋅CD + BC⋅DA, Q.E.D. The proof as written is only valid for simple cyclic quadrilaterals. If the quadrilateral is self-crossing then K will be located outside the line segment AC. But in this case, AK−CK = ±AC, giving the expected result. Proof by trigonometric identities Let the inscribed angles subtended by , and be, respectively, , and , and the radius of the circle be , then we have , , , , and , and the original equality to be proved is transformed to from which the factor has disappeared by dividing both sides of the equation by it. Now by using the sum formulae, and , it is trivial to show that both sides of the above equation are equal to Q.E.D. Here is another, perhaps more transparent, proof using rudimentary trigonometry. Define a new quadrilateral inscribed in the same circle, where are the same as in , and located at a new point on the same circle, defined by , . (Picture triangle flipped, so that vertex moves to vertex and vertex moves to vertex . Vertex will now be located at a new point D’ on the circle.) Then, has the same edges lengths, and consequently the same inscribed angles subtended by the corresponding edges, as , only in a different order. That is, , and , for, respectively, and . Also, and have the same area. Then, Q.E.D. Proof by inversion Choose an auxiliary circle of radius centered at D with respect to which the circumcircle of ABCD is inverted into a line (see figure). Then Then and can be expressed as , and respectively. Multiplying each term by and using yields Ptolemy's equality. Q.E.D. Note that if the quadrilateral is not cyclic then A', B' and C' form a triangle and hence A'B'+B'C' > A'C', giving us a very simple proof of Ptolemy's Inequality which is presented below. Proof using complex numbers Embed ABCD in the complex plane by identifying as four distinct complex numbers . Define the cross-ratio . Then with equality if and only if the cross-ratio is a positive real number. This proves Ptolemy's inequality generally, as it remains only to show that lie consecutively arranged on a circle (possibly of infinite radius, i.e. a line) in if and only if . From the polar form of a complex number , it follows with the last equality holding if and only if ABCD is cyclic, since a quadrilateral is cyclic if and only if opposite angles sum to . Q.E.D. Note that this proof is equivalently made by observing that the cyclicity of ABCD, i.e. the supplementarity and , is equivalent to the condition ; in particular there is a rotation of in which this is 0 (i.e. all three products are positive real numbers), and by which Ptolemy's theorem is then directly established from the simple algebraic identity Corollaries In the case of a circle of unit diameter the sides of any cyclic quadrilateral ABCD are numerically equal to the sines of the angles and which they subtend. Similarly the diagonals are equal to the sine of the sum of whichever pair of angles they subtend. 
We may then write Ptolemy's Theorem in the following trigonometric form: Applying certain conditions to the subtended angles and it is possible to derive a number of important corollaries using the above as our starting point. In what follows it is important to bear in mind that the sum of angles . Corollary 1. Pythagoras's theorem Let and . Then (since opposite angles of a cyclic quadrilateral are supplementary). Then: Corollary 2. The law of cosines Let . The rectangle of corollary 1 is now a symmetrical trapezium with equal diagonals and a pair of equal sides. The parallel sides differ in length by units where: It will be easier in this case to revert to the standard statement of Ptolemy's theorem: The cosine rule for triangle ABC. Corollary 3. Compound angle sine (+) Let Then Therefore, Formula for compound angle sine (+). Corollary 4. Compound angle sine (−) Let . Then . Hence, Formula for compound angle sine (−). This derivation corresponds to the Third Theorem as chronicled by Copernicus following Ptolemy in Almagest. In particular if the sides of a pentagon (subtending 36° at the circumference) and of a hexagon (subtending 30° at the circumference) are given, a chord subtending 6° may be calculated. This was a critical step in the ancient method of calculating tables of chords. Corollary 5. Compound angle cosine (+) This corollary is the core of the Fifth Theorem as chronicled by Copernicus following Ptolemy in Almagest. Let . Then . Hence Formula for compound angle cosine (+) Despite lacking the dexterity of our modern trigonometric notation, it should be clear from the above corollaries that in Ptolemy's theorem (or more simply the Second Theorem) the ancient world had at its disposal an extremely flexible and powerful trigonometric tool which enabled the cognoscenti of those times to draw up accurate tables of chords (corresponding to tables of sines) and to use these in their attempts to understand and map the cosmos as they saw it. Since tables of chords were drawn up by Hipparchus three centuries before Ptolemy, we must assume he knew of the 'Second Theorem' and its derivatives. Following the trail of ancient astronomers, history records the star catalogue of Timocharis of Alexandria. If, as seems likely, the compilation of such catalogues required an understanding of the 'Second Theorem' then the true origins of the latter disappear thereafter into the mists of antiquity but it cannot be unreasonable to presume that the astronomers, architects and construction engineers of ancient Egypt may have had some knowledge of it. Ptolemy's inequality The equation in Ptolemy's theorem is never true with non-cyclic quadrilaterals. Ptolemy's inequality is an extension of this fact, and it is a more general form of Ptolemy's theorem. It states that, given a quadrilateral ABCD, then where equality holds if and only if the quadrilateral is cyclic. This special case is equivalent to Ptolemy's theorem. Related theorem about the ratio of the diagonals Ptolemy's theorem gives the product of the diagonals (of a cyclic quadrilateral) knowing the sides, the following theorem yields the same for the ratio of the diagonals. Proof: It is known that the area of a triangle inscribed in a circle of radius is: Writing the area of the quadrilateral as sum of two triangles sharing the same circumscribing circle, we obtain two relations for each decomposition. Equating, we obtain the announced formula. 
Consequence: Knowing both the product and the ratio of the diagonals, we deduce their immediate expressions: See also Casey's theorem Greek mathematics Notes References Coxeter, H. S. M. and S. L. Greitzer (1967) "Ptolemy's Theorem and its Extensions." §2.6 in Geometry Revisited, Mathematical Association of America pp. 42–43. Copernicus (1543) De Revolutionibus Orbium Coelestium, English translation found in On the Shoulders of Giants (2002) edited by Stephen Hawking, Penguin Books Amarasinghe, G. W. I. S. (2013) A Concise Elementary Proof for the Ptolemy's Theorem, Global Journal of Advanced Research on Classical and Modern Geometries (GJARCMG) 2(1): 20–25 (pdf). External links Proof of Ptolemy's Theorem for Cyclic Quadrilateral MathPages – On Ptolemy's Theorem Ptolemy's Theorem at cut-the-knot Compound angle proof at cut-the-knot Ptolemy's Theorem on PlanetMath Ptolemy Inequality on MathWorld De Revolutionibus Orbium Coelestium at Harvard. Deep Secrets: The Great Pyramid, the Golden Ratio and the Royal Cubit Ptolemy's Theorem by Jay Warendorff, The Wolfram Demonstrations Project. Book XIII of Euclid's Elements A Miraculous Proof (Ptolemy's Theorem) by Zvezdelina Stankova, on Numberphile. Theorems about quadrilaterals and circles Theorem Articles containing proofs Euclidean plane geometry
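As a quick numerical sanity check of the relation above, the short Python sketch below places four points on a unit circle and compares the product of the diagonals with the sum of the products of the opposite sides. The angles used are arbitrary illustrative values, not figures from the article.

import math

def chord(theta1, theta2, r=1.0):
    # Length of the chord joining two points on a circle of radius r,
    # located at angles theta1 and theta2 (in radians).
    return 2 * r * math.sin(abs(theta2 - theta1) / 2)

# Four points A, B, C, D taken in order around a unit circle.
tA, tB, tC, tD = 0.0, 0.9, 2.2, 4.0

AB, BC, CD, DA = chord(tA, tB), chord(tB, tC), chord(tC, tD), chord(tD, tA)
AC, BD = chord(tA, tC), chord(tB, tD)

lhs = AC * BD            # product of the diagonals
rhs = AB * CD + BC * DA  # sum of products of opposite sides
print(lhs, rhs)          # the two values agree up to floating-point rounding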
Ptolemy's theorem
[ "Mathematics" ]
2,775
[ "Articles containing proofs", "Planes (geometry)", "Euclidean plane geometry" ]
1,522,020
https://en.wikipedia.org/wiki/HALCA
HALCA (Highly Advanced Laboratory for Communications and Astronomy), also known by its project name VSOP (VLBI Space Observatory Programme), the code name MUSES-B (for the second of the Mu Space Engineering Spacecraft series), or just Haruka, was a Japanese 8 meter diameter radio telescope satellite used for Very Long Baseline Interferometry (VLBI). It was the first space-borne mission dedicated to VLBI. History It was placed in a highly elliptical orbit with an apogee altitude of 21,400 km and a perigee altitude of 560 km, with an orbital period of approximately 6.3 hours. This orbit allowed imaging of celestial radio sources by the satellite in conjunction with an array of ground-based radio telescopes, such that both good (u,v) plane coverage and very high resolution were obtained. Although the satellite was designed to observe in three frequency bands (1.6 GHz, 5.0 GHz, and 22 GHz), the sensitivity of the 22 GHz band was found to have severely degraded after orbital deployment, probably because of vibrational deformation of the dish shape at launch, thus limiting observations to the 1.6 GHz and 5.0 GHz bands. HALCA was launched in February 1997 from Kagoshima Space Center, and made its final VSOP observations in October 2003, far exceeding its 3-year predicted lifespan, before the loss of attitude control. All operations were officially ended in November 2005. A follow-up mission, ASTRO-G (VSOP-2), was planned with a proposed launch date of 2012, but the project was eventually cancelled in 2011 due to increasing costs and the difficulties of achieving its science goals. It was expected to achieve resolutions up to ten times higher and up to ten times greater sensitivity than its predecessor HALCA. The cancellation of ASTRO-G left the Russian Spektr-R mission as the only then operational space VLBI facility. Spektr-R stopped operating in 2019. Antenna The large 8 meter antenna was designed to unfold in space, as the unfolded configuration did not fit inside the rocket fairing. The antenna was a metal mesh of 6,000 cables. To form an ideal shape, the lengths of the cables were adjusted on the backside of the antenna. One concern was that the cables could entangle. The deployment of the main reflector started on February 27, 1997. The deployment was done over three hours on the first day and was completed in 20 minutes the next day. Highlights Observations of hydroxyl masers and pulsars at 1.6 GHz Detection of interference fringes for quasar PKS1519-273 between HALCA and terrestrial radio telescopes Routine imaging of quasars and radio galaxies by means of experimental VLBI observations with HALCA and terrestrial radio telescope networks Gallery References External links HALCA VSOP Radio telescopes Space telescopes Satellites of Japan Spacecraft launched in 1997
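The 6.3-hour orbital period quoted above is consistent with the stated apogee and perigee altitudes, as a quick check with Kepler's third law shows. The sketch below uses standard textbook values for the Earth's radius and gravitational parameter; these constants are assumptions, not figures from the article.

import math

MU_EARTH = 3.986e5   # Earth's gravitational parameter, km^3/s^2 (standard value)
R_EARTH = 6371.0     # mean Earth radius, km (standard value)

apogee_alt = 21400.0   # apogee altitude, km (from the article)
perigee_alt = 560.0    # perigee altitude, km (from the article)

# Semi-major axis: mean of the apogee and perigee distances from the Earth's centre.
a = (apogee_alt + perigee_alt) / 2 + R_EARTH

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)
period_hours = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 3600
print(period_hours)    # about 6.3, matching the quoted orbital period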
HALCA
[ "Astronomy" ]
592
[ "Space telescopes" ]
1,522,271
https://en.wikipedia.org/wiki/Epsilon%20Canis%20Majoris
Epsilon Canis Majoris is a binary star system and the second-brightest star in the constellation of Canis Major. Its name is a Bayer designation that is Latinised from ε Canis Majoris, and abbreviated Epsilon CMa or ε CMa. This is the 22nd-brightest star in the night sky, with an apparent magnitude of 1.50. About 4.7 million years ago, it was the brightest star in the night sky, with an apparent magnitude of −3.99. Based upon parallax measurements obtained during the Hipparcos mission, it is about 405 light-years distant. The two components are designated ε Canis Majoris A (officially named Adhara, the traditional name of the system) and B. Nomenclature ε Canis Majoris (Latinised to Epsilon Canis Majoris) is the binary system's Bayer designation. The designations of the two components as ε Canis Majoris A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). ε Canis Majoris bore the traditional name Adhara (sometimes spelled Adara, Adard, Udara or Udra), derived from the Arabic word عذارى 'aðāra', "virgins". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire star systems. It approved the name Adhara for the star ε Canis Majoris A on 21 August 2016 and it is now so included in the List of IAU-approved Star Names. In the 17th-century catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Aoul al Adzari (أول العذاري awwal al-adhara), which was translated into Latin as Prima Virginum, meaning First of the Virgins. Along with δ Canis Majoris (Wezen), η Canis Majoris (Aludra) and ο2 Canis Majoris (Thanih al Adzari), these stars were Al ʽAdhārā (العذاري), 'the Virgins'. In Chinese, the asterism known as the Bow and Arrow consists of ε Canis Majoris, δ Canis Majoris, η Canis Majoris, κ Canis Majoris, ο Puppis, π Puppis, χ Puppis, c Puppis and k Puppis, and ε Canis Majoris itself bears its own name within this asterism. Physical properties ε Canis Majoris is a binary star. The primary, ε Canis Majoris A, has an apparent magnitude of +1.5 and belongs to the spectral classification B2. Its color is blue or blueish-white, owing to its high surface temperature. It emits a total radiation equal to 38,700 times that of the Sun. This star is the brightest source of extreme ultraviolet in the night sky. It is the strongest source of photons capable of ionizing hydrogen atoms in interstellar gas near the Sun, and is very important in determining the ionization state of the Local Interstellar Cloud. Its rotation period is estimated to be about 5 days. The exact evolutionary status of the star is uncertain. Spectroscopically it has been given the class B2 II, with the luminosity class of II indicating that it is a bright giant, more luminous than a typical giant (luminosity class III). However, it appears less luminous than expected for this luminosity class, and is more likely of class B2 III-II. Two studies suggest it is still in the late main sequence, near the terminal-age main sequence (TAMS), rather than being a giant. One of these even suggested it could be the final product of a stellar merger. The +7.5-magnitude companion star, ε Canis Majoris B (absolute magnitude +1.9), lies at a position angle of 161° from the main star. 
Despite the relatively large angular distance, the components can only be resolved in large telescopes, since the primary is approximately 250 times brighter than its companion. A few million years ago, ε Canis Majoris was much closer to the Sun than it is at present, causing it to be a much brighter star in the night sky. About 4.4 million years ago, Adhara was only a few tens of light-years from the Sun, and was the brightest star in the sky with a magnitude of −3.99. No other star has attained this brightness since, nor will any other star attain this brightness for at least five million years. In culture USS Adhara (AK-71) was a U.S. Navy Crater-class cargo ship named after the star. ε Canis Majoris appears on the national flag of Brazil, symbolising the state of Tocantins. Notes References B-type bright giants Binary stars Canis Major Canis Majoris, Epsilon 2618 CD-28 03666 Canis Majoris, 21 052089 033579 Adhara
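How much closer Adhara must have been to appear at magnitude −3.99 can be estimated from the distance modulus, using only the figures quoted above (magnitude 1.50 at roughly 405 light-years today). This is a rough sketch that ignores interstellar extinction and any change in the star's intrinsic luminosity.

m_now = 1.50       # present apparent magnitude (from the article)
d_now_ly = 405.0   # present distance in light-years (from the article)
m_peak = -3.99     # apparent magnitude at its brightest (from the article)

# For a fixed luminosity, m_peak - m_now = 5 * log10(d_peak / d_now), so:
d_peak_ly = d_now_ly * 10 ** ((m_peak - m_now) / 5)
print(d_peak_ly)   # roughly 32 light-years under these assumptions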
Epsilon Canis Majoris
[ "Astronomy" ]
1,070
[ "Canis Major", "Constellations" ]
1,522,281
https://en.wikipedia.org/wiki/Gacrux
Gacrux is the third-brightest star in the southern constellation of Crux, the Southern Cross. It has the Bayer designation Gamma Crucis, which is Latinised from γ Crucis and abbreviated Gamma Cru or γ Cru. With an apparent visual magnitude of +1.63, it is the 26th brightest star in the night sky. A line from the two "Pointers", Alpha Centauri through Beta Centauri, leads to within 1° north of this star. Using parallax measurements made during the Hipparcos mission, it is located at a distance of from the Sun. It is the nearest M-type red giant star to the Sun. Nomenclature γ Crucis (Latinised to Gamma Crucis) is the star's Bayer designation. Gacrux is currently at roughly 60° south declination. It was known and visible to the ancient Greeks and Romans as it was visible north of 40° latitude because of the precession of equinoxes. Oddly, it lacked a traditional name. The astronomer Ptolemy counted it as part of the constellation of Centaurus. The historical name Gacrux was coined by astronomer Elijah Hinsdale Burritt (1794-1838). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Gacrux for this star. In Chinese astronomy, Gamma Crucis was known as (, .). The people of Aranda and Luritja tribe around Hermannsburg, Central Australia named Iritjinga, "The Eagle-hawk", a quadrangular arrangement comprising Gacrux, Delta Crucis (Imai), Gamma Centauri (Muhilfain) and Delta Centauri (Ma Wei). Among Portuguese-speaking peoples, especially in Brazil, it is also named Rubídea (or Ruby-like), in reference to its colour. Physical properties Gacrux has the MK system stellar classification of M3.5 III. It has evolved off of the main sequence to become a red giant star, but is most likely on the red giant branch rather than the asymptotic giant branch. Although only 50% more massive than the Sun, at this stage the star has expanded to 73 times the Sun's radius. It is radiating roughly 830 times the luminosity of the Sun from its expanded outer envelope. With an effective temperature of 3,689 K, the colour of Gacrux is a prominent reddish-orange, well in keeping with its spectral classification. It is a semi-regular variable with multiple periods. (See table at left.) The atmosphere of this star is enriched with barium, which is usually explained by the transfer of material from a more evolved companion. Typically this companion will subsequently become a white dwarf. However, no such companion has yet been detected. A +6.4 magnitude companion star lies about 2 arcminutes away at a position angle of 128° from the main star, and can be observed with binoculars. But it is only an optical companion, which is about 400 light years distant from Earth. In culture Gacrux is represented in the flags of Australia, New Zealand, Samoa, and Papua New Guinea as one of five stars (four in the case of New Zealand) that compose the Southern Cross. It is also featured on the flag of Brazil, along with 26 other stars, each of which represents a state. Gacrux represents the State of Bahia. The position of the line passing through Gacrux and Acrux marks the local meridian of the sky observed from Rio de Janeiro, at 8:30 am on 15 November 1889, the time when the republic was formally ratified. 
See also Aldebaran Betelgeuse References External links M-type giants Barium stars Semiregular variable stars Crux Crucis, Gamma 4763 CD-56 04504 0470 108903 061084 Gacrux
Gacrux
[ "Astronomy" ]
855
[ "Crux", "Constellations" ]
1,522,286
https://en.wikipedia.org/wiki/Schur%27s%20theorem
In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur. Ramsey theory In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with x + y = z. For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers {1, ..., S} into c parts, one of the parts contains integers x, y, and z with x + y = z. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers of the form S(c) are called Schur's numbers. Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part. Using this definition, the only known Schur numbers are S(n) = 2, 5, 14, 45, and 161. The proof that S(5) = 161 was announced in 2017 and required 2 petabytes of space. Combinatorics In combinatorics, Schur's theorem tells the number of ways for expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if {a1, ..., an} is a set of integers such that gcd(a1, ..., an) = 1, the number of different tuples of non-negative integers (c1, ..., cn) such that c1a1 + ... + cnan = x is, as x goes to infinity, asymptotically x^(n−1) / ((n−1)! a1a2...an). As a result, for every set of relatively prime numbers there exists a value of x such that every larger number is representable as a non-negative linear combination of the given numbers in at least one way. This consequence of the theorem can be recast in a familiar context considering the problem of changing an amount using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem.) Differential geometry In differential geometry, Schur's theorem compares the distance between the endpoints of a space curve to the distance between the endpoints of a corresponding plane curve of less curvature. Suppose C*(s) is a plane curve with curvature κ*(s) which makes a convex curve when closed by the chord connecting its endpoints, and C(s) is a curve of the same length with curvature κ(s). Let d* denote the distance between the endpoints of C* and d denote the distance between the endpoints of C. If κ(s) ≤ κ*(s) then d ≥ d*. Schur's theorem is usually stated for smooth curves, but John M. Sullivan has observed that Schur's theorem applies to curves of finite total curvature (the statement is slightly different). Linear algebra In linear algebra, Schur's theorem is referred to as either the triangularization of a square matrix with complex entries, or of a square matrix with real entries and real eigenvalues. Functional analysis In functional analysis and the study of Banach spaces, Schur's theorem, due to I. Schur, often refers to Schur's property, that for certain spaces, weak convergence implies convergence in the norm. Number theory In number theory, Issai Schur showed in 1912 that for every nonconstant polynomial p(x) with integer coefficients, if S is the set of all nonzero values p(n), then the set of primes that divide some member of S is infinite. See also Schur's lemma (from Riemannian geometry) References Herbert S. Wilf (1994). generatingfunctionology. Academic Press. Shiing-Shen Chern (1967). Curves and Surfaces in Euclidean Space. In Studies in Global Geometry and Analysis. Prentice-Hall. Issai Schur (1912). 
Über die Existenz unendlich vieler Primzahlen in einigen speziellen arithmetischen Progressionen, Sitzungsberichte der Berliner Math. Further reading Dany Breslauer and Devdatt P. Dubhashi (1995). Combinatorics for Computer Scientists John M. Sullivan (2006). Curves of Finite Total Curvature. arXiv. Theorems in discrete mathematics Ramsey theory Additive combinatorics Theorems in combinatorics Theorems in differential geometry Theorems in linear algebra Theorems in functional analysis Computer-assisted proofs
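For a very small number of parts, the Schur numbers listed above can be verified by brute force: S(c) is the smallest n such that every c-colouring of {1, ..., n} contains a monochromatic solution of x + y = z. The naive Python sketch below is practical only for c = 1 or 2 (c = 3 already takes a long time) and is purely an illustration, not the computer-assisted method used for the 2017 result.

from itertools import product

def has_mono_schur_triple(colouring):
    # colouring[i] is the colour of the integer i + 1
    n = len(colouring)
    for x in range(1, n + 1):
        for y in range(x, n - x + 1):
            z = x + y
            if colouring[x - 1] == colouring[y - 1] == colouring[z - 1]:
                return True
    return False

def schur_number(c):
    # Smallest n such that every c-colouring of {1, ..., n} contains x + y = z
    # with x, y, z all the same colour.
    n = 1
    while True:
        if all(has_mono_schur_triple(col) for col in product(range(c), repeat=n)):
            return n
        n += 1

print(schur_number(1), schur_number(2))   # 2 and 5, as listed above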
Schur's theorem
[ "Mathematics" ]
912
[ "Theorems in differential geometry", "Theorems in linear algebra", "Theorems in mathematical analysis", "Theorems in combinatorics", "Discrete mathematics", "Mathematical theorems", "Theorems in algebra", "Additive combinatorics", "Computer-assisted proofs", "Theorems in discrete mathematics", "...
1,522,302
https://en.wikipedia.org/wiki/Alpha%20Gruis
Alpha Gruis is the brightest star in the southern constellation of Grus. It is officially named Alnair; Alpha Gruis is the star's Bayer designation, which is Latinized from α Gruis and abbreviated α Gru. With an apparent magnitude of 1.74, it is one of the brightest stars in the sky and one of the fifty-eight stars selected for celestial navigation. Alpha Gruis is a single, B-type main-sequence star. Nomenclature α Gruis (Latinised to Alpha Gruis) is the star's Bayer designation. (Its first depiction in a celestial atlas was in Johann Bayer's Uranometria of 1603.) It bore the traditional name Alnair or Al Nair (sometimes Al Na'ir in lists of stars used by navigators), from the Arabic al-nayyir "the bright one", itself derived from its Arabic name, al-nayyir min dhanab al-ḥūt (al-janūbiyy), "the bright one from the (southern) fish's tail" (see Aldhanab). Confusingly, Alnair was also given as the proper name for Zeta Centauri in an astronomical ephemeris in the middle of the 20th century. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Alnair for this star on 21 August 2016 and it is now so entered in the IAU Catalog of Star Names. Along with Beta Gruis, Delta Gruis, Theta Gruis, Iota Gruis, and Lambda Gruis, Alpha Gruis belonged to Piscis Austrinus in traditional Arabic astronomy. In Chinese, the asterism meaning Crane consists of Alpha Gruis, Beta Gruis, Delta2 Gruis, Epsilon Gruis, Zeta Gruis, Eta Gruis, Iota Gruis, Theta Gruis, Mu1 Gruis and Delta Tucanae. Consequently, Alpha Gruis itself is known by a Chinese name within this asterism, which gave rise to another English name, Ke. Properties Alpha Gruis has a stellar classification of B6 V, although some sources give it a classification of B7 IV. The first classification indicates that this is a B-type star on the main sequence of stars that are generating energy through the thermonuclear fusion of hydrogen at the core. However, a luminosity class of 'IV' would suggest that this is a subgiant star, meaning the supply of hydrogen at its core is becoming exhausted and the star has started the process of evolving away from the main sequence. It has no known companions. The measured angular diameter of this star, after correcting for limb darkening, combined with Alnair's distance from Earth, yields its physical size in terms of the radius of the Sun. It is rotating rapidly, with a projected rotational velocity of about 215 km/s, providing a lower bound for the rate of azimuthal rotation along the equator. This star has around four times the Sun's mass and is radiating roughly 520 times the luminosity of the Sun. The effective temperature of Alnair's outer envelope is 14,245 K, giving it the blue-white hue characteristic of B-type stars. The abundance of elements other than hydrogen and helium, what astronomers term the metallicity, is about 74% of the abundance in the Sun. Based on the estimated age and motion, it is a member of the AB Doradus moving group, whose members share a common motion through space. This group has an age of about 70 million years, which is consistent with α Gruis's 100-million-year estimated age (allowing for a margin of error). The space velocity components of this star, [U, V, W], have been measured in the Galactic coordinate system. Notes References External links B-type main-sequence stars Grus (constellation) Gruis, Alpha 8425 CD-47 14063 0848.2 209952 109268 Alnair
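The step from a measured angular diameter and a distance to a physical radius, mentioned above, is a small-angle calculation. The sketch below uses placeholder inputs of the right order of magnitude (an angular diameter of about 1 milliarcsecond and a distance of about 100 light-years); these are illustrative assumptions, not the article's measured values.

import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds to radians
LY_TO_KM = 9.461e12                          # kilometres per light-year
R_SUN_KM = 6.957e5                           # solar radius in kilometres

angular_diameter_mas = 1.0   # placeholder value
distance_ly = 100.0          # placeholder value

# Small-angle approximation: physical diameter = angular diameter (in radians) * distance
diameter_km = angular_diameter_mas * MAS_TO_RAD * distance_ly * LY_TO_KM
print(diameter_km / 2 / R_SUN_KM)   # radius in solar radii, a few for these inputs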
Alpha Gruis
[ "Astronomy" ]
859
[ "Grus (constellation)", "Constellations" ]
1,522,333
https://en.wikipedia.org/wiki/Gamma%20Velorum
Gamma Velorum is a quadruple star system in the constellation Vela. This name is the Bayer designation for the star, which is Latinised from γ Velorum and abbreviated γ Vel. At a combined magnitude of +1.72, it is one of the brightest stars in the night sky, and contains by far the closest and brightest Wolf–Rayet star. It has the traditional name Suhail al Muhlif and the modern name Regor , but neither is approved by the International Astronomical Union, making it the brightest star by apparent magnitude without an IAU approved name. The γ Velorum system includes a pair of stars separated by 41″, each of which is also a spectroscopic binary system. γ2 Velorum, the brighter of the visible pair, contains the Wolf–Rayet star and a blue supergiant, while γ1 Velorum contains a blue giant and an unseen companion. Distance Gamma Velorum is close enough to have accurate parallax measurements as well as distance estimates by more indirect means. The Hipparcos parallax for γ2 implies a distance of 342 parsecs (pc). A dynamical parallax derived from calculations of the orbital parameters gives a value of 336 pc, similar to spectrophotometric derivations. A VLTI-based interferometry measurement of the distance gives a slightly larger value of 368 ± 51 pc. All these distances are somewhat less than the commonly assumed distance of 450 pc for the Vela OB2 association which is the closest grouping of young massive stars. Stellar system The Gamma Velorum system is composed of at least four stars. The brightest member, γ2 Velorum or γ Velorum A, is a spectroscopic binary composed of a blue supergiant of spectral class O7.5 (), and a massive Wolf–Rayet star (, originally ). The binary has an orbital period of 78.5 days and separation varying from 0.8 to 1.6 astronomical units. Both the Wolf–Rayet star and the blue supergiant are likely to end their lives as Type Ib supernovae; they are among the nearest supernova candidates to the Sun. The Wolf–Rayet star has traditionally been regarded as the primary since its emission lines dominate the spectrum, but the O star is visually brighter and also more luminous. For clarity, the components are now often referred to as WR and O. The bright (apparent magnitude +4.2) γ1 Velorum or γ Velorum B, is a spectroscopic binary with a period of 1.48 days. Only the primary is detected and it is a blue-white giant. It is separated from the Wolf–Rayet binary by 41.2″, easily resolved with binoculars. The pair are too close to be separated without optical assistance, and they appear to the naked eye as a single star of apparent magnitude 1.72 (at the average brightness of γ2 of 1.83). Gamma Velorum has several fainter companions that share a common motion and are likely to be members of the Vela OB2 association. The magnitude +7.3 CD-46 3848 is a white F0 star at is 62.3 arcseconds from the A component. At 93.5 arcseconds is another binary star, an F0 star of magnitude +9.2. Gamma Velorum is associated with several hundred pre-main-sequence stars within less than a degree. The ages of these stars would be at least 5 million years. Etymology The Arabic name is al Suhail al Muḥlīf. al Muhlif refers to the oath-taker, and al Suhail is originally derived from a word meaning the plain. Suhail is used for at least three other stars: Canopus, λ Velorum (al Suhail al Wazn) and ζ Puppis (Suhail Hadar). Suhail is also a common Arabic male first name. In Chinese, (), meaning Celestial Earth God's Temple, refers to an asterism consisting of γ2 Velorum, δ Velorum, κ Velorum and b Velorum. 
Consequently, γ2 Velorum itself is known as "the First Star of Celestial Earth God's Temple". The name Regor ("Roger" spelled in reverse) was invented as a practical joke by the Apollo 1 astronaut Gus Grissom for his fellow astronaut Roger Chaffee. Due to the exotic nature of its spectrum (bright emission lines in lieu of dark absorption lines), it is also dubbed the Spectral Gem of Southern Skies. See also Gamma Cassiopeiae, informally named Navi for astronaut Virgil Ivan "Gus" Grissom Iota Ursae Majoris, informally named Dnoces for astronaut Ed White References O-type giants B-type giants Wolf–Rayet stars Spectroscopic binaries 6 Gum Nebula Vela (constellation) Velorum, Gamma 3207 Durchmusterung objects 068273 039953 Regor TIC objects Southern pole stars
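The parsec distances discussed above come from inverting the measured parallax: a star with a parallax of p milliarcseconds lies at 1000/p parsecs, and the quoted uncertainty maps to an asymmetric distance range. A minimal sketch with a made-up parallax value, purely for illustration:

def parallax_to_distance_pc(parallax_mas):
    # Distance in parsecs from a parallax expressed in milliarcseconds.
    return 1000.0 / parallax_mas

p, dp = 2.9, 0.3   # made-up parallax and uncertainty, in milliarcseconds
print(parallax_to_distance_pc(p))                                        # ~345 pc central value
print(parallax_to_distance_pc(p + dp), parallax_to_distance_pc(p - dp))  # nearer and farther bounds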
Gamma Velorum
[ "Astronomy" ]
1,065
[ "Vela (constellation)", "Constellations" ]
1,522,349
https://en.wikipedia.org/wiki/Delta%20Canis%20Majoris
Delta Canis Majoris (Latinised from δ Canis Majoris, abbreviated Delta CMa, δ CMa), officially named Wezen , is a star in the constellation of Canis Major. It is a yellow-white F-type supergiant with an apparent magnitude of +1.83. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Observation Delta Canis Majoris is the third-brightest star in the constellation after Sirius and ε Canis Majoris (Adhara), with an apparent magnitude of +1.83, and is white or yellow-white in colour. Lying about 10 degrees south southeast of Sirius, it only rises to about 11 degrees above the horizon at the latitude of the United Kingdom. The open cluster NGC 2354 is located only 1.3 degrees east of Delta Canis Majoris. As with the rest of Canis Major, Delta Canis Majoris is most visible in winter skies in the northern hemisphere, and summer skies in the southern. In Bayer's Uranometria, it is in the Great Dog's hind quarter. History and naming δ Canis Majoris (Latinised to Delta Canis Majoris) is the star's Bayer designation. The traditional name, Wezen (alternatively Wesen, or Wezea), is derived from the medieval Arabic وزن al-wazn, which means 'weight' in modern Arabic. The name was for one of a pair of stars, the other being Hadar, which has now come to refer to Beta Centauri. It is unclear whether the pair of stars was originally Alpha and Beta Centauri or Alpha and Beta Columbae. In any case, the name was somehow applied to both Delta Canis Majoris and Beta Columbae. Richard Hinckley Allen muses that the name alludes to the difficulty the star has rising above the horizon in the northern hemisphere. Astronomer Jim Kaler has noted the aptness of the traditional name given the star's massive nature. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Wezen for this star. In Chinese, (), meaning Bow and Arrow, refers to an asterism consisting of δ Canis Majoris, ε Canis Majoris, η Canis Majoris, κ Canis Majoris, ο Puppis, π Puppis, χ Puppis, c Puppis and k Puppis. Consequently, δ Canis Majoris itself is known as (, .) In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was designated Thalath al Adzari (تالت ألعذاري - taalit al-aðārii), which was translated into Latin as Tertia Virginum, meaning the third virgin. This star, along with ε Canis Majoris (Adhara), η Canis Majoris (Aludra) and ο2 Canis Majoris (Thanih al Adzari), were Al ʽAdhārā (ألعذاري), the Virgins. Physical properties Delta Canis Majoris is a supergiant of class F8. Its surface temperature is around 5,818 K, and it is 14 to 15 times more massive than the Sun. Its absolute magnitude is −6.77, and it lies around 1,600 light-years away. It is rotating at a speed of around 28 km/s, and hence may take a year to rotate fully. Only around 10 million years old, Delta Canis Majoris has stopped fusing hydrogen in its core. Its outer envelope is beginning to expand and cool, and in the next 100,000 years it will become a red supergiant as its core fuses heavier and heavier elements. Once it has an iron core, it will collapse and explode as a supernova. The angular diameter of Wezen has been measured using interferometry, giving a limb-darkened diameter of . 
At the distance measured by the Hipparcos spacecraft of , it corresponds to a physical radius of times the radius of the Sun. However, the 2007 Hipparcos reduction refined the distance to about (493 parsecs), corresponding to a smaller size of using the angular diameter. Following the new distance, a 2017 study published a radius of based on the stellar temperature and luminosity. If Delta Canis Majoris were as close to Earth as Sirius is, it would be as bright as a half-full moon. Modern legacy Delta Canis Majoris appears on the flag of Brazil, symbolising the state of Roraima. References F-type supergiants Canis Major Canis Majoris, Delta Durchmusterung objects Canis Majoris, 25 054605 034444 2693 TIC objects Wezen
Delta Canis Majoris
[ "Astronomy" ]
1,054
[ "Canis Major", "Constellations" ]
1,522,355
https://en.wikipedia.org/wiki/Alkaid
Alkaid , also called Eta Ursae Majoris (Latinised from η Ursae Majoris, abbreviated Eta UMa, η UMa), is a star in the constellation of Ursa Major. It is the easternmost star in the Big Dipper (or Plough) asterism. However, unlike most stars of the Big Dipper, it is not a member of the Ursa Major moving group. With an apparent visual magnitude of +1.86, it is the third-brightest star in the constellation and one of the brightest stars in the night sky. Physical properties Alkaid is a 10-million-year-old B-type main sequence star with a stellar classification of B3 V. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. It has six times the mass; 3.4 times the radius, and is radiating around 594 times as much energy as the Sun. Its outer atmosphere has an effective temperature of about 15,540 K, giving it the blue-white hue of a B-type star. This star is an X-ray emitter with a luminosity of . Eta Ursae Majoris was listed as a standard star for the spectral type B3 V. It has broadened absorption lines due to its rapid rotation, which is common in stars of this type. However, the lines are very slightly distorted and variable, which may be caused by some emission from a weak disk of material produced by the rapid rotation. Alkaid is a relatively nearby and bright star and has been examined closely, but no exoplanets or companion stars have been discovered. Nomenclature η Ursae Majoris (Latinised to Eta Ursae Majoris) is the star's Bayer designation. The International Astronomical Union has formally chosen the proper name Alkaid for this star. It bore the traditional names Alkaid (or Elkeid from the Arabic القايد القائد) and Benetnasch . Alkaid derives from the Arabic phrase meaning "The leader of the daughters of the bier" ( ). The daughters of the bier, i.e. the mourning maidens, are the three stars of the handle of the Big Dipper, Alkaid, Mizar, and Alioth; while the four stars of the bowl, Megrez, Phecda, Merak, and Dubhe, are the bier. It is known as Běidǒuqī (北斗七 - the Seventh Star of the Northern Dipper) or Yáoguāng (瑤光 - the Star of Twinkling Brilliance) in Chinese. The Hindus knew this star as Marīci, one of the Seven Rishis. In Japan and Korea, Alkaid is known as Hagunsei and Mukokseong respectively ("the military breaking star" or "most corner star"). Both meanings come from ancient China's influence in both countries. In culture USS Alkaid (AK-114) was a United States Navy Crater class cargo ship named after the star. Alkaid is one of the Behenian fixed stars, used in Alchemy. The fossil starfish Alkaidia is named after Alkaid. References External links Alkaid at Jim Kaler's Stars website Ursae Majoris, Eta Big Dipper B-type main-sequence stars Ursa Major Alkaid Ursae Majoris, 85 5191 120315 067301 BD+50 2027
Alkaid
[ "Astronomy" ]
739
[ "Ursa Major", "Constellations" ]
1,522,361
https://en.wikipedia.org/wiki/Theta%20Scorpii
Theta Scorpii (θ Scorpii, abbreviated Theta Sco, θ Sco) is a binary star in the southern zodiac constellation of Scorpius. The apparent visual magnitude of this star is +1.87, making it readily visible to the naked eye and one of the brightest stars in the night sky. It is sufficiently near that the distance can be measured directly using the parallax technique and such measurements obtained during the Hipparcos mission yield an estimate of approximately from the Sun. The two components are designated θ Scorpii A (officially named Sargas , the traditional name for the system) and B. Nomenclature θ Scorpii (Latinised to Theta Scorpii) is the system's Bayer designation. The designations of the two components as Theta Scorpii A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). It bore the traditional name Sargas, of Sumerian origin. Another possible origin is Persian for Arrow Head سر گز. The name 'Sar Gaz' is used in Iran as a star name, and was used for timing irrigation water shares. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Sargas for the star θ Scorpii A on 21 August 2016 and it is now so included in the List of IAU-approved Star Names. In Chinese, (), meaning Tail, refers to an asterism consisting of Theta Scorpii, Epsilon Scorpii, Zeta1 Scorpii and Zeta2 Scorpii, Eta Scorpii, Iota1 Scorpii and Iota2 Scorpii, Kappa Scorpii, Lambda Scorpii, Mu1 Scorpii and Upsilon Scorpii. Consequently, the Chinese name for Theta Scorpii itself is (), "the Fifth Star of Tail". Properties The primary (θ Scorpii A) is an evolved bright giant star with a stellar classification of F0 II. With a mass 3.10 times that of the Sun, it is radiating 1,400 times as much luminosity as the Sun from its outer envelope at an effective temperature of 6,294 K, giving it the yellow-white-hued glow of an F-type star. This star is rotating rapidly, giving it an oblate shape with an equatorial radius 33% larger than the polar radius. The equatorial radius is about while the polar radius is only about . This rapid rotation suggests that it formed via the merger of a binary star system. A magnitude 5.36 companion has been reported at an angular separation of 6.470 arcseconds, but subsequent observers have failed to detect it, so it probably does not exist. However, a secondary, designated θ Scorpii B, has been detected at an angular separation of 0.538 arcseconds in 1991 by the Hipparcos satellite. Modern legacy Theta Scorpii appears on the flag of Brazil, symbolising the state of Alagoas. References External links Theta Sco Scorpii, Theta Binary stars Scorpius F-type bright giants Sargas 159532 086228 6553 Scorpii, 160 Durchmusterung objects
Theta Scorpii
[ "Astronomy" ]
735
[ "Scorpius", "Constellations" ]
1,522,372
https://en.wikipedia.org/wiki/Beta%20Aurigae
Beta Aurigae (Latinized from β Aurigae, abbreviated Beta Aur, β Aur), officially named Menkalinan , is a binary star system in the northern constellation of Auriga. The combined apparent visual magnitude of the system is 1.9, making it the second-brightest member of the constellation after Capella. Using the parallax measurements made during the Hipparcos mission, the distance to this star system can be estimated as , give or take a half-light-year margin of error. Along their respective orbits around the Milky Way, Beta Aurigae and the Sun are closing in on each other, so that in around one million years it will become the brightest star in the night sky. Nomenclature β Aurigae is the star system's Bayer designation. The traditional name Menkalinan is derived from the Arabic منكب ذي العنان mankib ðī-l-‘inān "shoulder of the rein-holder". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Menkalinan for this star. It is known as 五車三 (the Third Star of the Five Chariots) in traditional Chinese astronomy. Properties Beta Aurigae is a binary star system, but it appears as a single star in the night sky. The two stars are metallic-lined subgiant stars belonging to the A-type stellar classification; they have roughly the same mass and radius. A-type entities are hot stars that release a blue-white hued light; these two stars burn brighter and with more heat than the Sun, which is a G2-type main sequence star. The pair constitute an eclipsing spectroscopic binary; the combined apparent magnitude varies over a period of 3.96 days between +1.89 and +1.94, as every 47.5 hours one of the stars partially eclipses the other from Earth's perspective. The two stars are designated Aa and Ab in modern catalogues, but have also been referred to as components 1 and 2 or A and B. There is an 11th magnitude optical companion with a separation of as of 2011, but increasing. It is also an A-class subgiant, but is an unrelated background star. At an angular separation of along a position angle of 155° is a companion star that is 8.5 magnitudes fainter than the primary. It may be the source of the X-ray emission from the vicinity. The Beta Aurigae system is believed to be a stream member of the Ursa Major Moving Group. See also Algol Capella References External links Menkalinan Image Beta Aurigae CCDM J05596+4457 Catalog Smithsonian/NASA Astrophysics Data System Menkalinan - spectroscopic binary star Menkalinan - Beta Aurigae, the spectroscopic eclipsing binary star M-type main-sequence stars A-type subgiants Am stars Algol variables Triple star systems Ursa Major moving group Menkalinan Auriga Aurigae, Beta BD+44 1328 Aurigae, 34 040183 028360 2088
Beta Aurigae
[ "Astronomy" ]
706
[ "Auriga", "Constellations" ]
1,522,373
https://en.wikipedia.org/wiki/Carroll%27s%20paradox
In physics, Carroll's paradox arises when considering the motion of a falling rigid rod that is specially constrained. Considered one way, the angular momentum stays constant; considered in a different way, it changes. It is named after Michael M. Carroll who first published it in 1984. Explanation Consider two concentric circles of radius and as might be drawn on the face of a wall clock. Suppose a uniform rigid heavy rod of length is somehow constrained between these two circles so that one end of the rod remains on the inner circle and the other remains on the outer circle. Motion of the rod along these circles, acting as guides, is frictionless. The rod is held in the three o'clock position so that it is horizontal, then released. Now consider the angular momentum about the centre of the rod: After release, the rod falls. Being constrained, it must rotate as it moves. When it gets to a vertical six o'clock position, it has lost potential energy and, because the motion is frictionless, will have gained kinetic energy. It therefore possesses angular momentum. The reaction force on the rod from either circular guide is frictionless, so it must be directed along the rod; there can be no component of the reaction force perpendicular to the rod. Taking moments about the center of the rod, there can be no moment acting on the rod, so its angular momentum remains constant. Because the rod starts with zero angular momentum, it must continue to have zero angular momentum for all time. An apparent resolution of this paradox is that the physical situation cannot occur. To maintain the rod in a radial position the circles have to exert an infinite force. In real life it would not be possible to construct guides that do not exert a significant reaction force perpendicular to the rod. Victor Namias, however, disputed that infinite forces occur, and argued that a finitely thick rod experiences torque about its center of mass even in the limit as it approaches zero width. References Mechanics Physical paradoxes
Carroll's paradox
[ "Physics", "Engineering" ]
401
[ "Mechanics", "Mechanical engineering" ]
1,522,377
https://en.wikipedia.org/wiki/Gamma%20Geminorum
Gamma Geminorum (γ Geminorum, abbreviated Gamma Gem, γ Gem), formally named Alhena , is the third-brightest object in the constellation of Gemini. It has an apparent visual magnitude of 1.9, making it easily visible to the naked eye even in urban regions. Based upon parallax measurements with the Hipparcos satellite, it is located at a distance of roughly from the Sun. Properties Alhena is an evolving star that is exhausting the supply of hydrogen at its core and has entered the subgiant stage. The spectrum matches a stellar classification of A0 IV. Compared to the Sun it has 2.8 times the mass and 4.9 times the radius. It is radiating around 123 times the luminosity of the Sun from its outer envelope at an effective temperature of 9,260 K. This gives it a white hue typical of an A-class star. Alhena is a spectroscopic binary system with a period of 12.6 years (4,614.51 days) in a highly eccentric Keplerian orbit. The secondary, with 1.07 times the mass of the Sun, is likely a G-type main-sequence star. Etymology γ Geminorum (Latinised to Gamma Geminorum) is the star's Bayer designation. The traditional name Alhena is derived from the Arabic الهنعة Al Han'ah, 'the brand' (on the neck of the camel), whilst the alternate name Almeisan is from the Arabic المیسان Al Maisan, 'the shining one.' Al Hanʽah was the name of star association consisting of this star, along with Mu Geminorum (Tejat Posterior), Nu Geminorum, Eta Geminorum (Tejat Prior) and Xi Geminorum (Alzirr). They also were associated in Al Nuḥātai, the dual form of Al Nuḥāt, 'a Camel's Hump'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Alhena for this star. In the catalogue of stars in the Calendarium of Al Achsasi Al Mouakket, this star was designated Nir al Henat, which was translated into Latin as Prima του al Henat, meaning 'the brightest of Al Henat'. In Chinese, (), meaning Well (asterism), refers to an asterism consisting of γ Geminorum, ε Geminorum, ζ Geminorum, λ Geminorum, μ Geminorum, ν Geminorum, ξ Geminorum and 36 Geminorum. Consequently, γ Geminorum itself is known as (, .) Conjunctions Gamma Geminorum is 6° south of the ecliptic, far enough so that the Moon never occults it. Similarly, planets in conjunction with this star almost always pass several degrees to the north, but Venus will have a series of close conjunctions with Gamma Geminorum starting in August 2143, and continuing every eight years after over the remainder of that century. The Sun passes Gamma Geminorum on or around June 30 every year. In culture Alhena was the name of a Dutch ship that rescued many people from an Italian cruise liner, the SS Principessa Mafalda, in October 1927. In addition, the American attack cargo ship was named after the star. References Geminorum, Gamma Geminorum, 24 Gemini (constellation) A-type subgiants Spectroscopic binaries Alhena 2421 047105 031681 Durchmusterung objects
Gamma Geminorum
[ "Astronomy" ]
794
[ "Gemini (constellation)", "Constellations" ]
1,522,379
https://en.wikipedia.org/wiki/Alpha%20Pavonis
Alpha Pavonis (α Pavonis, abbreviated Alpha Pav, α Pav), formally named Peacock , is a binary star in the southern constellation of Pavo, near the border with the constellation Telescopium. Nomenclature α Pavonis (Latinised to Alpha Pavonis) is the star's Bayer designation. The historical name Peacock was assigned by His Majesty's Nautical Almanac Office in the late 1930s during the creation of the Air Almanac, a navigational almanac for the Royal Air Force. Of the fifty-seven stars included in the new almanac, two had no classical names: Alpha Pavonis and Epsilon Carinae. The RAF insisted that all of the stars must have names, so new names were invented. Alpha Pavonis was named "Peacock" ('pavo' is Latin for 'peacock') whilst Epsilon Carinae was called "Avior". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Peacock for this star and Avior for Epsilon Carinae. In Chinese caused by adaptation of the European southern hemisphere constellations into the Chinese system, (), meaning Peacock, refers to an asterism consisting of α Pavonis, η Pavonis, π Pavonis, ν Pavonis, λ Pavonis, κ Pavonis, δ Pavonis, β Pavonis, ζ Pavonis, ε Pavonis and γ Pavonis. Consequently, α Pavonis itself is known as (, .) Properties At an apparent magnitude of 1.94, this is the brightest star in Pavo. Based upon parallax measurements, this star is about distant from the Earth. It has an estimated six times the Sun's mass and 6 times the Sun's radius, but 2,200 times the luminosity of the Sun. The effective temperature of the photosphere is 17,700 K, which gives the star a blue-white hue. It has a stellar classification of B3 V, although older studies have often given it a subgiant luminosity class. It is classified as B2.5 IV in the Bright Star Catalogue. Stars with the mass of Alpha Pavonis are believed not to have a convection zone near their surface. Hence the material found in the outer atmosphere is not processed by the nuclear fusion occurring at the core. This means that the surface abundance of elements should be representative of the material out of which it originally formed. In particular, the surface abundance of deuterium should not change during the star's main sequence lifetime. The measured ratio of deuterium to hydrogen in this star amounts to less than , which suggests this star may have formed in a region with an unusually low abundance of deuterium, or else the deuterium was consumed by some means. A possible scenario for the latter is that the deuterium was burned through while Alpha Pavonis was a pre-main-sequence star. The system is likely to be a member of the Tucana-Horologium association that share a common motion through space. The estimated age of this association is 45 million years. α Pavonis star has a peculiar velocity of relative to its neighbors. Companions Three stars have been listed as visual companions to α Pavonis: two ninth magnitude stars at about four arc minutes; and a 12th magnitude F5 main sequence star at about one arc minute. The two ninth magnitude companions are only 17 arc seconds from each other. α Pavonis A is a spectroscopic binary consisting of a pair of stars that orbit around each other with a period of 11.753 days. 
However, in part because the two stars have not been individually resolved, little is known about the companion except that it has a mass of at least . One attempt to model a composite spectrum estimated components with spectral types of B0.5 and B2, and a brightness difference between the two components of 1.3 magnitudes. References External links Peacock - Jim Kaler's Stars Pavonis, Alpha B-type subgiants 193924 100751 7790 Pavo (constellation) Spectroscopic binaries Peacock Durchmusterung objects
Alpha Pavonis
[ "Astronomy" ]
906
[ "Constellations", "Pavo (constellation)" ]
6,979,594
https://en.wikipedia.org/wiki/Scalable%20TCP
Scalable TCP is a variant of the Transmission Control Protocol (TCP) designed to provide much higher throughput and scalability. Standard TCP recommendations as per RFC 2581 and RFC 5681 call for the congestion window to be halved for each packet lost. Effectively, this process keeps halving the throughput until packet loss stops. Once the packet loss subsides, slow start kicks in to ramp the speed back up. When the window sizes are small, say 1 Mbit/s at a 200 ms round trip time with a window of about 20 packets, this recovery time is quite fast, on the order of a few seconds. But as transfer speeds approach 1 Gbit/s, the recovery time becomes half an hour, and for 10 Gbit/s it is over 4 hours. Procedure Scalable TCP modifies the congestion control algorithm. Instead of the window being halved, each packet loss decreases the congestion window by a small fraction (a factor of 1/8 instead of Standard TCP's 1/2) until packet loss stops. When packet loss stops, the rate is ramped up at a slow fixed rate (one packet is added for every one hundred successful acknowledgements) instead of Standard TCP's rate, which is the inverse of the congestion window size (so that very large windows take a long time to recover). This helps reduce the recovery time on 10 Gbit/s links from 4+ hours (using Standard TCP) to less than 15 seconds when the round trip time is 200 milliseconds. See also UDP-based Data Transfer Protocol References External links Scalable TCP Details CERN Paper About Scalable TCP Internet protocols TCP congestion control Transport layer protocols
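To make the difference concrete, the Python sketch below models only the window-update arithmetic described above: Standard TCP halves the window on a loss and then adds roughly one segment per round trip, while Scalable TCP cuts the window by 1/8 and then adds 0.01 segments per acknowledgement (about 1% of the window per round trip). It is a toy model, not a network simulation, and the example window size is an illustrative figure for a 10 Gbit/s path with a 200 ms round-trip time and 1500-byte packets.

def recovery_rtts_standard(target_window):
    # Standard TCP: window halved on loss, then grows by ~1 segment per round trip.
    cwnd = target_window / 2.0
    rtts = 0
    while cwnd < target_window:
        cwnd += 1.0
        rtts += 1
    return rtts

def recovery_rtts_scalable(target_window):
    # Scalable TCP: window cut by 1/8 on loss, then grows by 0.01 segments per ACK,
    # i.e. roughly 1% of the current window per round trip.
    cwnd = target_window * (1 - 0.125)
    rtts = 0
    while cwnd < target_window:
        cwnd += 0.01 * cwnd
        rtts += 1
    return rtts

w = 170000   # roughly 10 Gbit/s * 200 ms / 1500-byte packets (illustrative)
rtt_s = 0.2
print(recovery_rtts_standard(w) * rtt_s / 3600)   # recovery time in hours, ~4.7
print(recovery_rtts_scalable(w) * rtt_s)          # recovery time in seconds, ~3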
Scalable TCP
[ "Technology" ]
343
[ "Computing stubs", "Computer science", "Computer science stubs" ]
6,979,989
https://en.wikipedia.org/wiki/World%20Design%20Organization
The World Design Organization (WDO) was founded in 1957 from a group of international organizations focused on industrial design. Formerly known as the International Council of Societies of Industrial Design, the WDO is a worldwide society that promotes better design around the world. Today, the WDO includes over 170 member organizations in more than 40 nations, representing an estimated 150,000 designers. The primary aim of the association is to advance the discipline of industrial design at an international level. To do this, WDO undertakes a number of initiatives of global appeal to support the effectiveness of industrial design in an attempt to address the needs and aspirations of people around the world, to improve the quality of life, as well as help to improve the economy of nations throughout the world. History Jacques Viénot first presented the idea to form a society to represent the industrial designers internationally at the Institut d’Esthetique Industrielle's international congress in 1953. The International Council of Societies of Industrial Designers was formally founded at a meeting in London on June 29, 1957. The name of Icsid demonstrates the spirit which is to protect the interests of practicing designers and to ensure global standards of design. The individuals first elected officials to the Executive Board therefore did not act upon personal conviction, but represented the voice of society members and the international design community. The organization then officially registered in Paris and set up their headquarters there. Icsid's early goals were to help public awareness of industrial designers, to raise the standard of design by setting standards for training and education, and to encourage cooperation between industrial designers worldwide. To do this, in 1959 Icsid held the first Congress and General Assembly in Stockholm, Sweden. At this first Congress the Icsid Constitution was officially adopted, along with the first definition of industrial design which may be found on their website (please see external references). During this Congress, Icsid's official name was changed from the International Council of Societies of Industrial Designers to the International Council of Societies of Industrial Design to reflect that the organization would involve itself beyond matters of professional practice. Throughout Icsid had continued to grow and now has members from all over the world in both capitalist and non-capitalist countries. Icsid has now hosted the Congress in places such as Venice, Paris, Vienna, Montreal, Slovenia, Glasgow, Taipei, Toronto, Sydney, Kyoto and London. In 1963, Icsid was granted special status with UNESCO, with whom Icsid continues to work on many projects, using design for the betterment of the human condition. As their humanitarian interests grew, Icsid decided to create a new type of conference that would join industrial designers in a host country to study a problem of both regional and international significance. This new conference held in Minsk in 1971, became the first Icsid Interdesign seminar. These seminars provided opportunities for professional development of mid-career practicing designers, and to allow them to focus their abilities on resolving issues of international significance. This first Interdesign conference and the ones that followed, consolidated Icsid's position as a driving force of international collaboration. 
In 1974, the Icsid Secretariat moved from Paris, France, to Brussels, Belgium, later moving on to Helsinki, Finland; in 2005, it settled in Montreal, Quebec, Canada, where it currently resides. In the 1980s, collaboration became even more important, so a joint Icsid/Icograda/IFI Congress was held in Helsinki. The impetus for this joint conference was a direct recommendation made by Icsid members to explore closer ties with other world design organizations. At their General Assemblies, all participants unanimously approved a directive to investigate options for a closer working relationship in the future. These organizations then joined with UNESCO to bring together doctors, industrial and graphic designers, and assistants to develop basic furniture for rural health centers; packaging, transport, refrigeration and injection equipment for vaccines; and data-collecting devices for field use. In 2003, Icsid and Icograda ratified an agreement between both organizations during their respective General Assemblies to form the International Design Alliance (IDA), a multidisciplinary partnership that supports design. In 2008, the IDA partners welcomed a third member, IFI (International Federation of Interior Architects/Designers). In 2011, all three partners held a historic joint Congress in Taipei, Taiwan, called the IDA Congress. The alliance was terminated in November 2013. In January 2017, Icsid officially became the World Design Organization (WDO). References Publications External links Icsid website Icsid Archive at the University of Brighton Design Archives World Design Capital website World Design Impact Prize Industrial design Design institutions International trade associations Organizations established in 1957 International organizations based in Canada
World Design Organization
[ "Engineering" ]
955
[ "Industrial design", "Design engineering", "Design", "Design institutions" ]
6,980,269
https://en.wikipedia.org/wiki/Wave-making%20resistance
Wave-making resistance is a form of drag that affects surface watercraft, such as boats and ships, and reflects the energy required to push the water out of the way of the hull. This energy goes into creating the wave. Physics For small displacement hulls, such as sailboats or rowboats, wave-making resistance is the major source of drag. A salient property of water waves is dispersiveness; i.e., the greater the wavelength, the faster the wave moves. Waves generated by a ship are affected by her geometry and speed, and most of the energy given by the ship for making waves is transferred to the water through the bow and stern parts. Simply speaking, these two wave systems, i.e., bow and stern waves, interact with each other, and the resulting waves are responsible for the resistance. If the resulting wave is large, it carries much energy away from the ship, delivering it to the shore or wherever else the wave ends up or just dissipating it in the water, and that energy must be supplied by the ship's propulsion (or momentum), so that the ship experiences it as drag. Conversely, if the resulting wave is small, the drag experienced is small. The amount and direction (additive or subtractive) of the interference depends upon the phase difference between the bow and stern waves (which have the same wavelength and phase speed), and that is a function of the length of the ship at the waterline. For a given ship speed, the phase difference between the bow wave and stern wave is proportional to the length of the ship at the waterline. For example, if the ship takes three seconds to travel its own length, then at any point the ship passes, the stern wave is initiated three seconds after the bow wave, which implies a specific phase difference between those two waves. Thus, the waterline length of the ship directly affects the magnitude of the wave-making resistance. For a given waterline length, the phase difference depends upon the phase speed and wavelength of the waves, and those depend directly upon the speed of the ship. For a deepwater wave, the phase speed is the same as the propagation speed and is proportional to the square root of the wavelength. That wavelength is dependent upon the speed of the ship. Thus, the magnitude of the wave-making resistance is a function of the speed of the ship in relation to its length at the waterline. A simple way of considering wave-making resistance is to look at the hull in relation to bow and stern waves. If the length of a ship is half the length of the waves generated, the resulting wave will be very small due to cancellation, and if the length is the same as the wavelength, the wave will be large due to enhancement. The phase speed c of deep-water waves is given by the following formula: c = √(gλ / 2π), where λ is the length of the wave and g the gravitational acceleration. Substituting in the appropriate value for g yields the equation c ≈ 1.34 √λ (with the speed in knots and the wavelength in feet) or, in metric units, c ≈ 2.5 √λ (with the speed in knots and the wavelength in metres). These values (1.34, 2.5 and, as an easy round figure, 6) are often used in the hull speed rule of thumb used to compare potential speeds of displacement hulls, and this relationship is also fundamental to the Froude number, used in the comparison of different scales of watercraft. When the vessel exceeds a "speed–length ratio" (speed in knots divided by the square root of the length in feet) of 0.94, it starts to outrun most of its bow wave, and the hull actually settles slightly in the water as it is now only supported by two wave peaks. 
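As a rough numerical illustration of the relationships above, the short Python sketch below evaluates the deep-water phase speed c = √(gλ/2π), the corresponding "hull speed" for a given waterline length, and the speed–length ratio for a candidate boat speed. It is a minimal sketch for illustration only; the function names and the example 10 m waterline length are hypothetical and not taken from the article.

```python
import math

G = 9.80665           # gravitational acceleration, m/s^2
MS_TO_KNOTS = 1.9438  # metres per second to knots

def phase_speed(wavelength_m: float) -> float:
    """Deep-water wave phase speed c = sqrt(g * wavelength / (2 * pi)), in m/s."""
    return math.sqrt(G * wavelength_m / (2 * math.pi))

def hull_speed_knots(lwl_m: float) -> float:
    """Speed (knots) at which the generated wave is as long as the waterline,
    i.e. the classic 'hull speed' of a displacement hull."""
    return phase_speed(lwl_m) * MS_TO_KNOTS

def speed_length_ratio(speed_knots: float, lwl_m: float) -> float:
    """Speed-length ratio: speed in knots divided by sqrt(waterline length in feet)."""
    lwl_ft = lwl_m / 0.3048
    return speed_knots / math.sqrt(lwl_ft)

if __name__ == "__main__":
    lwl = 10.0  # hypothetical 10 m waterline length
    print(f"Hull speed: {hull_speed_knots(lwl):.2f} knots")
    print(f"Speed-length ratio at 7 knots: {speed_length_ratio(7.0, lwl):.2f}")
```

For the 10 m example this gives a hull speed of roughly 7.7 knots, consistent with the 1.34 × √(LWL in feet) ≈ 1.34 × 5.7 rule of thumb mentioned above.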
As the vessel exceeds a speed-length ratio of 1.34, the wavelength is now longer than the hull, and the stern is no longer supported by the wake, causing the stern to squat and the bow to rise. The hull is now starting to climb its own bow wave, and resistance begins to increase at a very high rate. While it is possible to drive a displacement hull faster than a speed-length ratio of 1.34, it is prohibitively expensive to do so. Most large vessels operate at speed-length ratios well below that level, typically under 1.0. Ways of reducing wave-making resistance Since wave-making resistance is based on the energy required to push the water out of the way of the hull, there are a number of ways that this can be minimized. Reduced displacement Reducing the displacement of the craft, by eliminating excess weight, is the most straightforward way to reduce the wave-making drag. Another way is to shape the hull so as to generate lift as it moves through the water. Semi-displacement hulls and planing hulls do this, and they are able to break through the hull speed barrier and transition into a realm where drag increases at a much lower rate. The disadvantage of this is that planing is only practical on smaller vessels, with high power-to-weight ratios, such as motorboats. It is not a practical solution for a large vessel such as a supertanker. Fine entry A hull with a blunt bow has to push the water away very quickly to pass through, and this high acceleration requires large amounts of energy. A fine bow, with a sharper angle that pushes the water out of the way more gradually, reduces the amount of energy required to displace the water. A modern variation is the wave-piercing design. The total amount of water that a moving hull must displace, and that thus causes wave-making drag, is the cross-sectional area of the hull multiplied by the distance the hull travels; it does not remain the same when the prismatic coefficient is increased for the same waterline length, displacement and speed. Bulbous bow A special type of bow, called a bulbous bow, is often used on large power vessels to reduce wave-making drag. The bulb alters the waves generated by the hull, by changing the pressure distribution ahead of the bow. Because of the nature of its destructive interference with the bow wave, there is a limited range of vessel speeds over which it is effective. A bulbous bow must be properly designed to mitigate the wave-making resistance of a particular hull over a particular range of speeds. A bulb that works for one vessel's hull shape and one range of speeds could be detrimental to a different hull shape or a different speed range. Proper design and knowledge of a ship's intended operating speeds and conditions is therefore necessary when designing a bulbous bow. Hull form filtering If the hull is designed to operate at speeds substantially lower than hull speed, then it is possible to refine the hull shape along its length to reduce wave resistance at one speed. This is practical only where the block coefficient of the hull is not a significant issue. Semi-displacement and planing hulls Since semi-displacement and planing hulls generate a significant amount of lift in operation, they are capable of breaking the barrier of the wave propagation speed and operating in realms of much lower drag, but to do this they must be capable of first pushing past that speed, which requires significant power. 
This stage is called the transition stage, and at this stage the rate of increase of wave-making resistance is at its highest. Once the hull gets over the hump of the bow wave, the rate of increase of the wave drag starts to reduce significantly. The planing hull rises up, clearing its stern off the water, and its trim becomes high. The underwater part of the planing hull is small during the planing regime. A qualitative interpretation of the wave resistance plot is that a displacement hull resonates with a wave that has a crest near its bow and a trough near its stern, because the water is pushed away at the bow and pulled back at the stern. A planing hull simply pushes down on the water under it, so it resonates with a wave that has a trough under it. Since such a wave has about twice the length, its speed is only √2, or about 1.4, times the hull speed. In practice most planing hulls move much faster than that. At four times hull speed the wavelength is already 16 times longer than the hull. See also Ship resistance and propulsion Hull (watercraft)#Categorisation Hull speed References On the subject of high speed monohulls, Daniel Savitsky, Professor Emeritus, Davidson Laboratory, Stevens Institute of Technology Fluid dynamics Water waves Naval architecture
Wave-making resistance
[ "Physics", "Chemistry", "Engineering" ]
1,708
[ "Naval architecture", "Physical phenomena", "Water waves", "Chemical engineering", "Waves", "Marine engineering", "Piping", "Fluid dynamics" ]
6,980,583
https://en.wikipedia.org/wiki/International%20Commission%20on%20Radiological%20Protection
The International Commission on Radiological Protection (ICRP) is an independent, international, non-governmental organization, with the mission to protect people, animals, and the environment from the harmful effects of ionising radiation. Its recommendations form the basis of radiological protection policy, regulations, guidelines and practice worldwide. The ICRP was effectively founded in 1928 at the second International Congress of Radiology in Stockholm, Sweden but was then called the International X-ray and Radium Protection Committee (IXRPC). In 1950 it was restructured to take account of new uses of radiation outside the medical area and re-named as the ICRP. The ICRP is a sister organisation to the International Commission on Radiation Units and Measurements (ICRU). In general terms ICRU defines the units, and ICRP recommends, develops and maintains the International system of radiological protection which uses these units. Operation The ICRP is a not-for-profit organization registered as a charity in the United Kingdom and has its scientific secretariat in Ottawa, Ontario, Canada. It is an independent, international organization with more than two hundred volunteer members from approximately thirty countries on six continents, who represent the world's leading scientists and policy makers in the field of radiological protection. The International System of Radiological Protection has been developed by ICRP based on the current understanding of the science of radiation exposures and effects, and value judgements. These value judgements take into account societal expectations, ethics, and experience gained in application of the system. The work of the Commission centres on the operation of four main committees: Committee 1 Radiation Effects Committee 1 considers the effects of radiation action from the subcellular to population and ecosystem levels, including the induction of cancer, heritable and other diseases, impairment of tissue/organ function and developmental defects, and assesses implications for protection of people and the environment. Committee 2 Doses from Radiation Exposure Committee 2 develops dosimetric methodology for the assessment of internal and external radiation exposures, including reference biokinetic and dosimetric models and reference data and dose coefficients, for use in the protection of people and the environment. Committee 3 Radiological Protection in Medicine Committee 3 addresses protection of persons and unborn children when ionising radiation is used in medical diagnosis, therapy, and biomedical research, as well as protection in veterinary medicine. Committee 4 Application of the Commission's Recommendations Committee 4 provides advice on the application of the Commission's recommendations for the protection of people and the environment in an integrated manner for all exposure situations. Supporting these committees are Task Groups, established primarily to develop ICRP publications. The ICRP's key output is the production of regular publications disseminating information and recommendations through the "Annals of the ICRP". International Symposia These have become one of the main means of communicating advances by the ICRP in the form of technical presentations and reports from various committees drawn from the international radiological protection community. They have been held every two years since 2011. 1st International ICRP symposium 2011. Key areas of focus: Various. 2nd International ICRP symposium 2013. 
Key areas of focus: science, NORM, emergency preparedness and recovery, medicine, environment. 3rd International ICRP symposium 2015. Key areas of focus: Medicine, science and ethics 4th International ICRP symposium 2017. Key areas of focus: Recovery after nuclear accidents 5th International symposium 2019. Key areas of focus: Mines, Medicine and Space travel. History Early dangers A year after Röntgen's discovery of X-rays in 1895, the American engineer Wolfram Fuchs gave what was probably the first radiation protection advice, but many early users of X-rays were initially unaware of the hazards and protection was rudimentary or non-existent. The dangers of radioactivity and radiation were not immediately recognized. The discovery of X‑rays had led to widespread experimentation by scientists, physicians, and inventors, but many people began recounting stories of burns, hair loss and worse in technical journals as early as 1896. In February 1896 Professor Daniel and Dr. Dudley of Vanderbilt University performed an experiment involving X-raying Dudley's head that resulted in his hair loss. A report by Dr. H.D. Hawks, a graduate of Columbia College, of his suffering severe hand and chest burns in an x-ray demonstration, was the first of many other reports in Electrical Review. Many experimenters including Elihu Thomson at Thomas Edison's lab, William J. Morton, and Nikola Tesla also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects, including ultraviolet rays and ozone were sometimes blamed for the damage. Many physicians claimed that there were no effects from X-ray exposure at all. Emergence of international standards – the ICR Wide acceptance of ionizing radiation hazards was slow to emerge, and it was not until 1925 that the establishment of international radiological protection standards was discussed at the first International Congress of Radiology (ICR). The second ICR was held in Stockholm in 1928 and the ICRU proposed the adoption of the roentgen unit; and the 'International X-ray and Radium Protection Committee' (IXRPC) was formed. Rolf Sievert was named Chairman, and a driving force was George Kaye of the British National Physical Laboratory. The committee met for just a day at each of the ICR meetings in Paris in 1931, Zurich in 1934, and Chicago in 1937. At the 1934 meeting in Zurich, the Commission was faced with undue membership interference. The hosts insisted on having four Swiss participants (out of a total of 11 members), and the German authorities replaced the Jewish German member with another of their choice. In response to this, the Commission decided on new rules in order to establish full control over its future membership. Birth of ICRP After World War II the increased range and quantity of radioactive substances being handled as a result of military and civil nuclear programmes led to large additional groups of occupational workers and the public being potentially exposed to harmful levels of ionising radiation. Against this background, the first post-war ICR convened in London in 1950, but only two IXRPC members were still active from pre-war days; Lauriston Taylor and Rolf Sievert. Taylor was invited to revive and revise the IXRPC, which included renaming it as the International Commission on Radiological Protection (ICRP). 
Sievert remained an active member, Sir Ernest Rock Carling (UK) was appointed as Chairman, and Walter Binks (UK) took over as Scientific Secretary because of Taylor's concurrent involvement with the sister organisation, ICRU. At that meeting, six sub-committees were established: (1) permissible dose for external radiation; (2) permissible dose for internal radiation; (3) protection against X rays generated at potentials up to 2 million volts; (4) protection against X rays above 2 million volts, and beta rays and gamma rays; (5) protection against heavy particles, including neutrons and protons; and (6) disposal of radioactive wastes and handling of radioisotopes. The next meeting was in 1956 in Geneva. This was the first time that a formal meeting of the Commission took place independently of the ICR. At this meeting, ICRP became formally affiliated with the World Health Organization (WHO) as a 'participating non-governmental organisation'. In 1959, a formal relationship was established with the International Atomic Energy Agency (IAEA), and subsequently with UNSCEAR, the International Labour Office (ILO), the Food and Agriculture Organization (FAO), the International Organization for Standardization (ISO), and UNESCO. At the meeting in Stockholm in May 1962, the Commission also decided to reorganise the committee system in order to improve productivity, and four committees were created: C1: Radiation effects; C2: Internal exposure; C3: External exposure; C4: Application of recommendations. After many assessments of committee roles within an environment of increasing workloads and changes in societal emphasis, by 2008 the committee structure had become: Committee 1 - Radiation effects; Committee 2 - Doses from radiation exposure; Committee 3 - Protection in medicine; Committee 4 - Application of the Commission's recommendations; Committee 5 - Protection of the environment. Evolution of recommendations The key output of the ICRP and its historic predecessor has been the issuing of recommendations in the form of reports and publications. The contents are made available for adoption by national regulatory bodies to the extent that they wish. Early recommendations were general guides on exposure and thereby dose limits, and it was not until the nuclear era that a greater degree of sophistication was required. 1951 recommendations In the "1951 Recommendations" the commission recommended a maximum permissible dose of 0.5 roentgen (0.0044 grays) in any 1 week in the case of whole-body exposure to X and gamma radiation at the surface, and 1.5 roentgen (0.013 grays) in any 1 week in the case of exposure of hands and forearms. Maximum permissible body burdens were given for 11 nuclides. At this time it was first stated that the purpose of radiological protection was to avoid deterministic effects from occupational exposures, and the principle of radiological protection was to keep individuals below the relevant thresholds. A first recommendation on restrictions of exposures of members of the general public appeared in the commission's part of the 1954 Recommendations. It was also stated that 'since no radiation level higher than the natural background can be regarded as absolutely "safe", the problem is to choose a practical level that, in the light of present knowledge, involves a negligible risk'. However, the Commission had not rejected the possibility of a threshold for stochastic effects. At this time the rad and rem were introduced for absorbed dose and RBE-weighted dose respectively. 
At its 1956 meeting the concepts of a controlled area and a radiation safety officer were introduced, and the first specific advice was given for pregnant women. "Publication 1" In 1957, there was pressure on ICRP from both the World Health Organisation and UNSCEAR to reveal all of the decisions from its 1956 meeting in Geneva. The final document, the Commission's 1958 Recommendations, was the first ICRP report published by Pergamon Press. The 1958 Recommendations are usually referred to as 'Publication 1'. The significance of stochastic effects began to influence the commission's policy, and a new set of recommendations was published as Publication 9 in 1966. However, during development its editors became concerned about the many different opinions on the risk of stochastic effects. The Commission therefore asked a working group to consider these, and their report, Publication 8 (1966), summarised for the first time for the ICRP the current knowledge about radiation risks, both somatic and genetic. Publication 9 then followed, and substantially changed radiation protection emphasis by moving from deterministic to stochastic effects. Reference man In October 1974, the official definition of Reference man was adopted by the ICRP: “Reference man is defined as being between 20-30 years of age, weighing 70 kg, is 170 cm in height, and lives in a climate with an average temperature of from 10 to 20 degrees C. He is a Caucasian and is a Western European or North American in habitat and custom.” The reference man was created as a standard basis for the estimation of radiation doses. Principles of protection In 1977 Publication 26 set out the new system of dose limitation and introduced the three principles of protection: (1) no practice shall be adopted unless its introduction produces a positive net benefit; (2) all exposures shall be kept as low as reasonably achievable, economic and social factors being taken into account; and (3) the doses to individuals shall not exceed the limits recommended for the appropriate circumstances by the Commission. These principles have since become known as justification, optimisation (as low as reasonably achievable), and the application of dose limits. The optimisation principle was introduced because of the need to find some way of balancing the costs and benefits of the introduction of a radiation source involving ionising radiation or radionuclides. The 1977 Recommendations were very concerned with the ethical basis of how to decide what is reasonably achievable in dose reduction. The principle of justification aims to do more good than harm, and that of optimisation aims to maximise the margin of good over harm for society as a whole. They therefore satisfy the utilitarian ethical principle proposed primarily by Jeremy Bentham and John Stuart Mill. Utilitarians judge actions by their overall consequences, usually by comparing, in monetary terms, the relevant benefits obtained by a particular protective measure with the net cost of introducing that measure. On the other hand, the principle of applying dose limits aims to protect the rights of the individual not to be exposed to an excessive level of harm, even if this could cause great problems for society at large. This principle therefore satisfies the deontological principle of ethics, proposed primarily by Immanuel Kant. Consequently, the concept of the collective dose was introduced to facilitate cost–benefit analysis and to restrict the uncontrolled build-up of exposure to long-lived radionuclides in the environment. 
With the global expansion of nuclear reactors and reprocessing, it was feared that global doses could again reach the levels seen from atmospheric testing of nuclear weapons. So, by 1977, the establishment of dose limits was secondary to the establishment of cost–benefit analysis and use of collective dose. Re-evaluation of doses During the 1980s, there were re-evaluations of the doses received by the survivors of the atomic bombings of Hiroshima and Nagasaki, partly due to revisions in the dosimetry. The risks of exposure were claimed to be higher than those used by ICRP, and pressures began to appear for a reduction in dose limits. By 1989, the commission had itself revised upwards its estimates of the risks of carcinogenesis from exposure to ionising radiation. The following year, it adopted its 1990 Recommendations for a 'system of radiological protection'. The principles of protection recommended by the Commission were still based on the general principles given in Publication 26. However, there were important additions which weakened the link to cost benefit analysis and collective dose, and strengthened the protection of the individual, which reflected changes in societal values: No practice involving exposures to radiation should be adopted unless it produces sufficient benefit to the exposed individuals or to society to offset the radiation detriment it causes. (The justification of a practice) In relation to any particular source within a practice, the magnitude of individual doses, the number of people exposed, and the likelihood of incurring exposures where these are not certain to be received should all be kept as low as reasonably achievable, economic and social factors being taken into account. This procedure should be constrained by restrictions on the doses to individuals (dose constraints), or on the risks to individuals in the case of potential exposures (risk constraints) so as to limit the inequity likely to result from the inherent economic and social judgements. (The optimisation of protection) The exposure of individuals resulting from the combination of all the relevant practices should be subject to dose limits, or to some control of risk in the case of potential exposures. These are aimed at ensuring that no individual is exposed to radiation risks that are judged to be unacceptable from these practices in any normal circumstances. 21st century In the 21st century, the latest overall recommendations on an international system of radiological protection appeared. ICRP Publication 103 (2007), after two phases of international public consultation, has resulted in more continuity than change. Some recommendations remain because they work and are clear, others have been updated because understanding has evolved, some items have been added because there has been a void, and some concepts are better explained because more guidance is needed. Radiation quantities In collaboration with the ICRU, the commission has assisted in defining the use of many of the dose quantities employed in radiological protection. The number of different units that have been used for these quantities over time is indicative of changes of thinking in world metrology, especially the movement from cgs to SI units. Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. 
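For reference, the conversions between the older units named here and their SI equivalents are fixed by definition: 1 curie = 3.7 × 10^10 becquerel, 1 rad = 0.01 gray, and 1 rem = 0.01 sievert. The small Python snippet below only illustrates those conversion factors; the function names are hypothetical and not part of any ICRP material.

```python
# Exact conversion factors between the older radiation units and their SI equivalents.
CURIE_TO_BECQUEREL = 3.7e10   # activity: 1 Ci = 3.7e10 Bq (by definition)
RAD_TO_GRAY = 0.01            # absorbed dose: 1 rad = 0.01 Gy
REM_TO_SIEVERT = 0.01         # dose equivalent: 1 rem = 0.01 Sv

def rem_to_sievert(dose_rem: float) -> float:
    """Convert a dose equivalent from rem to sievert."""
    return dose_rem * REM_TO_SIEVERT

if __name__ == "__main__":
    # e.g. expressing a 5 rem dose equivalent in SI units
    print(f"5 rem = {rem_to_sievert(5):.2f} Sv")   # 0.05 Sv, i.e. 50 mSv
```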
Awards ICRP issues two awards: the Bo Lindell Medal, which is awarded annually, and the Gold Medal for Radiation Protection, which has been issued every four years since 1962. Gold Medal for Radiation Protection The recipients of the Gold Medal for Radiation Protection are listed below: 2020: Dale Preston 2016: Ethel Gilbert 2012: Keith Eckerman 2008: K Sankaranarayanan 2004: Richard Doll 2000: Angelina Guskova 1993: I Shigematsu 1989: 1985: S Takahashi 1981: Edward E. Pochin 1973: Lauriston S. Taylor 1965: William Valentine Mayneord 1962: W Binks & Karl Z. Morgan Bo Lindell Medal The recipients of the Bo Lindell Medal for the Promotion of Radiological Protection are listed below: 2021: Haruyuki Ogino (Japan) 2019: Elizabeth Ainsbury (UK) 2018: Nicole E. Martinez (USA) See also Journal of Radiological Protection (JRP) - The peer-reviewed scientific publication devoted to radiological protection. gray (unit) - Physical dose unit, used for comparison of deterministic health effects Health Physics Society - USA professional body for radiological protection International Radiation Protection Association (IRPA) - The worldwide umbrella body for national radiological protection organisations International Commission on Radiation Units and Measurements - Devoted to the development and maintenance of international measurement standards and techniques National Council on Radiation Protection and Measurements of the United States sievert - Biological dose unit, used for comparison of stochastic health effects Society for Radiological Protection - the IRPA-affiliated national professional radiological protection organisation for the UK William Herbert Rollins - Radiation protection pioneer, and the first to conduct controlled experiments into the hazards of X-rays. References External links Eurados - The European radiation dosimetry group "The confusing world of radiation dosimetry" - M.A. Boyd, U.S. Environmental Protection Agency. An account of chronological differences between USA and ICRP dosimetry systems. Full text of ICRP report 103 (2007) These revised Recommendations for a System of Radiological Protection formally replace the Commission's 1990 recommendations. Organizations established in 1928 International nuclear energy organizations International medical and health organizations Nuclear safety and security Nuclear medicine organizations Radiation protection Standards organizations in Canada
International Commission on Radiological Protection
[ "Engineering" ]
3,769
[ "International nuclear energy organizations", "Nuclear medicine organizations", "Nuclear organizations" ]
6,980,928
https://en.wikipedia.org/wiki/Thermal%20expansion%20valve
A thermal expansion valve or thermostatic expansion valve (often abbreviated as TEV, TXV, or TX valve) is a component in vapor-compression refrigeration and air conditioning systems that controls the amount of refrigerant released into the evaporator and is intended to regulate the superheat of the refrigerant that flows out of the evaporator to a steady value. Although often described as a "thermostatic" valve, an expansion valve is not able to regulate the evaporator's temperature to a precise value. The evaporator's temperature will vary only with the evaporating pressure, which will have to be regulated through other means (such as by adjusting the compressor's capacity). Thermal expansion valves are often referred to generically as "metering devices", although this may also refer to any other device that releases liquid refrigerant into the low-pressure section but does not react to temperature, such as a capillary tube or a pressure-controlled valve. Theory of operation A thermal expansion valve is a key element of a heat pump, the cycle that makes air conditioning, or air cooling, possible. A basic refrigeration cycle consists of four major elements: a compressor, a condenser, a metering device and an evaporator. As a refrigerant passes through a circuit containing these four elements, air conditioning occurs. The cycle starts when refrigerant enters the compressor in a low-pressure, moderate-temperature, gaseous form. The refrigerant is compressed by the compressor to a high-pressure and high-temperature gaseous state. The high-pressure and high-temperature gas then enters the condenser. The condenser cools the high-pressure and high-temperature gas, allowing it to condense to a high-pressure liquid by transferring heat to a lower temperature medium, usually ambient air. In order to produce a cooling effect from the higher-pressure liquid, the flow of refrigerant entering the evaporator is restricted by the expansion valve, reducing the pressure and allowing isenthalpic expansion back into the vapor phase to take place, which absorbs heat and results in cooling. A TXV type expansion device has a sensing bulb that is filled with a liquid whose thermodynamic properties are similar to those of the refrigerant. This bulb is thermally connected to the output of the evaporator so that the temperature of the refrigerant that leaves the evaporator can be sensed. The gas pressure in the sensing bulb provides the force to open the TXV, and as the temperature drops this force will decrease, thereby dynamically adjusting the flow of refrigerant into the evaporator. The superheat is the excess temperature of the vapor above its boiling point at the evaporating pressure. No superheat indicates that the refrigerant is not being fully vaporized within the evaporator, and liquid may end up recirculated to the compressor, which is inefficient and can cause damage. On the other hand, excessive superheat indicates that there is insufficient refrigerant flowing through the evaporator coil, and thus a significant portion toward the end is not providing cooling. Therefore, by regulating the superheat to a small value, typically only a few °C, the heat transfer of the evaporator will be near optimal, without excess liquid refrigerant being returned to the compressor. 
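To make the superheat definition concrete, here is a minimal Python sketch of the quantity a TXV effectively regulates. It takes the saturation temperature at the evaporating pressure as a given input rather than looking it up from real refrigerant property tables, and the simple proportional opening response is only an illustrative stand-in for the actual force balance of bulb pressure against spring and evaporator pressure; all names and numbers are hypothetical, not taken from any manufacturer's data.

```python
def superheat(vapor_temp_c: float, saturation_temp_c: float) -> float:
    """Superheat = temperature of the vapor leaving the evaporator minus the
    refrigerant's boiling (saturation) temperature at the evaporating pressure."""
    return vapor_temp_c - saturation_temp_c

def valve_opening(superheat_c: float, setpoint_c: float = 5.0, gain: float = 0.25) -> float:
    """Toy proportional response: the valve opens further as the sensed superheat
    rises above the setpoint, and closes as it falls below (clamped to 0..1)."""
    opening = 0.5 + gain * (superheat_c - setpoint_c)
    return max(0.0, min(1.0, opening))

if __name__ == "__main__":
    # Hypothetical reading: vapor leaves the evaporator at 12 °C while the
    # refrigerant boils at 5 °C at the current suction pressure.
    sh = superheat(vapor_temp_c=12.0, saturation_temp_c=5.0)
    print(f"Superheat: {sh:.1f} K, valve opening: {valve_opening(sh):.0%}")
```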
In order to provide an appropriate superheat, a spring force is often applied in the direction that would close the valve, meaning that the valve will close when the bulb is at a lower temperature than the refrigerant is evaporating at. Spring-type valves may be fixed or adjustable, although other methods to ensure a superheat also exist, such as the sensing bulb having a different vapor composition to the rest of the system. Some thermal expansion valves are also specifically designed to ensure that a certain minimum flow of refrigerant can always flow through the system, while others can also be designed to control the evaporator's pressure so that it never rises above a maximum value. Description Flow control, or metering, of the refrigerant is accomplished by use of a temperature sensing bulb, filled with a gas or liquid charge similar to the one inside the system, that causes the orifice in the valve to open against the spring pressure in the valve body as the temperature on the bulb increases. As the suction line temperature decreases, so does the pressure in the bulb and therefore on the spring, causing the valve to close. An air conditioning system with a TX valve is often more efficient than designs that do not use one. Also, TX valve air conditioning systems do not require an accumulator (a refrigerant tank placed downstream of the evaporator's outlet), since the valves reduce the liquid refrigerant flow when the evaporator's thermal load decreases, so that all the refrigerant completely evaporates inside the evaporator (in normal operating conditions such as a proper evaporator temperature and airflow). However, a liquid refrigerant receiver tank needs to be placed in the liquid line before the TX valve so that, in low evaporator thermal load conditions, any excess liquid refrigerant can be stored inside it, preventing any liquid from backflowing inside the condenser coil from the liquid line. At heat loads which are very low compared to the valve's power rating, the orifice can become oversized for the heat load, and the valve can begin to repeatedly open and close in an attempt to control the superheat to the set value, making the superheat oscillate. Cross charges are sensing bulb charges composed of a mixture of different refrigerants, or of non-refrigerant gases such as nitrogen, as opposed to a charge composed exclusively of the same refrigerant used in the system, which is known as a parallel charge. A cross charge is set so that the vapor pressure vs temperature curve of the bulb charge "crosses" that of the system's refrigerant at a certain temperature; that is, below a certain refrigerant temperature the vapor pressure of the bulb charge becomes higher than that of the system's refrigerant, forcing the metering pin to stay in an open position. Cross charges therefore help to reduce the superheat hunting phenomenon by preventing the valve orifice from completely closing during system operation. The same result can be attained through different kinds of bleed passages that generate a minimum refrigerant flow at all times. The cost, however, is that a certain flow of refrigerant will not reach the suction line in a fully evaporated state while the heat load is particularly low, and the compressor must be designed to handle it. 
By carefully selecting the amount of a liquid sensing bulb charge, a so-called MOP (maximum operating pressure) effect can also be attained; above a precise refrigerant temperature, the sensing bulb charge will be entirely evaporated, making the valve begin restricting flow irrespective of the sensed superheat, rather than increasing it in order to bring evaporator superheat down to the target value. Therefore, the evaporator pressure will be kept from increasing above the MOP value. This feature helps to keep the compressor's maximum operating torque at a value that is acceptable for the application, such as a small displacement car engine. A low refrigerant charge condition is often accompanied, when the compressor is operational, by a loud whooshing sound from the thermal expansion valve and the evaporator, caused by the lack of a liquid head right before the valve's moving orifice, which results in the orifice trying to meter a vapor or a vapor/liquid mixture instead of a liquid. Types There are two main types of thermal expansion valves: internally or externally equalized. The difference between externally and internally equalized valves is how the evaporator pressure affects the position of the needle. In internally equalized valves, the evaporator pressure against the diaphragm is the pressure at the inlet of the evaporator (typically via an internal connection to the outlet of the valve), whereas in externally equalized valves, the evaporator pressure against the diaphragm is the pressure at the outlet of the evaporator. Externally equalized thermostatic expansion valves compensate for any pressure drop through the evaporator. For internally equalized valves, a pressure drop in the evaporator will have the effect of increasing the superheat. Internally equalized valves can be used on single circuit evaporator coils having a low pressure drop. If a refrigerant distributor is used for multiple parallel evaporators (rather than a valve on each evaporator), then an externally equalized valve must be used. Externally equalized TXVs can be used on all applications; however, an externally equalized TXV cannot be replaced with an internally equalized TXV. For automotive applications, a type of externally equalized thermal expansion valve, known as the block type valve, is often used. In this type, either a sensing bulb is located within the suction line connection within the valve body and is in constant contact with the refrigerant that flows out of the evaporator's outlet, or a heat transfer means is provided so that the refrigerant is able to exchange heat with the sensing charge contained in a chamber located above the diaphragm as it flows to the suction line. Although the bulb/diaphragm type is used in most systems that control the refrigerant superheat, electronic expansion valves are becoming more common in larger systems or systems with multiple evaporators to allow them to be adjusted independently. Although electronic valves can provide a control range and flexibility that bulb/diaphragm types cannot, they add complexity and points of failure to a system, as they require additional temperature and pressure sensors and an electronic control circuit. Most electronic valves use a stepper motor hermetically sealed inside the valve to actuate a needle valve with a screw mechanism; on some units, only the stepper rotor is within the hermetic body and is magnetically driven through the sealed valve body by stator coils on the outside of the device. 
References Further reading How does a TEV work? Valves Cooling technology
Thermal expansion valve
[ "Physics", "Chemistry" ]
2,175
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
6,980,942
https://en.wikipedia.org/wiki/Noxious%20weed
A noxious weed, harmful weed or injurious weed is a weed that has been designated by an agricultural or other governing authority as a plant that is harmful to agricultural or horticultural crops, natural habitats or ecosystems, or humans or livestock. Most noxious weeds have been introduced into an ecosystem by ignorance, mismanagement, or accident. Some noxious weeds are native, though many localities define them as necessarily being non-native. Typically they are plants that grow aggressively, multiply quickly without natural controls (native herbivores, soil chemistry, etc.), and display adverse effects through contact or ingestion. Noxious weeds are a large problem in many parts of the world, greatly affecting areas of agriculture, forest management, nature reserves, parks and other open space. Many noxious weeds have come to new regions and countries through contaminated shipments of feed and crop seeds or were intentionally introduced as ornamental plants for horticultural use. Some "noxious weeds", such as ragwort, produce copious amounts of nectar, valuable for the survival of bees and other pollinators, or offer other advantages such as larval host food and habitat. In the US, wild parsnip (Pastinaca sativa), for instance, provides large tubular stems in which some bee species hibernate, larval food for two different swallowtail butterflies, and other benefits. Types Some noxious weeds are harmful or poisonous to humans, domesticated grazing animals, and wildlife. Open fields and grazing pastures with disturbed soils and open sunlight are often more susceptible. Protecting grazing animals from toxic weeds in their primary feeding areas is therefore important. There are marine, terrestrial, and parasitic noxious weeds. Control Some guidelines to prevent the spread of noxious weeds are: avoid driving through noxious weed-infested areas; avoid transporting or planting seeds and plants that one cannot identify; for noxious weeds in flower or carrying seed, pull the plants out gently and place them in a secure, closable bag; dispose of them, for example by hot composting or contained burning, when this is safe and practical for the specific plant (burning poison ivy can be fatal to humans); and use only certified weed-free seeds for crops or gardens. Maintaining control of noxious weeds is important for the health of habitats, livestock, wildlife, and native plants, and of humans of all ages. How to control noxious weeds depends on the surrounding environment and habitats, the weed species, the availability of equipment, labor, supplies, and financial resources. Laws often require that noxious weed control funding from governmental agencies must be used for eradication, invasion prevention, or projects restoring native habitats and plant communities. Insects and fungi have long been used as biological controls of some noxious weeds, and more recently nematodes have also been used. Eradication According to control experts, there are chemical, physical, and environmental ways of eradicating noxious weeds. Those include pulling the entire weed out of the ground, spraying herbicide if it's a large area, and using machines to turn over the soil. According to farmers, using goats can be a more ecological way of getting rid of noxious weeds than using herbicide. Overplanting a native species is also a long-term solution for eradicating noxious weeds. 
Controversy and biases Agricultural needs, desires, and concerns do not always mesh with those of other areas, such as pollinator nectar provision. Ragwort, for instance, was rated as the top flower meadow nectar source in a UK study, and in the top ten in another. Its early blooming period is also particularly helpful for the establishment of bumblebee colonies. Thistles that are considered noxious weeds in the US and elsewhere, such as Cirsium arvense and Cirsium vulgare, have also rated at or near the top of the charts for nectar production in multiple UK studies, the UK being part of their native range. These thistles also serve as larval host plants for the painted lady butterfly. There can therefore be a conflict between the agricultural point of view and policy and the point of view of conservationists or other groups. By country Australia In Australia, the term "noxious weed" is used by state and territorial governments. Some noxious weeds in Australia are Alligator weeds, Horsetails, and Branched broomrape. The government of Victoria removes all of these plants free of charge. Alligator weeds are banned in all the states and territories of Australia. They can create large mats that can cause considerable blockages of waterways. Horsetails are poisonous to livestock. They are also extremely challenging to eradicate, as they can fragment, and the fragments can grow into new plants, much as succulents do. Branched broomrapes are parasitic noxious weeds. They attach themselves to the roots of other plants and extract water and nutrients. Canada In Canada, constitutional responsibility for the regulation of agriculture and the environment is shared between the federal and provincial governments. The federal government through the Canadian Food Inspection Agency (CFIA) regulates invasive plants under the authority of the Plant Protection Act, the Seeds Act and statutory regulations. Certain plant species have been designated by the CFIA as noxious weeds in the Weed Seeds Order. Each province also produces its own list of prohibited weeds. In Alberta, for example, a new Weed Control Act was proclaimed in 2010 with two weed designations: "prohibited noxious" (46 species) which are banned across Alberta, and "noxious" (29 species) which can be restricted at the discretion of local authorities. New Zealand New Zealand has had a series of Acts of Parliament relating to noxious weeds: the Noxious Weeds Act 1908, the Noxious Weeds Act 1950, and the Noxious Plants Act 1978. The last was repealed by the Biosecurity Act 1993, which used words such as "pest", "organism" and "species", rather than "noxious". Consequently, the term "noxious weed" is no longer used in official publications in New Zealand. According to this Act, control of the majority of problem weeds, now called 'pest plants', is the responsibility of Regional Councils or, in a few cases, unitary authorities. Some common noxious weeds in New Zealand are Broad-Leaved Dock, English Ivy, and Oxalis. These plants may be aesthetically pleasing, but they smother native plants and are hard to eradicate. United Kingdom The Weeds Act 1959 (7 & 8 Eliz. 2. c. 54) covers Great Britain. It is mainly relevant to farmers and other rural settings rather than allotment or garden-scale growers. Five "injurious" weeds are listed. The word "injurious" in this context means harmful to agriculture, not liable to cause injury. All the species listed apart from ragwort are edible and appear in Richard Mabey's book Food for Free. 
They are all native plants. These are: Spear thistle (Cirsium vulgare); Creeping, or field, thistle (Cirsium arvense); Curled dock (Rumex crispus); Broad-leaved dock (Rumex obtusifolius); and Common ragwort (Jacobaea vulgaris). The Department for Environment, Food and Rural Affairs (DEFRA) provides guidance for the removal of these weeds from infested land. Much of this is oriented towards the use of herbicides. The act does not place any automatic legal responsibility on landowners to control the weeds, or make growing them illegal, but they may be ordered to control them. Most common farmland weeds are not "injurious" within the meaning of the Weeds Act, and many such plant species have conservation and environmental value. The various UK government agencies responsible have a duty to try to achieve a reasonable balance among different interests. These include agriculture, countryside conservation and the general public. Section 14 of the Wildlife and Countryside Act 1981 makes it an offence to plant or grow certain specified foreign invasive plants in the wild, listed in schedule 9 of the act, including giant hogweed and Japanese knotweed. Some local authorities have by-laws controlling these plants. There is no statutory requirement for landowners to remove these plants from their property. Northern Ireland is covered by the Noxious Weeds (Northern Ireland) Order 1977 (NISI 1977/52). This mirrors the Great Britain legislation, and covers the same five species, with the addition of wild oat (Avena fatua) and wild oat (Avena ludoviciana). United States The federal government defines noxious weeds under the Federal Noxious Weed Act of 1974. Noxious weeds are also defined by the state governments in the United States. Noxious weeds came to the U.S. by way of colonization. Some wildflowers are lesser-known noxious weeds. A few of them are banned in certain states. For example, the Ox-eye daisy came over to the Americas in colonizers' seed bags and has become the common daisy seen at roadsides. It is prohibited for agriculture in 10 states, more than any other wildflower. See also Caulerpa taxifolia Invasive species International Plant Protection Convention References External links Australia Noxious Weeds List at Weeds Australia New Zealand United States Weeds at the Bureau of Land Management (US) Noxious Weed Program at the US Department of Agriculture - Weeds Agricultural pests Garden pests Habitat
Noxious weed
[ "Biology" ]
1,916
[ "Garden pests", "Pests (organism)", "Weeds", "Agricultural pests" ]
6,980,968
https://en.wikipedia.org/wiki/FastContact
FastContact is an algorithm for the rapid estimation of contact and binding free energies for protein–protein complex structures. It is based on a statistically determined desolvation contact potential and Coulomb electrostatics with a distance-dependent dielectric constant. The application also reports residue contact free energies that rapidly highlight the hotspots of the interaction. The program was written in Fortran 77 by Carlos J. Camacho and Chao Zhang at the Department of Computational Biology, University of Pittsburgh, PA. A web server for running FastContact online or downloading the binary was set up by P. Christoph Champ in July 2005. References External links FastContact binaries - binaries are freely available for download (with documentation). FastContact Server - set up by P. Christoph Champ in July 2005. FastContact Wiki Bioinformatics Fortran software
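The scoring described above combines Coulomb electrostatics with a distance-dependent dielectric and a statistical desolvation contact potential. The Python sketch below illustrates only the general form of such an electrostatic term; it is not FastContact's actual code or parameterisation. The choice ε(r) = 4r is a common convention in the literature but is an assumption here, as are the unit-conversion constant, the function names, and the example charges.

```python
import math

COULOMB_CONSTANT = 332.0636  # kcal·Å/(mol·e²), a standard molecular-mechanics value

def electrostatic_energy(atoms_a, atoms_b, eps_slope=4.0):
    """Pairwise Coulomb energy between two atom sets with a distance-dependent
    dielectric eps(r) = eps_slope * r, i.e. E = sum q_i q_j / (eps_slope * r_ij^2).

    Each atom is (x, y, z, charge); coordinates in Å, charges in elementary units.
    """
    energy = 0.0
    for (xa, ya, za, qa) in atoms_a:
        for (xb, yb, zb, qb) in atoms_b:
            r2 = (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
            energy += COULOMB_CONSTANT * qa * qb / (eps_slope * r2)
    return energy  # kcal/mol

if __name__ == "__main__":
    # Two hypothetical opposite partial charges 5 Å apart
    receptor = [(0.0, 0.0, 0.0, +0.5)]
    ligand = [(5.0, 0.0, 0.0, -0.5)]
    print(f"{electrostatic_energy(receptor, ligand):.2f} kcal/mol")
```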
FastContact
[ "Chemistry", "Engineering", "Biology" ]
185
[ "Biological engineering", "Bioinformatics stubs", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics" ]
6,981,108
https://en.wikipedia.org/wiki/Design%20and%20Art%20Direction
Design and Art Direction (D&AD), formerly known as British Design and Art Direction, is a British educational organisation that was created in 1962 to promote excellence in design and advertising. Its main offices are in Spitalfields in London. It is most famous for its annual awards, the D&AD Pencils. The highest award given by D&AD, the Black Pencil, is not necessarily awarded every year. History Origins (1962–1977) D&AD was founded in 1962 by a group of London-based designers and art directors including David Bailey, Terence Donovan, Alan Fletcher, and Colin Forbes (who designed the original D&AD logo). A panel of 25 judged the 2500 entries to the first awards in 1963. They awarded one Black Pencil (to Geoffrey Jones Films) and 16 Yellow Pencils. Early winners received an ebony pencil box designed by Marcello Minale, one of the founding partners of Minale Tattersfield, which contained a pencil with silver lettering. In 1966 it was replaced by a more durable award. With its education programmes in their infancy, D&AD launched graphic workshops in association with the Royal College of Art in the mid-1960s. They ran until the mid-1970s. Designer Michael Wolff became the first elected president of D&AD in 1970. Six years later, then-president Alan Parker gave the first D&AD President’s Award for outstanding contribution to creativity to Colin Millward of Collett Dickenson Pearce. Introduction of Student Awards (1978–1990) D&AD education programmes continued to grow in 1978 when Dave Trott set up the D&AD Advertising Workshops. In 1979, initiated by Sir John Hegarty of Bartle Bogle Hegarty, the Student Awards were launched. Bridging the gap between college and work, the awards present students with real-world briefs to tackle. The awards had already started to recognise a wider range of categories through the 1960s and 1970s, and photography, retail design (now environmental design), music videos, and product design became part of the awards in the 1980s. The awards also opened up to international entries for the first time in 1988. Controversy surrounded a decision to hold separate advertising and design awards in 1986 and 1987; the separation, made for practical reasons based on the chosen venue, was seen by members as a split between industries. Afterward, the ceremony did come back under one roof, where it has remained. Move to Graphite Square, Vauxhall (1990–2012) D&AD moved to Graphite Square in Vauxhall in the 1990s. The first Student Expo (now New Blood) and the University Network, the D&AD membership programme for university and college courses, launched in 1993. The first session of "Xchange" took place in 1996. Described as a ‘summer school’ for college lecturers and creative practitioners, it updated participants on the latest industry trends. D&AD launched its website in 1996 and introduced its first digital categories to the awards in 1997. D&AD celebrated its 40th birthday in 2002 with Rewind, a retrospective exhibition and book at the Victoria & Albert Museum covering some of the most iconic work since the 1960s. A new benchmark was set at the turn of the century when a double Black Pencil was awarded to AMV.BBDO's ‘Surfer’ for Guinness for its visuals. This was matched five years later by ‘Grrr’, Wieden + Kennedy London's work for Honda UK. In 2006 another milestone was set as leoburnett.com won the first digital Black Pencil. Developments in the industry meant that two new categories were added in 2008: broadcast innovations and mobile marketing. That year, Apple Inc. 
won a Black Pencil for the iMac and the first-generation iPhone. Design Workshops were relaunched in 2006, and D&AD North, its first regional network, was launched in Manchester the same year. The Student Awards have become an increasingly international event. Entries in 2007 came from colleges in more than 40 countries. In 2007, Italian design group Fabrica became the first to design the Annual outside the UK, and the showreel moved online that same year. 50th year and beyond (2012–present) In 2012, D&AD moved to a location on Hanbury Street. It celebrated its 50th anniversary in 2012 by honouring the most successful award-winners in its history with a special edition Taschen D&AD Annual featuring 50 different covers. In 2017, D&AD moved to a new office and event space on Cheshire Street, London. In October 2024, D&AD appointed its first US-based President, Kwame Taylor-Hayford. D&AD Pencil Awards The following are the D&AD Pencil award levels: D&AD Wood Pencil: For best in advertising and design from the year D&AD Graphite Pencil: Awarded to stand-out work, well executed with an original idea at its core D&AD Yellow Pencil: Awarded only to the most outstanding work that achieves true creative excellence D&AD Black Pencil: For work that is ground-breaking in its field; not always awarded D&AD White Pencil: For exceptional and game-changing projects that have resulted in significant impact D&AD presidents Each year D&AD elects a president from the creative community. They are always D&AD Award winners. See also History of advertising in Britain References Further reading Rewind: 40 years of Design and Advertising by Jeremy Myerson and Graham Vickers; Publisher: Phaidon Press. The Copy Book, 1995 / 2011, Publisher: Taschen. D&AD 50 by D&AD; Publisher: Taschen. External links D&AD website D&AD Student Awards website Organizations established in 1962 1962 establishments in England Organisations based in the London Borough of Tower Hamlets Educational charities based in the United Kingdom Design awards
Design and Art Direction
[ "Engineering" ]
1,176
[ "Design", "Design awards" ]
6,981,275
https://en.wikipedia.org/wiki/Gavi%20Gangadhareshwara%20Temple
Gavi Gangadhareshwara Temple, or Sri Gangaadhareshwara, also Gavipuram Cave Temple, an example of Indian rock-cut architecture, is located in Bengaluru in the state of Karnataka in India. The temple is famous for its mysterious stone discs in the forecourt and the exact planning allowing the sun to shine on the shrine during certain times of the year. It was built in the 16th century by Kempe Gowda I, the founder of the city. Temple history This cave temple is dedicated to Shiva. It is believed to have been built by Gautama Maharishi and Bharadwaja Muni in the Vedic period. It was later renovated in the 16th century CE by Kempe Gowda, the founder of Bengaluru. One of the oldest temples in Bengaluru, Gavi Gangadhareshwara temple was built by Kempe Gowda in gratitude after he was released from a five-year imprisonment by Rama Raya. The temple is an architectural marvel that attracts the faithful in large numbers. Temple architecture Built in a natural cave in Gavipuram, the temple is dedicated to Lord Shiva and cut into monolithic stone. The courtyard of the temple contains several monolithic sculptures. The main attractions of Gavi Gangadhareshvara temple are two granite pillars that support the giant discs of the sun and moon, and two pillars with several carvings of Nandi in a sitting posture at the top. The temple is also known for its four monolithic pillars, representing the Damaru, the Trishul and two large circular discs on the patio. Two paintings dated 1 May 1792 CE by the brothers Thomas and William Daniell show that the temple has since gone through some construction work, with new walls and enclosures. Deities inside the Temple The temple complex has numerous shrines for various deities in addition to the main deity Gavi Gangadhareswara. Special aspects of the Temple Curative effects The idol of Agnimurthi inside the temple has two heads, seven hands and three legs. It is believed that worship of the deity would cure defects of the eye. Illumination of sanctum by the Sun On the occasion of Makar Sankranti, the temple witnesses a unique phenomenon in the evening when sunlight passes through an arc between the horns of Nandi and falls directly on the linga inside the cave, illuminating the interior idol for an hour. Lakhs of devotees visit this cave temple in mid-January every year on Makar Sankranti day. Comparison of contemporary structures with earlier drawings by Thomas Daniell and William Daniell shows that the temple earlier had fewer structures and that the Sun illuminated the shrine at both the summer and winter solstices. At present, the Sun illuminates the Shivalinga twice a year: from 13 to 16 January in the late afternoons and from 26 November to 2 December. Tunnel from temple People believe that there is a tunnel which may lead to Kashi. It is also believed that two men named Nishant and Prem went into the tunnel and never returned. Protected temple The temple shrine is a protected monument under the Karnataka Ancient and Historical Monuments, and Archaeological Sites and Remains Act 1961. Gallery Vintage Paintings The temple saw numerous colonial artists painting different scenes over the years. Nearby holy places Gosaayi Math Samadhi of yogi Bjt Narayan Maharaj - Located just behind the temple. Sri Bande Mahakali Temple See also Notes Further reading External links Gavi Gangadhareshwara - BangaloreTourism.org Hindu cave temples in Karnataka Hindu temples in Bengaluru Shiva temples in Karnataka Archaeoastronomy
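The twice-yearly illumination reported above can be sanity-checked with a rough calculation: if the mid-January and late-November/early-December windows trace the same beam path through the arc of Nandi's horns, the Sun's declination in both windows should be roughly equal, since the two windows sit symmetrically about the winter solstice. The sketch below is purely illustrative; the date windows come from the text, while the approximate declination formula and all function names are assumptions, not part of any survey of the temple.

```python
import math

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination (degrees) for a given day of the year.

    Uses the common cosine approximation delta = -23.44 * cos(360/365 * (N + 10)),
    accurate to within about half a degree, which is enough for this check.
    """
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

# Date windows reported in the text (day-of-year numbers for a non-leap year).
january_window = range(13, 17)      # 13-16 January  -> days 13..16
november_window = range(330, 337)   # 26 Nov - 2 Dec -> days 330..336

for label, window in (("Jan 13-16", january_window), ("Nov 26-Dec 2", november_window)):
    decs = [solar_declination(d) for d in window]
    print(f"{label}: declination {min(decs):.1f} to {max(decs):.1f} degrees")

# Both windows come out near -21 to -22 degrees, i.e. the Sun follows almost the
# same arc across the sky on either side of the winter solstice, which is
# consistent with a single fixed alignment lighting the shrine twice a year.
```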
Gavi Gangadhareshwara Temple
[ "Astronomy" ]
728
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
6,981,516
https://en.wikipedia.org/wiki/Entrainment%20%28engineering%29
In engineering, entrainment is the entrapment of one substance by another substance. For example: The entrapment of liquid droplets or solid particulates in a flowing gas, as with smoke. The entrapment of gas bubbles or solid particulates in a flowing liquid, as with aeration. Given two mutually insoluble liquids, the emulsion of droplets of one liquid into the other liquid, as with margarine. Given two gases, the entrapment of one gas into the other gas. "Air entrainment" – The intentional entrapment of air bubbles into concrete. Entrainment defect in metallurgy, as a result of folded pockets of oxide inside the melt. See also Souders–Brown equation References Chemical engineering
Entrainment (engineering)
[ "Chemistry", "Engineering" ]
160
[ "Chemical engineering", "nan" ]
6,981,653
https://en.wikipedia.org/wiki/Endangered%20species%20recovery%20plan
An endangered species recovery plan, also known as a species recovery plan, species action plan, species conservation action, or simply recovery plan, is a document describing the current status, threats and intended methods for increasing rare and endangered species population sizes. Recovery plans act as a foundation from which to build a conservation effort to preserve animals which are under threat of extinction. More than 320 species have died out and the world is continuing a rate of 1 species becoming extinct every two years. Climate change is also linked to several issues relating to extinct species and animals' quality of life. History The United States Congress said in 1973 that endangered species "are of aesthetic, ecological, educational, historical, recreational, and scientific value to the Nation and its people." They therefore set laws to protect endangered species. Section 4(f) of the United States Endangered Species Act from 1973 directs the Secretary of the Interior and the Secretary of Commerce to develop and implement recovery plans to promote the conservation of endangered and threatened species. The Species Survival Commission's Specialist Groups of the International Union for Conservation of Nature (IUCN) has created Species Action Plans since at least the mid-1980s, which are used to outline the conservation strategies of species, normally between set dates. In June 2021, the IUCN produced their Global Species Action Plan (GSAP) Briefing Paper, to prepare for the introduction of the GSAP at the IUCN World Conservation Congress in September 2021. This plan "brings together an outline of the species conservation actions required to implement the Post-2020 Global Biodiversity Framework, with supporting tools and guidelines", and aims to reach targets set for 2030. Aims and functions Recovery plans set out the research and management actions necessary to stop the decline of, and support the recovery of, listed threatened species or threatened ecosystems. The aim of the plan is to maximise the long-term survival in the wild of a threatened species or ecosystem. Methods Either a single species or an area, habitat or ecosystem can be targeted by the recovery plan. One method of conserving a species is to conserve the habitat that the species is found in. In this process, there is no target species for conservation, but rather the habitat as a whole is protected and managed, often with a view to returning the habitat to a more natural state. In theory, this method of conservation can be beneficial because it allows for the entire ecosystem and the many species within to benefit from conservation, rather than just the single target species. The IUCN stated in 2016 that there is evidence that area-based approaches do not have enough focus on individual species to protect them sufficiently. By country or region Australia In Australia, the Minister for the Environment may make or adopt and implement recovery plans for threatened fauna, flora and ecosystems listed under the Commonwealth Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act), after consultation with the relevant minister in each state, the Threatened Species Scientific Committee, and members of the public. "Recovery plans should state what must be done to protect and restore important populations of threatened species and habitat, as well as how to manage and reduce threatening processes. 
Recovery plans achieve this aim by providing a planned and logical framework for key interest groups and responsible government agencies to coordinate their work to improve the plight of threatened species and/or ecological communities." Europe Since 2008, the European Commission has supported the development of Species Action Plans for selected species. The documents "are intended to be used as a tool for identifying and prioritising measures to restore the populations of these species across their range within the EU. They provide information about the status, ecology, threats and current conservation measures for each species and list the key actions that are required to improve their conservation status in Europe. Each Plan is the result of an extensive process of consultation with individual experts in Europe". United States In the US, the Endangered Species Act of 1973 requires that all species considered endangered must have a plan implemented for their recovery. The Fish and Wildlife Service and the National Oceanic and Atmospheric Administration (NOAA) National Marine Fisheries Service are responsible for administering the act. In 1983, a recovery plan for Aconitum noveboracense was approved, became the first plant taxon to have an implemented recovery plan. The recovery plan is a document which specifies what research and management actions are necessary to support recovery, but does not itself commit manpower or funds. Recovery plans are used in setting funding priorities and provide direction to local, regional, and state planning efforts. Recovery is when the threats to species survival are neutralized and the species will be able to survive in the wild. In the US, a recovery plan must contain at least: A description of what is needed to return the species to a healthy state; Criteria for what this healthy state would be, so that the species can be removed from the endangered list when it is achieved; and Estimates of how long the recovery will take and how much it will cost. Optionally, it may contain the following sections: Description of the species, its taxonomy, population structure and life history, including the distribution, food sources, reproduction and abundance; Threats - the main reasons why the species is now at risk of extinction; and Recovery strategy - details of how the species can be returned to a healthy state, including the goals, timeline, methods and criteria for delisting. Implementation Adaptive management When recovery plans are carried out well, they do not simply act as stop gaps to prevent extinction, but can restore species to a state of health so they are self-sustaining. There is evidence to suggest that the best plans are adaptive and dynamic, responding to changing conditions. However, adaptive management requires the system to be constantly monitored so that changes are identified. Surprisingly this is frequently not done, even for species that have already been red listed. The species must be monitored throughout the recovery period (and beyond) to ensure that the plan is working as intended. The framework for this monitoring should be planned before the start of the implementation, and the details included in the recovery plan. Information on how and when the data will be collected should be supplied. Endangered species definitions IUCN The IUCN has categories that it uses to classify species, which are widely used in conservation. 
These are: Extinct (EX) – there are no individuals remaining of that species at all Extinct in the wild (EW) – there are no individuals remaining of that species in the wild at all Critically Endangered (CR) – there is a very high risk that the species will soon go extinct in the wild, for example because there is only a very small population remaining Endangered (EN) – there is a high risk of the species soon becoming extinct in the wild Vulnerable (VU) – there is a high risk that the species will soon become endangered Near threatened (NT) – there is a risk that the species will become threatened in the near future Least concern (LC) – there is a low risk that the species will become threatened. This category is used for "widespread and abundant taxa" Data Deficient (DD) – there is not enough data on the species to be able to make a reliable assessment on the status of the species Not evaluated (NE) – the species has not yet been evaluated US The U.S. Fish and Wildlife Service has 17 categories of species status. These categories are used in the documents produced for the U.S. Endangered species act. The categories include: Endangered (E) for species "in danger of extinction throughout all or a significant portion of its range" Threatened (T) for species "likely to become endangered within the foreseeable future throughout all or a significant portion of its range" Candidate (C) for species currently under consideration Species endangered due to "similarity of appearance" (SAE) Species of concern (SC) for species that are considered "important to monitor" but have not been categorized as E, T or C Delisted species removed from the list due to species recovery or extinction See also Biodiversity Holocene extinction In situ conservation IUCN Red List National Oceanic and Atmospheric Administration References Further reading • Alagona, Peter S. 2020. After the Grizzly: Endangered Species and the Politics of Place in California. University of California Press, ISBN 9780520355545 • Greenwald, N., Ando, A., Butchart, S. et al. Conservation: The Endangered Species Act at 40. Nature 504, 369–370 (2013). https://doi.org/10.1038/504369a • Martin, Laura J. 2022. Wild by Design: The Rise of Ecological Restoration. Harvard University Press, ISBN 9780674979420 Example recovery plans Recovery Plan for the North Pacific Right Whale (Eubalaena japonica) (2013) Bonobo (Pan paniscus) Conservation Strategy 2012–2022 (IUCN) recov Ecological restoration Nature conservation in the United States United States Environmental Protection Agency
Endangered species recovery plan
[ "Chemistry", "Engineering", "Biology" ]
1,798
[ "Biota by conservation status", "Ecological restoration", "Endangered species", "Environmental engineering" ]
6,981,968
https://en.wikipedia.org/wiki/Manx%20Spirit
ManX Spirit is a clear spirit, 40% alcohol by volume, distilled by Kella Distillers Ltd in a small distillery in Sulby, Isle of Man. It is produced by redistillation of existing Scottish whiskies, resulting in a clear and colourless product; as of 2012 it was the only distilled spirit produced on the Isle of Man. As of 1997, the product sold 50,000 bottles per year, mostly on the Isle of Man and to the Far East. In 1997, a United Kingdom High Court case, brought by United Distillers and Allied Domecq, concluded that despite being based on whisky and tasting "like a good whisky", the redistillation process and lack of colour meant that the drink could not legally be sold in the United Kingdom while labelled as "whisky". ManX Spirit won two stars in the 2017 Great Taste Awards. It also won a silver medal in the 2019 International Wine and Spirits Competition. References External links Manx cuisine Distilled drinks
Manx Spirit
[ "Chemistry" ]
215
[ "Distillation", "Distilled drinks" ]
6,982,179
https://en.wikipedia.org/wiki/Polymer%20Interdisciplinary%20Research%20Centre
The Interdisciplinary Research Centre in Polymer Science and Technology is a consortium of research groups, formed in 1989 from the Universities of Durham, Leeds and Bradford, all of which are involved in research in polymer science and technology. The University of Sheffield joined in 2004. The Polymer IRC has complementary expertise in polymer chemistry, physics and processing, with research programmes covering a wide range of multi-disciplinary polymer science and technology. Research programmes in the Polymer IRC are funded by grants from government bodies, in particular the Engineering and Physical Sciences Research Council, and from industry. Since 2011 the Polymer IRC has been centred at the extensive polymer engineering laboratories at the University of Bradford, with further interdisciplinary research in polymers with pharmaceuticals and, since 2015, materials chemistry. From 2009, a substantial Science Bridges China programme in Advanced Materials for Healthcare began with leading Chinese universities, which has led to three joint international research laboratories and many early-career researcher exchanges. References Chemical research institutes
Polymer Interdisciplinary Research Centre
[ "Chemistry" ]
192
[ "Chemical research institutes", "Chemistry organization stubs" ]
6,982,305
https://en.wikipedia.org/wiki/Switchover
Switchover is the manual switch from one system to a redundant or standby computer server, system, or network upon the failure or abnormal termination of the previously active server, system, or network, or to perform system maintenance, such as installing patches, and upgrading software or hardware. Automatic switchover of a redundant system on an error condition, without human intervention, is called failover. Manual switchover on error would be used if automatic failover is not available, possibly because the overall system is too complex. See also Safety engineering Data integrity Fault-tolerance High-availability cluster (HA clusters) Business continuity planning References Engineering failures Fault-tolerant computer systems
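As a rough illustration of the distinction drawn above, the sketch below separates a manual switchover entry point (used by an operator, for example before maintenance) from an automatic failover triggered by a failed health check. All names (nodes, check_health, switchover, failover_loop) are hypothetical and do not correspond to any particular clustering product.

```python
import time

# Hypothetical two-node setup; in a real system these would be servers or services.
nodes = {"primary": {"healthy": True}, "standby": {"healthy": True}}
active = "primary"

def check_health(name: str) -> bool:
    """Stand-in for a real health probe (heartbeat, ping, service query)."""
    return nodes[name]["healthy"]

def switchover(target: str) -> None:
    """Manual switchover: an operator moves service to the standby node,
    e.g. before installing patches or upgrading hardware."""
    global active
    print(f"switchover: operator moved active role from {active} to {target}")
    active = target

def failover_loop(poll_seconds: float = 1.0, max_polls: int = 3) -> None:
    """Automatic failover: promote the standby without human intervention
    as soon as the active node fails its health check."""
    global active
    for _ in range(max_polls):
        if not check_health(active):
            standby = "standby" if active == "primary" else "primary"
            print(f"failover: {active} unhealthy, promoting {standby}")
            active = standby
            return
        time.sleep(poll_seconds)

# Planned maintenance uses the manual path...
switchover("standby")
# ...whereas an unexpected fault on the active node triggers the automatic path.
nodes["standby"]["healthy"] = False
failover_loop(poll_seconds=0.0)
```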
Switchover
[ "Technology", "Engineering" ]
132
[ "Systems engineering", "Reliability engineering", "Technological failures", "Computer systems", "Engineering failures", "Fault-tolerant computer systems", "Civil engineering" ]
6,982,399
https://en.wikipedia.org/wiki/Bovine%20viral%20diarrhea
Bovine viral diarrhea (BVD), bovine viral diarrhoea (UK English) or mucosal disease, previously referred to as bovine virus diarrhea (BVD), is an economically significant disease of cattle that is found in the majority of countries throughout the world. Worldwide reviews of the economically assessed production losses and intervention programs (e.g. eradication programs, vaccination strategies and biosecurity measures) incurred by BVD infection have been published. The causative agent, bovine viral diarrhea virus (BVDV), is a member of the genus Pestivirus of the family Flaviviridae. BVD infection results in a wide variety of clinical signs, due to its immunosuppressive effects, as well as having a direct effect on respiratory disease and fertility. In addition, BVD infection of a susceptible dam during a certain period of gestation can result in the production of a persistently infected (PI) fetus. PI animals recognise intra-cellular BVD viral particles as ‘self’ and shed virus in large quantities throughout life; they represent the cornerstone of the success of BVD as a disease. Currently, it was shown in a worldwide review study that the PI prevalence at animal level ranged from low (≤0.8% Europe, North America, Australia), medium (>0.8% to 1.6% East Asia) to high (>1.6% West Asia). Countries that had failed to implement any BVDV control and/or eradication programmes (including vaccination) had the highest PI prevalence. Virus classification and structure BVDVs are members of the genus Pestivirus, belonging to the family Flaviviridae. Other members of this genus cause Border disease (sheep) and classical swine fever (pigs) which cause significant financial loss to the livestock industry. Pestiviruses are small, spherical, single-stranded, enveloped RNA viruses of 40 to 60 nm in diameter. The genome consists of a single, linear, positive-sense, single-stranded RNA molecule of approximately 12.3 kb. RNA synthesis is catalyzed by the BVDV RNA-dependent RNA polymerase (RdRp). This RdRp can undergo template strand switching allowing RNA-RNA copy choice recombination during elongative RNA synthesis. Two BVDV genotypes are recognised, based on the nucleotide sequence of the 5’untranslated (UTR) region; BVDV-1 and BVDV-2. BVDV-1 isolates have been grouped into 16 subtypes (a –p) and BVDV-2 has currently been grouped into 3 subtypes (a – c). BVDV strains can be further divided into distinct biotypes (cytopathic or non-cytopathic) according to their effects on tissue cell culture; cytopathic (cp) biotypes, formed via mutation of non-cytopathic (ncp) biotypes, induce apoptosis in cultured cells. Ncp viruses can induce persistent infection in cells and have an intact NS2/3 protein. In cp viruses the NS2/3 protein is either cleaved to NS2 and NS3 or there is a duplication of viral RNA containing an additional NS3 region. The majority of BVDV infections in the field are caused by the ncp biotype. Epidemiology BVD is considered one of the most significant infectious diseases in the livestock industry worldwide due to its high prevalence, persistence and clinical consequences. In Europe the prevalence of antibody positive animals in countries without systematic BVD control is between 60 and 80%. Prevalence has been determined in individual countries and tends to be positively associated with stocking density of cattle. BVDV-1 strains are predominant in most parts of the world, whereas BVDV-2 represents 50% of cases in North America. 
In Europe, BVDV-2 was first isolated in the UK in 2000 and currently represents up to 11% of BVD cases in Europe. Transmission of BVDV occurs both horizontally and vertically with both persistently and transiently infected animals excreting infectious virus. Virus is transmitted via direct contact, bodily secretions and contaminated fomites, with the virus being able to persist in the environment for more than two weeks. Persistently infected animals are the most important source of the virus, continuously excreting a viral load one thousand times that shed by acutely infected animals. Pathogenesis Acute, transient infection Following viral entry and contact with the mucosal lining of the mouth or nose, replication occurs in epithelial cells. BVDV replication has a predilection for the palatine tonsils, lymphoid tissues and epithelium of the oropharynx. Phagocytes take up BVDV or virus-infected cells and transport them to peripheral lymphoid tissues; the virus can also spread systemically through the bloodstream. Viraemia occurs 2–4 days after exposure and virus isolation from serum or leukocytes is generally possible between 3–10 days post infection. During systemic spread the virus is able to gain entry into most tissues with a preference for lymphoid tissues. Neutralising antibodies can be detected from 10 to 14 days post infection with titres continuing to increase slowly for 8–10 weeks. After 2–3 weeks, antibodies effectively neutralise viral particles, promote clearance of virus and prevent seeding of target organs. Intrauterine infections Fetal infection is of most consequence as this can result in the birth of a persistently infected neonate. The effects of fetal infection with BVDV are dependent upon the stage of gestation at which the dam suffers acute infection. BVDV infection of the dam prior to conception, and during the first 18 days of gestation, results in delayed conception and an increased calving to conception interval. Once the embryo is attached, infection from days 29–41 can result in embryonic infection and resultant embryonic death. Infection of the dam from approximately day 30 of gestation until day 120 can result in immunotolerance and the birth of calves persistently infected with the virus. BVDV infection between 80 and 150 days of gestation may be teratogenic, with the type of birth defect dependent upon the stage of fetal development at infection. Abortion may occur at any time during gestation. Infection after approximately day 120 can result in the birth of a normal fetus which is BVD antigen-negative and BVD antibody-positive. This occurs because the fetal immune system has developed, by this stage of gestation, and has the ability to recognise and fight off the invading virus, producing anti-BVD antibodies. Chronic infections BVD virus can be maintained as a chronic infection within some immunoprivileged sites following transient infection. These sites include ovarian follicles, testicular tissues, central nervous system and white blood cells. Cattle with chronic infections elicit a significant immune response, exhibited by extremely high antibody titres. Clinical signs BVDV infection has a wide manifestation of clinical signs including fertility issues, milk drop, pyrexia, diarrhea, and fetal infection. Occasionally, a severe acute form of BVD may occur. These outbreaks are characterized by thrombocytopenia with high morbidity and mortality. 
However, clinical signs are frequently mild and infection insidious, recognized only by BVDV's immunosuppressive effects perpetuating other circulating infectious diseases (particularly scours and pneumonias). PI animals Persistently infected animals did not have a competent immune system at the time of BVDV transplacental infection. The virus, therefore, entered the fetal cells and, during immune system development, was accepted as self. In PIs the virus remains present in a large number of the animal's body cells throughout its life and is continuously shed. PIs are often ill-thrifty and smaller than their peers, however, they can appear normal. PIs are more susceptible to disease, with only 20% of PIs surviving to two years of age. If a PI dam is able to reproduce they always give birth to PI calves. Mucosal disease The PI cattle that do survive ill-thrift are susceptible to mucosal disease. Mucosal disease only develops in PI animals and is invariably fatal. Disease results when a PI animal is superinfected with a cytopathic biotype arising from mutation of the non-cytopathic strain of BVDV already circulating in that animal. The cp BVDV spreads to the gastro-intestinal epithelium, and necrosis of keratinocytes results in erosion and ulceration. Fluid leaks from the epithelial surface of the gastro-intestinal tract causing diarrhoea and dehydration. In addition, bacterial infection of the damaged epithelium results in secondary septicaemia. Death occurs in the ensuing days or weeks. Diagnosis Various diagnostic tests are available for the detection of either active infection or evidence of historical infection. The method of diagnosis used also depends upon whether the vet is investigating at an individual or a herd level. Virus or antigen detection Antigen ELISA and rtPCR are currently the most frequently performed tests to detect virus or viral antigen. Individual testing of ear tissue tag samples or serum samples is performed. It is vital that repeat testing is performed on positive samples to distinguish between acute, transiently infected cattle and PIs. A second positive result, acquired at least three weeks after the primary result, indicates a PI animal. rtPCR can also be used on bulk tank milk (BTM) samples to detect any PI cows contributing to the tank. It is reported that the maximum number of contributing cows from which a PI can be detected is 300. BVD antibody detection Antibody (Ig) ELISAs are used to detect historical BVDV infection; these tests have been validated in serum, milk and bulk milk samples. Ig ELISAs do not diagnose active infection but detect the presence of antibodies produced by the animal in response to viral infection. Vaccination also induces an antibody response, which can result in false positive results, therefore it is important to know the vaccination status of the herd or individual when interpreting results. A standard test to assess whether virus has been circulating recently is to perform an Ig ELISA on blood from 5–10 young stock that have not been vaccinated, aged between 9 and 18 months. A positive result indicates exposure to BVDV, but also that any positive animals are very unlikely to be PI animals themselves. A positive result in a pregnant female indicates that she has previously been either vaccinated or infected with BVDV and could possibly be carrying a PI fetus, so antigen testing of the newborn is vital to rule this out. 
A negative antibody result, at the discretion of the responsible veterinarian, may require further confirmation that the animal is not in fact a PI. At a herd level, a positive Ig result suggests that BVD virus has been circulating or the herd is vaccinated. Negative results suggest that a PI is unlikely however this naïve herd is in danger of severe consequences should an infected animal be introduced. Antibodies from wild infection or vaccination persist for several years therefore Ig ELISA testing is more valuable when used as a surveillance tool in seronegative herds. Eradication and control The mainstay of eradication is the identification and removal of persistently infected animals. Re-infection is then prevented by vaccination and high levels of biosecurity, supported by continuing surveillance. PIs act as viral reservoirs and are the principal source of viral infection but transiently infected animals and contaminated fomites also play a significant role in transmission. Leading the way in BVD eradication, almost 20 years ago, were the Scandinavian countries. Despite different conditions at the start of the projects in terms of legal support, and regardless of initial prevalence of herds with PI animals, it took all countries approximately 10 years to reach their final stages. Once proven that BVD eradication could be achieved in a cost efficient way, a number of regional programmes followed in Europe, some of which have developed into national schemes. Vaccination is an essential part of both control and eradication. While BVD virus is still circulating within the national herd, breeding cattle are at risk of producing PI neonates and the economic consequences of BVD are still relevant. Once eradication has been achieved, unvaccinated animals will represent a naïve and susceptible herd. Infection from imported animals or contaminated fomites brought into the farm, or via transiently infected in-contacts will have devastating consequences. Vaccination Modern vaccination programmes aim not only to provide a high level of protection from clinical disease for the dam, but, crucially, to protect against viraemia and prevent the production of PIs. While the immune mechanisms involved are the same, the level of immune protection required for foetal protection is much higher than for prevention of clinical disease. While challenge studies indicate that killed, as well as live, vaccines prevent foetal infection under experimental conditions, the efficacy of vaccines under field conditions has been questioned. The birth of PI calves into vaccinated herds suggests that killed vaccines do not stand up to the challenge presented by the viral load excreted by a PI in the field. See also Animal viruses References Bovine Viral Diarrhoea Virus, expert reviewed and published by Wikivet at http://en.wikivet.net/Bovine_Viral_Diarrhoea_Virus, accessed 21/07/2011 External links New York State Cattle Health Assurance Program BVD Module Description of the entity on the Merck Veterinary Manual Animal viruses Bovine Viral Diarrhea Resource Page Specialist BVD site, Royal Veterinary College, London Diarrhea Bovine diseases Animal viral diseases Unaccepted virus taxa
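The repeat-testing rule described above (a second antigen-positive result at least three weeks after the first indicates a PI animal, while a subsequent negative suggests a transient infection) can be written out as a small decision helper. This is only a sketch of the rule as stated in the text; field protocols vary, and the function name and threshold constant are illustrative assumptions.

```python
from datetime import date

RETEST_INTERVAL_DAYS = 21  # "at least three weeks" between the two antigen tests

def classify_bvd_status(first_test: tuple[date, bool], second_test: tuple[date, bool]) -> str:
    """Classify an animal from two virus/antigen test results.

    Each test is a (sample_date, positive) pair. Mirrors the rule in the text:
    two positives at least three weeks apart indicate a persistently infected (PI)
    animal; a positive followed by a negative suggests a transient infection.
    """
    (d1, pos1), (d2, pos2) = first_test, second_test
    if not pos1:
        return "antigen-negative on first test; not a PI candidate"
    if (d2 - d1).days < RETEST_INTERVAL_DAYS:
        return "retest too early; repeat at least three weeks after the first sample"
    if pos2:
        return "persistently infected (PI)"
    return "transient (acute) infection; virus cleared"

# Example: an ear-notch positive in early March, still positive four weeks later.
print(classify_bvd_status((date(2023, 3, 1), True), (date(2023, 3, 29), True)))
```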
Bovine viral diarrhea
[ "Biology" ]
2,867
[ "Biological hypotheses", "Unaccepted virus taxa", "Controversial taxa" ]
6,982,923
https://en.wikipedia.org/wiki/Adaptive%20mutation
Adaptive mutation, also called directed mutation or directed mutagenesis is a controversial evolutionary theory. It posits that mutations, or genetic changes, are much less random and more purposeful than traditional evolution, implying that organisms can respond to environmental stresses by directing mutations to certain genes or areas of the genome. There have been a wide variety of experiments trying to support (or disprove) the idea of adaptive mutation, at least in microorganisms. Definition The most widely accepted theory of evolution states that organisms are modified by natural selection where changes caused by mutations improve their chance of reproductive success. Adaptive mutation states that rather than mutations and evolution being random, they are in response to specific stresses. In other words, the mutations that occur are more beneficial and specific to the given stress, instead of random and not a response to anything in particular. The term stress refers to any change in the environment, such as temperature, nutrients, population size, etc. Tests with microorganisms have found that for adaptive mutation, more of the mutations observed after a given stress were more effective at dealing with the stress than chance alone would suggest is possible. This theory of adaptive mutation was first brought to academic attention in the 1980s by John Cairns. Recent studies Adaptive mutation is a controversial claim leading to a series of experiments designed to test the idea. Three major experiments are the SOS response, responses to starvation in Escherichia coli, and testing for revertants of a tryptophan auxotroph in Saccharomyces cerevisiae (yeast). Lactose starvation The E. coli strain FC40 has a high rate of mutation, and so is useful for studies, such as for adaptive mutation. Due to a frameshift mutation, a change in the sequence that causes the DNA to code for something different, FC40 is unable to process lactose. When placed in a lactose-rich medium, it has been found that 20% of the cells mutated from Lac- (could not process lactose) to Lac+, meaning they could now utilize the lactose in their environment. The responses to stress are not in current DNA, but the change is made during DNA replication through recombination and the replication process itself, meaning that the adaptive mutation occurs in the current bacteria and will be inherited by the next generations because the mutation becomes part of the genetic code in the bacteria. This is particularly obvious in a study by Cairns, which demonstrated that even after moving E. coli back to a medium with minimal levels of lactose, Lac+ mutants continued to be produced as a response to the previous environment. This would not be possible if adaptive mutation was not at work because natural selection would not favor this mutation in the new environment. Although there are many genes involved in adaptive mutation, RecG, a protein, was found to have an effect on adaptive mutation. By itself, RecG was found to not necessarily lead to a mutational phenotype. However, it was found to inhibit the appearance of revertants (cells that appeared normally, as opposed to those with the mutations being studied) in wild type cells. On the other hand, RecG mutants were key to the expression of RecA-dependent mutations, which were a major portion of study in the SOS response experiments, such as the ability to utilize lactose. 
Adaptive mutation was re-proposed in 1988 by John Cairns who was studying Escherichia coli that lacked the ability to metabolize lactose. He grew these bacteria in media in which lactose was the only source of energy. In doing so, he found that the rate at which the bacteria evolved the ability to metabolize lactose was many orders of magnitude higher than would be expected if the mutations were truly random. This inspired him to propose that the mutations that had occurred had been directed at those genes involved in lactose utilization. Later support for this hypothesis came from Susan Rosenberg, then at the University of Alberta, who found that an enzyme involved in DNA recombinational repair, recBCD, was necessary for the directed mutagenesis observed by Cairns and colleagues in 1989. The directed mutagenesis hypothesis was challenged in 2002, by work showing that the phenomenon was due to general hypermutability due to selected gene amplification, followed by natural selection, and was thus a standard Darwinian process. Later research from 2007 however, concluded that amplification could not account for the adaptive mutation and that "mutants that appear during the first few days of lactose selection are true revertants that arise in a single step". SOS response This experiment is different from the others in one small way: this experiment is concerned with the pathways leading to an adaptive mutation while the others tested the changing environment microorganisms were exposed to. The SOS response in E. coli is a response to DNA damage that must be repaired. The normal cell cycle is put on hold and mutagenesis may begin. This means that mutations will occur to try to fix the damage. This hypermutation, or increased rate of change, response has to have some regulatory process, and some key molecules in this process are RecA, and LexA. These are proteins and act as stoplights for this and other processes. They also appear to be the main contributors to adaptive mutation in E. coli. Changes in presence of one or the other was shown to affect the SOS response, which in turn affected how the cells were able to process lactose, which should not be confused with the lactose starvation experiment. The key point to understand here is that LexA and RecA both were required for adaptive mutation to occur, and without the SOS response adaptive mutation would not be possible. Yeast von Borstel, in the 1970s, conducted experiments similar to the Lactose Starvation experiment with yeast, specifically Saccharomyces cerevisiae. He tested for tryptophan auxotroph revertants. A tryptophan auxotroph cannot make tryptophan for itself, but wild-type cells can and so a revertant will revert to the normal state of being able to produce tryptophan. He found that when yeast colonies were moved from a tryptophan-rich medium to a minimal one, revertants continued to appear for several days. The degree to which revertants were observed in yeast was not as high as with bacteria. Other scientists have conducted similar experiments, such as Hall who tested histidine revertants, or Steele and Jinks-Robertson who tested lysine. These experiments demonstrate how recombination and DNA replication are necessary for adaptive mutation. However, in lysine-tested cells, recombination continued to occur even without selection for it. Steele and Jinks-Robertson concluded that recombination occurred in all circumstances, adaptive or otherwise, while mutations were present only when they were beneficial and adaptive. 
Although the production of mutations during selection was not as vigorous as observed with bacteria, these studies are convincing. As mentioned above, a subsequent study adds even more weight to the results with lys2. Steele and Jinks-Robertson found that LYS prototrophs due to interchromosomal recombination events also continue to arise in nondividing cells, but in this case, the production of recombinants continued whether there was selection for them or not. Thus, mutation occurred in stationary phase only when it was adaptive, but recombination occurred whether it was adaptive or not. Delayed appearance of mutants has also been reported for Candida albicans. With long exposure to sublethal concentrations of heavy metals, colonies of resistant cells began to appear after 5–10 days and continued to appear for 1–2 weeks thereafter. These resistances could have resulted from gene amplification, although the phenotypes were stable during a short period of nonselective growth. However, revertants of two auxotrophies also appeared with similar kinetics. None of these events in Candida albicans have, as yet, been shown to be specific to the selection imposed. References Mutation Non-Darwinian evolution
Adaptive mutation
[ "Biology" ]
1,675
[ "Non-Darwinian evolution", "Biology theories" ]
6,983,586
https://en.wikipedia.org/wiki/Meitner%E2%80%93Hupfeld%20effect
The Meitner–Hupfeld effect, named after Lise Meitner and Hans-Hermann Hupfeld, is an anomalously large scattering of gamma rays by heavy elements. The effect was later explained by a broad theory of the structure of the atomic nucleus, from which the Standard Model evolved. The anomalous gamma-ray behavior was eventually ascribed to electron–positron pair production and annihilation. Although Meitner was recognized for her work, Hupfeld is usually overlooked, and little or no account of his life exists. See also Pair production Electron-positron annihilation References Standard Model History of physics
Meitner–Hupfeld effect
[ "Physics" ]
138
[ "Standard Model", "Particle physics stubs", "Particle physics" ]
6,984,314
https://en.wikipedia.org/wiki/Obturator%20ring
An obturator ring was a type of piston ring used in the early rotary engines of some World War I fighter aircraft for improved sealing in the presence of cylinder distortion. Purpose The cylinders of rotary aircraft engines (engines with the crankshaft fixed to the airframe and rotating cylinders) suffered from uneven cooling, as the side facing the direction of rotation received more cooling air, which led to thermal distortion. To keep weight down, the cylinders on rotary engines had very thin walls (1.5 mm) and some had no cylinder liners. On engine types without cylinder liners, obturator rings, made of bronze in the early Gnome engines, were fitted, as these were soft enough not to damage the cylinder walls and could flex to the shape of the cylinder. In operation, wear on the rings was considerable. Engines needed to be overhauled about every 20 hours. The reliability of Gnome engines license-built by The British Gnome and Le Rhone Engine Co. was improved, with an overhaul life of about 80 hours being achieved, mainly as a result of using a special tool to roll the 'L'-section obturator rings. Clerget rotary aircraft engines also used obturator rings, which were prone to overheating and seizure. Le Rhône and Bentley BR1/BR2 rotary engines used cylinder liners and were sealed using conventional piston rings rather than obturator rings. See also Piston ring References External links An 'L'-section obturator ring is shown in Patent US 1378109A - "Obturator ring". Pistons Aerospace engineering
Obturator ring
[ "Engineering" ]
312
[ "Aerospace engineering" ]
6,984,635
https://en.wikipedia.org/wiki/Seed%20predation
Seed predation, often referred to as granivory, is a type of plant-animal interaction in which granivores (seed predators) feed on the seeds of plants as a main or exclusive food source, in many cases leaving the seeds damaged and not viable. Granivores are found across many families of vertebrates (especially mammals and birds) as well as invertebrates (mainly insects); thus, seed predation occurs in virtually all terrestrial ecosystems. Seed predation is commonly divided into two distinctive temporal categories, pre-dispersal and post-dispersal predation, which affect the fitness of the parental plant and the dispersed offspring (the seed), respectively. Mitigating pre- and post-dispersal predation may involve different strategies. To counter seed predation, plants have evolved both physical defenses (e.g., shape and toughness of the seed coat) and chemical defenses (secondary compounds such as tannins and alkaloids). However, as plants have evolved seed defenses, seed predators have adapted to plant defenses (e.g., ability to detoxify chemical compounds). Thus, many interesting examples of coevolution arise from this dynamic relationship. Seeds and their defenses Plant seeds are important sources of nutrition for animals across most ecosystems. Seeds contain food storage organs (e.g., endosperm) that provide nutrients to the developing plant embryo (cotyledon). This makes seeds an attractive food source for animals because they are a highly concentrated and localized nutrient source in relation to other plant parts. Seeds of many plants have evolved a variety of defenses to deter predation. Seeds are often contained inside protective structures or fruit pulp that encapsulate seeds until they are ripe. Other physical defenses include spines, hairs, fibrous seed coats and hard endosperm. Seeds, especially in arid areas, may have a mucilaginous seed coat that can glue soil to seed hiding it from granivores. Some seeds have evolved strong anti-herbivore chemical compounds. In contrast to physical defenses, chemical seed defenses deter consumption using chemicals that are toxic or distasteful to granivores or that inhibit the digestibility of the seed. These chemicals include toxic non-protein amino acids, cyanogenic glycosides, protease and amylase inhibitors, and phytohemaglutinins. Plants may face trade-offs between allocation toward defenses and the size and number of seeds produced. Plants may reduce the severity of seed predation by making seeds spatially or temporally scarce to granivores. Seed dispersal away from the parent plant is hypothesized to reduce the severity of seed predation. Seed masting is an example of how plant populations are able to temporally regulate the severity of seed predation. Masting refers to a concerted abundance of seed production followed by a period of paucity. This strategy has the potential to regulate the size of the population of seed predators. Seed predation vs. seed dispersal Adaptations to defend seeds against predation can impact seeds' ability to germinate and disperse. Thus anti-predator adaptations often occur in a suite of adaptations for a particular seed life history. For example, chili plants selectively deter mammal seed predators and fungi using capsaicin, which does not deter bird seed dispersers because bird taste receptors do not bind with capsaicin. Chili seeds in turn have higher survival if they pass through a bird's stomach than if they fall to the ground. 
Pre- and post-dispersal Seed predation can occur both before and after seed dispersal. Pre-dispersal Pre-dispersal seed predation takes place when seeds are removed from the parent plant before dispersal, and it has been most often reported in invertebrates, birds, and in granivorous rodents that clip fruits directly from trees and herbaceous plants. Post-dispersal seed predation arises once seeds have been released from the parent plant. Birds, rodents, and ants are known to be among the most pervasive postdispersal seed predators. Furthermore, postdispersal seed predation can take place at two contrasting stages: predation on the "seed rain" and predation on the "seed bank". Whereas predation on the seed rain occurs when animals prey on released seeds usually flush with the ground surface, predation on the seed bank takes place after seeds have been incorporated deeply into the soil. Nevertheless, there are important vertebrate pre-dispersal predators, especially birds and small mammals. Post-dispersal Post-dispersal seed predation is extremely common in virtually all ecosystems. Given the heterogeneity in both resource type (seeds from different species), quality (seeds of different ages and/or different status of integrity or decomposition) and location (seeds are scattered and hidden in the environment), most post-dispersal predators have generalist habits. These predators belong to a diverse array of animals, such as ants, beetles, crabs, fish, rodents and birds. The assemblage of post-dispersal seed predators varies considerably among ecosystems. A dispersed seed is the first independent life stage of a plant, thus post-dispersal seed predation is the first potential mortality event and one of the first biotic interactions in a plant's life cycle. Differences Both pre- and post-dispersal seed predation are common. Pre-dispersal predators differ from post-dispersal predators in most often being specialists, adapted to clustered resources (on the plant). They use specific cues like plant chemistry (volatile compounds), color, and size to locate seeds, and their short life cycles often match the production of seeds by the host plant. Insect groups containing many pre-dispersal seed predators are Coleoptera, Hemiptera, Hymenoptera and Lepidoptera. Effects on plant demography The complex relationship between seed predation and plant demography is an important topic of plant-animal interactive studies. Plant population structure and size over time is closely associated with the effectiveness at which seed predators locate, consume, and disperse seeds. In many cases this relationship depends on the type of seed predator (specialist vs. generalist) or the particular habitat in which the interaction is taking place. The role of seed predation on plant demography may be either detrimental or in particular cases actually beneficial to plant populations. The Janzen-Connell model concerns how seed density and survival respond to distance from the parent tree and differential rates of seed predation. Seed density is hypothesized to decrease as distance from the parent tree increases. Where seeds are most abundant under the parent tree, seed predation is predicted to be at its highest. As distance from the parent tree increases, seed abundance and thus seed predation are predicted to decrease as seed survival increases. The degree to which seed predation influences plant populations may vary by whether a plant species is safe site limited or seed limited. 
If a population is safe site limited it is likely that seed predation will have little impact to the success of the population. In safe site limited populations increased seed abundance does not translate into increased seedling recruitment. However, if a population is seed limited, seed predation has a better chance of negatively affecting the plant population by decreasing seedling recruitment. Maron and Simms found both safe site limited and seed limited populations depending on the habitat in which the seed predation was taking place. In dune habitats seed predators (deer mice) were limiting seedling recruitment in the population, thus negatively affecting the population. However, in grassland habitat the seed predator had little effect on the plant population because it was safe site limited. In many cases seed predators support plant populations by dispersing seeds away from the parent plant, in effect supporting gene flow between populations. Other seed predators collect seeds and then store or cache them for later consumption. In the case that the seed predator is unable to locate the buried or hidden seed there is a chance that it will later germinate and grow, supporting the species dispersal. Generalist (vertebrate) seed predators may also aid the plant in other indirect ways, for instance by inducing top-down control on host-specific seed predators (termed "intra-guild predation"), and as such negating Janzen-Connell type effects and so benefiting the plant in competition with other plant species. See also Consumer-resource systems Egg predation Harvester ant Herbivory Seed dispersal References Further reading Herbivory Plant reproduction Animals by eating behaviors
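The Janzen-Connell argument sketched above lends itself to a simple numerical illustration: seed density falls with distance from the parent, predation pressure is assumed to fall with it, and recruitment (density times survival) ends up peaking at an intermediate distance. The sketch below is illustrative only; the exponential decay rates and starting values are arbitrary assumptions, not measurements from any study.

```python
import math

def seed_density(distance_m: float) -> float:
    """Seeds per square metre, assumed to decay exponentially away from the parent."""
    return 100.0 * math.exp(-distance_m / 5.0)

def predation_probability(distance_m: float) -> float:
    """Chance a seed is eaten, assumed highest under the parent where seeds are dense."""
    return 0.95 * math.exp(-distance_m / 10.0)

for d in (0, 5, 10, 20, 40):
    recruits = seed_density(d) * (1.0 - predation_probability(d))
    print(f"{d:>3} m: density {seed_density(d):6.1f}, "
          f"predation {predation_probability(d):.2f}, surviving seeds {recruits:6.1f}")

# Under these toy numbers the surviving-seed curve is low right at the trunk
# (dense seed rain, heavy predation), rises to a peak a short distance away, and
# then falls off as the seed rain itself thins out - the Janzen-Connell pattern.
```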
Seed predation
[ "Biology" ]
1,716
[ "Behavior", "Plant reproduction", "Plants", "Reproduction", "Animals by eating behaviors", "Eating behaviors", "Herbivory", "Ethology" ]
6,985,504
https://en.wikipedia.org/wiki/Triangular%20fibrocartilage
The triangular fibrocartilage complex (TFCC) is formed by the triangular fibrocartilage discus (TFC), the radioulnar ligaments (RULs) and the ulnocarpal ligaments (UCLs). Structure Triangular fibrocartilage disc The triangular fibrocartilage disc (TFC) is an articular discus that lies on the pole of the distal ulna. It has a triangular shape and a biconcave body; the periphery is thicker than its center. The central portion of the TFC is thin and consists of chondroid fibrocartilage; this type of tissue is often seen in structures that can bear compressive loads. This central area is often so thin that it is translucent and in some cases it is even absent. The peripheral portion of the TFC is well vascularized, while the central portion has no blood supply. This discus is attached by thick tissue to the base of the ulnar styloid and by thinner tissue to the edge of the radius just proximal to the radiocarpal articular surface. Radioulnar ligaments The radioulnar ligaments (RULs) are the principal stabilizers of the distal radioulnar joint (DRUJ). There are two RULs: the palmar and dorsal radioulnar ligaments. These ligaments arise from the distal radius medial border and insert on the ulna at two separate and distinct sites: the ulna styloid and the fovea (a groove that separates the ulnar styloid from the ulnar head). Each ligament consists of a superficial component and a deep component. The superficial components insert directly onto the ulna styloid. The deep components insert more anterior, into the fovea adjacent to the articular surface of the dome of the distal ulna. The ligaments are composed of longitudinally oriented lamellar collagen to resist tensile loads and have a rich vascular supply to allow healing. Ulnocarpal ligaments The ulnocarpal ligaments (UCLs) consist of the ulnolunate and the ulnotriquetral ligaments. They originate from the ulnar styloid and insert into the carpal bones of the wrist: the ulnolunate ligament inserts into the lunate bone and the ulnotriquetral ligament into the triquetrum bone. These ligaments prevent dorsal migration of the distal ulna. They are more taut during supination, because in supination ulnar styloid moves away from the carpal bones volar side. Function The primary functions of the TFCC: To cover the ulna head by extending the articular surface of the distal radius. Load transmission across the ulnocarpal joint and partially load absorbing Allows forearm rotation by giving a strong but flexible connection between the distal radius and ulna. It also supports the ulnar portion of the carpus. Load transmission The TFCC is important in load transmission across the ulnar aspect of the wrist. The TFC transmits and absorbs compressive forces. The ulnar variance influences the amount of load that is transmitted through the distal ulna. The load transmission is directly proportional to this ulnar variance. In neutral ulnar variance, approximately 20 percent of the load is transmitted. With negative ulnar variance, the load across the TFC is decreased. This occurs during supination, because the radius moves distally on the ulna and creates a negative ulnar variance. With positive ulnar variance it is reversed. The load that is transmitted across the TFC is then increased. This positive ulnar variance occurs during pronation. Rotation The TFCC is a major stabilizer of the DRUJ. To control the forearm rotation the DRUJ acts in concert with the proximal radioulnar joint. 
The connection between the distal radius and the distal ulna maintains the congruency of the DRUJ. This attachment is mainly created by the RULs of the TFCC. These ligaments support the joint through its arc of rotation. The role of the TFCC in supination and in pronation is a matter of dispute. Some authors (Schuind et al.) concluded that the dorsal fibers of the TFCC tighten in pronation, and the palmar fibers in supination. These conclusions are the opposite of those published by Af Ekenstam and Hagert. Both parties are in fact right, as the RULs consist of two ligaments, each made of two components: the superficial and the deep ligaments. During supination, the superficial palmar and the deep dorsal ligaments are tightened, preventing palmar translation of the ulna. In pronation, this is reversed: the superficial dorsal and the deep palmar ligaments are tightened and prevent dorsal translation of the ulna. Clinical significance The TFCC has a substantial risk of injury and degeneration because of its anatomic complexity and multiple functions. Application of an extension-pronation force to an axially loaded wrist, such as in a fall on an outstretched hand, causes most of the traumatic injuries of the TFCC. Dorsal rotation injury, such as when a drill binds and rotates the wrist instead of the bit, can also cause traumatic injuries. Injury may also occur from a distraction force applied to the volar forearm or wrist. Finally, tears of the TFCC are frequently found in patients with distal radius fractures. Perforations and defects in the TFCC are not all traumatic. There is an age-related correlation with lesions in the TFCC, but many of these defects are asymptomatic. These lesions commonly occur in patients with positive ulnar variance. Chronic and excessive loading through the ulnocarpal joint causes degenerative TFCC tears. These tears are a component of ulnar impaction syndrome. Even though natural degeneration of the ulnocarpal joint is very common, it is still important to recognize. In cadaveric examinations, 30% to 70% of the cases had TFCC perforations and chondromalacia of the ulnar head, lunate, and triquetrum. Cases with ulnar-negative variance had fewer degenerative changes. Palmer classification of TFCC lesions The Palmer classification is the most recognized scheme; it divides TFCC lesions into two categories: traumatic and degenerative. This classification provides an anatomic description of tears, but it does not guide treatment or indicate prognosis. Class 1 – Traumatic Class 1A. Central perforation Class 1B. Ulnar avulsion (with or without styloid fracture) Class 1C. Distal avulsion (from carpus) Class 1D. Radial avulsion (with or without sigmoid notch fracture) Class 2 – Degenerative (ulnar impaction syndrome) Class 2A. TFCC wear Class 2B. TFCC wear with lunate and/or ulnar head chondromalacia Class 2C. TFCC perforation with lunate and/or ulnar head chondromalacia Class 2D. TFCC perforation with lunate and/or ulnar head chondromalacia, and with lunotriquetral ligament perforation Class 2E. TFCC perforation with lunate and/or ulnar head chondromalacia, with lunotriquetral ligament perforation, and with ulnocarpal arthritis Symptoms Patients with a TFCC injury usually experience pain or discomfort located at the ulnar side of the wrist, often just above the ulnar styloid. However, there are also some patients who report diffuse pain throughout the entire wrist.
Rest can reduce pain and activity can make it worse, especially with rotating movements (supination and pronation) of the wrist or movements of the hand sideways in ulnar direction. Other symptoms patients with a TFCC injury frequently mention are: swelling, loss of grip strength, instability, and grinding or clicking sounds (crepitus) that can occur during activity of the wrist. Diagnosis Anamnesis Injuries to the TFCC may be preceded by a fall on a pronated outstretched arm; a rotational injury to the forearm; an axial load trauma to the wrist; or a distraction injury of the wrist in ulnar direction. However, not all patients can recall that a preceding trauma occurred. Physical examination Palpation: The best place to palpate the TFCC is between the extensor carpi ulnaris (ECU) and the flexor carpi ulnaris (FCU), distal to the ulnar styloid and proximal to the pisiform bone. Tenderness in this area may be consistent with a TFCC lesion. Piano key sign: Dorsal DRUJ instability can cause a protruding ulna head, which can be pressed down. When you release the pressure, it will spring back in position again, just like a piano key. DRUJ stress test: With this provocation maneuver, the wrist is held in pronated or supinated position, while the physician attempts to manipulate the distal ulna in dorsal and volar direction. Painful laxity indicates DRUJ instability and suggests RUL pathology. Ulnar grind test: The forearm is fixated and the wrist is held in dorsiflexion. The physician then applies axial load, while he rotates and deviates the wrist in ulnar direction. Pain and crepitations during this provocation maneuver suggest DRUJ instability or arthritis. Imaging X-ray: X-rays of the wrist are made in two directions: posterior-anterior (PA) and lateral. Radiographs are useful to diagnose or rule out possible bone fractures, a positive ulnar variance or osteoarthritis. The TFCC is not visible on an X-ray, regardless of its condition. MRI: is, together with the findings of a careful physical examination, a helpful diagnostic tool to assess the condition of the TFCC. Nevertheless, the incidence of false-positive and false-negative MRI results is high. Arthrography: a dye is injected into the wrist joint. If there is a TFCC lesion the dye will leak from one joint compartment to another. Wrist arthroscopy: is an invasive diagnostic tool, but it remains to this day the most accurate way to identify TFCC lesions. Note: Imaging techniques can only be relevant together with the clinical findings of a carefully performed physical examination. Other than a TFCC injury, there are many possible causes for ulnar-sided wrist pain. Differential diagnosis of TFCC injuries Tendinopathy of the ECU Ulnar styloid fracture Distal radius fracture DRUJ arthritis Pisiform bone fractures Hamate bone fractures Carpal instability Midcarpal instability Hypothenar hammer syndrome (ulnar artery thrombosis) Treatment The initial treatment for both traumatic and degenerative TFCC lesions, with a stable DRUJ, is conservative (nonsurgical) therapy. Patients may be advised to wear a temporary splint or cast to immobilize the wrist and forearm for four to six weeks. The immobilization allows scar tissue to develop which can help heal the TFCC. In addition, oral NSAIDs and corticosteroid joint injections can be prescribed for pain relief. Physiotherapy and occupational therapy can help patients recover after immobilization or surgery. 
Wrist support straps used in sports can also be used in mild cases to compress and minimize movement of the area. Indications for acute TFCC surgery are: a clearly unstable DRUJ, or the existence of additional unstable or displaced fractures. TFCC surgery is also indicated when conservative treatment proves insufficient after about 8–12 weeks. Fractures of the radius bone are often associated with TFCC damage. If the fracture is treated surgically, it is recommended to evaluate and, if necessary, repair the TFCC as well. Closed fractures (where the skin is still intact) of the radius bone are treated non-surgically with a cast; the immobilization can also help heal the TFCC. Surgical Arthroscopic debridement of TFC discus tissue The central part of the TFC has no blood supply and therefore has no healing capacity. When a tear occurs in this area of the TFC, it typically creates an unstable flap of tissue that is likely to catch on other joint surfaces. Removing the damaged tissue (debridement) is then indicated. Arthroscopic debridement as a treatment for degenerative TFC tears associated with positive ulnar variance unfortunately shows poor results. Arthroscopic repair of TFCC ligaments Suturing TFCC ligaments can sometimes be performed arthroscopically, but only if there is no serious damage to the ligaments or other surrounding structures. Even after a short period of time, torn ligaments tend to retract and therefore lose length. Retracted ligament ends are impossible to suture together again and a reconstruction may be necessary. Open surgical repair of the TFCC Open surgery is usually required for degenerative or more complex TFCC injuries, or if additional damage to the wrist or forearm has caused instability or displacement. It is a more invasive surgical technique compared to arthroscopic treatment, but the surgeon has better visibility and access to the TFCC. Options for open surgery Suturing of the RULs. This is, just like arthroscopic suturing of these ligaments, only possible when the damage is not too serious and if both ends of the ruptured ligament are not yet retracted. Anatomic reconstruction of the RULs using a tendon graft (e.g., the palmaris longus). The tendon graft is tunneled through drilled holes in the ulna and radius bones. This procedure is indicated for DRUJ instability caused by an irreparable TFCC. Capsular or extensor retinaculum plication. This surgical technique aims to improve DRUJ stability by shortening the joint capsule or the extensor retinaculum. It is mostly used for minor DRUJ instability and is less invasive compared to a complete RUL reconstruction. Shortening of the ulna. Patients with a positive ulnar variance are more susceptible to TFCC damage. Shortening the ulna may help relieve the excess pressure on the TFCC and prevent further degeneration. References External links  — "Triangular Fibrocartilage Complex Injuries" Anatomy
Triangular fibrocartilage
[ "Biology" ]
3,007
[ "Anatomy" ]
6,985,603
https://en.wikipedia.org/wiki/Hairy%20Head
The Hairy Head mansion (昴宿, pinyin: Mǎo Xiù) is one of the Twenty-eight mansions of the Chinese constellations. It is one of the western mansions of the White Tiger. This mansion corresponds to the asterism known in English as the Pleiades. Asterisms Chinese constellations
Hairy Head
[ "Astronomy" ]
59
[ "Chinese constellations", "Constellations" ]
6,986,613
https://en.wikipedia.org/wiki/International%20Astronautical%20Congress
The International Astronautical Congress (IAC) is an annual meeting of the actors in the discipline of space science. It is hosted by one of the national society members of the International Astronautical Federation (IAF), with the support of the International Academy of Astronautics (IAA) and the International Institute of Space Law (IISL). It consists of plenary sessions, lectures and meetings. The IAC is attended by the agency heads and senior executives of the world's space agencies. As the Second World War came to an end, the United States and the Soviet Union held different and competing political worldviews. As the Cold War began to take shape, communication between the two countries became less frequent. Both countries turned their focus to achieving military superiority over the other. Six years after the Iron Curtain fell, the IAF was formed by scientists from all over Europe in the field of space research in order to collaborate once more. During the years of the Space Race, the IAF was one of the few forums where members of both East and West Europe could meet during the annual IAC. Founding Organizations Argentina: Sociedad Argentina Interplanetaria (Argentine Interplanetary Society) Austria: Österreichische Gesellschaft für Weltraumforschung (Austrian Society for Space Research) France: Groupement Astronautique Français (French Astronautic Group) Germany: Gesellschaft für Weltraumforschung Stuttgart (Society for Space Research, Stuttgart), Gesellschaft für Weltraumforschung Hamburg (Society for Space Research Hamburg) Italy: Associazione Italiana Razzi (Italian Rocket Association) Spain: Asociación Española de Astronáutica (Spanish Astronautical Association) Sweden: Svenska Interplanetariska Sällskapet (Swedish Interplanetary Society) Switzerland: Schweizerische Astronautische Arbeitsgemeinschaft (Swiss Astronautical Association) United Kingdom: British Interplanetary Society United States: American Rocket Society, Detroit Rocket Society, Pacific Rocket Society, Reaction Research Society International Astronautical Federation Governance The International Astronautical Federation is a non-profit non-governmental organization created in 1951. Under French law, the IAF is defined as a federation of member organizations where a General Assembly is responsible for making decisions. IAF General Assembly The IAF General Assembly is in charge of governing the Federation. Composed of delegates from every member organization, the assembly is responsible for voting to approve all major decisions regarding the Federation's rules and regulations as well as the acceptance of new member organizations. The General Assembly meets during the International Astronautical Congress. IAF Bureau The IAF Bureau sets the agenda of the IAF General Assembly, including: review of new member candidates; supervision of IAF activities; and supervision of IAF accounts. It is made up of: The IAF President The Incoming IAF President The IAF Honorary Ambassador 12 IAF Vice-Presidents The IAF Executive Director The IAF General Counsel The IAF Incoming General Counsel The IAF Honorary Secretary The President of the International Academy of Astronautics (IAA) The President of the International Institute of Space Law (IISL) Special Advisor to the President IAF Secretariat This branch is in charge of running the administration of the Federation. Locations of past and future International Astronautical Congresses (IAC) International Astronautical Congresses are held in the late summer or fall months. 
In 2002 and 2012, the World Space Congress combined the IAC and COSPAR Scientific Assembly. The 2020 IAC was held virtually due to the global COVID pandemic. See also List of astronomical societies International Astronautical Federation International Institute of Space Law International Academy of Astronautics Space Generation Advisory Council References External links IAF IAC 2012 IAC 2013 IAC 2013 IAC 2014 IAC 2015 IAC 2017 Astronomy organizations Space advocacy organizations
International Astronautical Congress
[ "Astronomy" ]
778
[ "Space advocacy organizations", "Astronomy organizations" ]
6,986,787
https://en.wikipedia.org/wiki/Overspeed
Overspeed is a condition in which an engine is allowed or forced to turn beyond its design limit. The consequences of running an engine too fast vary by engine type and model and depend upon several factors, the most important of which are the duration of the overspeed and the speed attained. With some engines, a momentary overspeed can result in greatly reduced engine life or catastrophic failure. The speed of an engine is typically measured in revolutions per minute (rpm). Examples of overspeed In a propeller aircraft, an overspeed will occur if the propeller, usually connected directly to the engine, is forced to turn too fast by high-speed airflow while the aircraft is in a dive, moves to a flat blade pitch in cruising flight due to a governor failure or feathering failure, or becomes decoupled from the engine. In a jet aircraft, an overspeed results when the axial compressor exceeds its maximal operating rotational speed. This often leads to the mechanical failure of turbine blades, flameout and destruction of the engine. In a ground vehicle, an engine can be forced to turn too quickly by changing to an inappropriately low gear. Most unregulated engines will overspeed if power is applied with no or little load. In the event of diesel engine runaway (caused by excessive intake of combustibles), a diesel engine will overspeed if the condition is not quickly rectified. An example is a diesel engine powering equipment at an oil well head. Suppose the operators hit a pocket of natural gas. In that case, it will come to the surface and the engine will take in the flammable gas and rapidly increase speed until the engine is destroyed, unless the air intake is shut off, starving the engine of fuel and oxygen. Overspeed protection Sometimes a regulator or governor is fitted to make engine overspeed impossible or less likely. For example: Many steam engines use a centrifugal governor, which closes a throttle at high rpm to restrict steam flow as engine speed increases. In motor vehicles, automatic transmissions will change gear to prevent the engine from turning too quickly. Additionally, almost all modern vehicles are fitted with an electronic rev limiter device that will cut fuel supply or sparks to the engine to prevent overspeed. Some aircraft have constant-speed units that automatically change propeller pitch to keep the engine running at the optimal speed. Large diesel engines are sometimes fitted with a secondary protection device that actuates if the governor fails. This consists of a flap valve in the air intake. If the engine overspeeds, the airflow through the intake will rise to an abnormal level. This causes the flap valve to snap shut, starving the engine of air and shutting it down. Different overspeed occurrences and prevention Internal combustion engines An excerpt presented by the San Francisco Maritime National Park Association illustrates the types of overspeed systems with governor and engine control. Overspeed governors are either centrifugal or hydraulic. Centrifugal governors depend on the revolving force created by its own weight. Hydraulic governors use the centrifugal force but drive a medium to accomplish the same task. The overspeed governor is implemented on most marine diesel engines. The governor is a safety measure that acts when the engine is approaching overspeed and will trip the engine off if the regulator governor fails. It trips off the engine by cutting off fuel injection by having the centrifugal force act on levers linked to the governor collar. 
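The electronic rev limiter mentioned above, which cuts fuel or spark above a design speed and restores it once the speed falls again, is essentially a hysteresis controller. The sketch below is illustrative only: the threshold values and function name are assumptions, not taken from any real engine control unit.

```python
# Minimal sketch of an electronic rev-limiter with hysteresis.
# The cut/restore thresholds are illustrative assumptions, not values
# from any real engine control unit.

CUT_RPM = 6500      # hypothetical speed at which fuel/spark is cut
RESTORE_RPM = 6300  # hypothetical speed at which delivery resumes

def limiter_state(rpm: float, currently_cut: bool) -> bool:
    """Return True if fuel/spark should be cut at this engine speed.

    Two thresholds (hysteresis) prevent rapid on/off chatter when the
    engine hovers near the limit.
    """
    if currently_cut:
        return rpm > RESTORE_RPM   # stay cut until speed drops below the lower threshold
    return rpm >= CUT_RPM          # start cutting once the limit is reached

# Example: simulate a speed sweep
cut = False
for rpm in [6000, 6400, 6550, 6450, 6350, 6250]:
    cut = limiter_state(rpm, cut)
    print(rpm, "cut" if cut else "run")
```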
Turbines Overspeeds for power plant turbines can be catastrophic, resulting in failure due to the turbines' shafts and blades being off balance and potentially throwing their blades and other metal parts at very high speeds. Different safeguards exist, which include a mechanical and electrical protection system. Mechanical overspeed protection is in the form of sensors. The system relies on the centripetal force of the shaft, a spring, and a weight. At the designed point of overspeed, the balance point of the weight is shifted, causing the lever to release a valve that makes the trip oil header to lose pressure due to draining. This loss of oil affects the pressure, and moves a trip mechanism to then trip the system off. An electrical overspeed detection system involves a gear with teeth and probes. These probes detect how fast the teeth are moving, and if they are moving beyond the designated rpm, it relays that to the logic solver (overspeed detection). The logic solver trips the system by sending the overspeed to the trip relay, which is connected to a solenoid-operated valve. Mechanical vs. electrical governors on turbines In turbines and many other mechanical devices used for power generation, it is critical that the response times for overspeed prevention systems be as precise as possible. If the response is off by even a fraction of a second, it can lead to turbines and its driven load (i.e. compressor, generator, pump, etc..) suffering catastrophic damage, and can put people at risk. Mechanical Mechanical overspeed systems on turbines rely on an equilibrium between the centripetal force of the rotating shaft imparted on a weight attached to the end of a turbine blade. At the specified trip point, this weight makes physical contact with a lever that releases the trip oil header, which directly moves a trip bolt and/or a hydraulic circuit to activate stop valves to close. Because the contact with the lever occurs over a relatively limited angle, there is a maximum trip response time of 15 ms (i.e. 0.015 sec). The issue with these devices has less to do with response time as it does with response latency and variability in the trip point due to systems sticking. Some systems add two trip bolts for redundancy, which enables response latency to be reduced by half. Electrical Electrical overspeed systems on turbines rely on a multitude of probes that sense speed through measuring the passages of the teeth of a spur gear. Using a digital logic solver, the overspeed system determines the propeller shaft rpm given the ratio of the gear to the shaft. If the shaft rpm is too high, it outputs a trip command which de-energizes a trip relay. Overspeed response varies from system to system, so it is key to check the original equipment manufacturer's specification to set the Overspeed trip time accordingly. Typically, unless specified otherwise, the response time to change the output relay will be 40 ms. This time includes the time required for the probes to detect speed, compare it to an overspeed set-point, calculate results, and finally output the trip command. Overview of overspeed detection system When configuring, testing, and running any overspeed systems on turbines or diesel engines, one factor considered is timing. This is because the response to overspeed is usually too fast for people to notice. The responsibility of calibrating the correct overspeed response for a specific system falls on the manufacturer. 
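The electrical detection scheme just described (probes counting gear-tooth passages, a logic solver converting the count to shaft rpm and comparing it with a set-point) can be sketched in a few lines. The tooth count, gear ratio, set-point and sampling window below are illustrative assumptions rather than values from any particular turbine protection system.

```python
# Illustrative overspeed-detection logic: pulses from a speed probe on a
# toothed gear are converted to shaft rpm and compared with a trip set-point.
# All numeric values are assumptions chosen for the example.

GEAR_TEETH = 60            # hypothetical number of teeth on the spur gear
GEAR_TO_SHAFT_RATIO = 1.0  # gear revolutions per shaft revolution (assumed 1:1)
TRIP_SETPOINT_RPM = 3960   # e.g. 110 % of a nominal 3600 rpm (illustrative)

def shaft_rpm(pulse_count: int, window_s: float) -> float:
    """Estimate shaft speed from tooth pulses counted in a sampling window."""
    gear_rev_per_s = pulse_count / (GEAR_TEETH * window_s)
    return gear_rev_per_s / GEAR_TO_SHAFT_RATIO * 60.0

def overspeed_trip(pulse_count: int, window_s: float = 0.01) -> bool:
    """Return True (de-energize the trip relay) if the set-point is exceeded."""
    return shaft_rpm(pulse_count, window_s) > TRIP_SETPOINT_RPM

# Example: 40 pulses in a 10 ms window -> 4000 rpm -> trip
print(shaft_rpm(40, 0.01), overspeed_trip(40, 0.01))
```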
However, variability is always present, and it is important for the owner/operator to understand the system in the event of maintenance, replacement, or retrofitting of outdated or worn out parts. After overspeed has occurred, it is essential to check all machinery parts for stress. The first place to start for impulse turbines is the rotor. At the rotor, there are balance holes that equalise the pressure difference between turbines, and if warped, would require the replacement of the entire rotor. See also Airlines PNG Flight 1600 Overclocking References Engines Engine problems Mechanisms (engineering)
Overspeed
[ "Physics", "Technology", "Engineering" ]
1,513
[ "Machines", "Engines", "Engine problems", "Physical systems", "Mechanical engineering", "Mechanisms (engineering)" ]
6,987,160
https://en.wikipedia.org/wiki/Discrete%20optimized%20protein%20energy
DOPE, or Discrete Optimized Protein Energy, is a statistical potential used to assess homology models in protein structure prediction. DOPE is based on an improved reference state that corresponds to noninteracting atoms in a homogeneous sphere with the radius dependent on a sample native structure; it thus accounts for the finite and spherical shape of the native structures. It is implemented in the popular homology modeling program MODELLER and used to assess the energy of the protein models that MODELLER generates over many iterations as it builds homology models by the satisfaction of spatial restraints. The models with the lowest molpdf values can be chosen as the most probable structures and then further evaluated with the DOPE score. Like the current version of the MODELLER software, DOPE is implemented in Python and is run within the MODELLER environment. The DOPE method is generally used to assess the quality of a structure model as a whole. Alternatively, DOPE can also generate a residue-by-residue energy profile for the input model, making it possible for the user to spot problematic regions in the structure model. References External links MODELLER main site MODELLER manual Protein structure
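For readers using MODELLER, the whole-model assessment described above is typically run as a short Python script inside the MODELLER environment. The sketch below follows the pattern of MODELLER's documented model-evaluation scripts, but the exact class names (Environ/Selection appear as lowercase environ/selection in older releases), the library file paths and the file name 'model.pdb' are assumptions that should be checked against the manual for the installed version.

```python
# Sketch of a DOPE evaluation in MODELLER (names and paths are assumptions;
# consult the MODELLER manual for the exact API of your release).
from modeller import *                      # MODELLER environment
from modeller.scripts import complete_pdb   # adds any missing atoms to the model

env = Environ()
env.libs.topology.read(file='$(LIB)/top_heav.lib')   # heavy-atom topology library
env.libs.parameters.read(file='$(LIB)/par.lib')      # parameter library

mdl = complete_pdb(env, 'model.pdb')   # hypothetical homology-model file
atmsel = Selection(mdl)                # assess the whole model

dope_score = atmsel.assess_dope()      # lower (more negative) is better
print(dope_score)
```

The residue-by-residue energy profile mentioned above is reportedly requested through profile-output options of the same assessment call; the exact arguments should be taken from the MODELLER documentation rather than from this sketch.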
Discrete optimized protein energy
[ "Chemistry", "Biology" ]
237
[ "Protein stubs", "Bioinformatics stubs", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics", "Structural biology", "Protein structure" ]
6,987,871
https://en.wikipedia.org/wiki/Governance%2C%20risk%20management%2C%20and%20compliance
Governance, risk, and compliance (GRC) is the term covering an organization's approach across these three practices: governance, risk management, and compliance, amongst other disciplines. The first scholarly research on GRC was published in 2007 by OCEG's founder, Scott Mitchell, where GRC was formally defined as "the integrated collection of capabilities that enable an organization to reliably achieve objectives, address uncertainty and act with integrity", also known as Principled Performance®. The research referred to common "keep the company on track" activities conducted in departments such as internal audit, compliance, risk, legal, finance, IT, HR as well as the lines of business, executive suite and the board itself. Overview Governance, risk, and compliance (GRC) are three related facets that aim to assure an organization reliably achieves objectives, addresses uncertainty and acts with integrity. Governance is the combination of processes established and executed by the directors (or the board of directors) that are reflected in the organization's structure and how it is managed and led toward achieving goals. Risk management is predicting and managing risks that could hinder the organization from reliably achieving its objectives under uncertainty. Compliance refers to adhering to the mandated boundaries (laws and regulations) and voluntary boundaries (company's policies, procedures, etc.). GRC is a discipline that aims to synchronize information and activity across governance, risk management, and compliance in order to operate more efficiently, enable effective information sharing, more effectively report activities and avoid wasteful overlaps. Although interpreted differently in various organizations, GRC typically encompasses activities such as corporate governance, enterprise risk management (ERM) and corporate compliance with applicable laws and regulations. Organizations reach a size where coordinated control over GRC activities is required to operate effectively. Each of these three disciplines creates information of value to the other two, and all three impact the same technologies, people, processes and information. Substantial duplication of tasks evolves when governance, risk management and compliance are managed independently. Overlapping and duplicated GRC activities negatively impact both operational costs and GRC matrices. For example, each internal service might be audited and assessed by multiple groups on an annual basis, creating enormous cost and disconnected results. A disconnected GRC approach will also prevent an organization from providing real-time GRC executive reports. GRC likens such an approach to a badly planned transport system: every individual route operates, but the network lacks the qualities that allow the routes to work together effectively. If GRC activities are not integrated, and are instead tackled in a traditional "silo" approach, most organizations must sustain unmanageable numbers of GRC-related requirements due to changes in technology, increasing data storage, market globalization and increased regulation. GRC topics Basic concepts Governance describes the overall management approach through which senior executives direct and control the entire organization, using a combination of management information and hierarchical management control structures. 
Governance activities ensure that critical management information reaching the executive team is sufficiently complete, accurate and timely to enable appropriate management decision making, and provide the control mechanisms to ensure that strategies, directions and instructions from management are carried out systematically and effectively. Risk management is the set of processes through which management identifies, analyzes, and, where necessary, responds appropriately to risks that might adversely affect realization of the organization's business objectives. The response to risks typically depends on their perceived gravity, and involves controlling, avoiding, accepting or transferring them to a third party, whereas organizations routinely manage a wide range of risks (e.g. technological risks, commercial/financial risks, information security risks etc.). Compliance means conforming with stated requirements. At an organizational level, it is achieved through management processes which identify the applicable requirements (defined for example in laws, regulations, contracts, strategies and policies), assess the state of compliance, assess the risks and potential costs of non-compliance against the projected expenses to achieve compliance, and hence prioritize, fund and initiate any corrective actions deemed necessary. Compliance administration refers to the administrative exercise of keeping all the compliance documents up to date, maintaining the currency of the risk controls and producing the compliance reports. Obligational awareness refers to the ability of the organization to make itself aware of all of its mandatory and voluntary obligations, namely relevant laws, regulatory requirements, industry codes and organizational standards, as well as standards of good governance, generally accepted best practices, ethics and community expectations. These obligations may be financial, strategic or operational where operational includes such diverse areas as property safety, product safety, food safety, workplace health and safety, asset maintenance, etc. GRC market segmentation A GRC program can be instituted to focus on any individual area within the enterprise, or a fully integrated GRC is able to work across all areas of the enterprise, using a single framework. A fully integrated GRC uses a single core set of control material, mapped to all of the primary governance factors being monitored. The use of a single framework also has the benefit of reducing the possibility of duplicated remedial actions. When reviewed as individual GRC areas, the most common individual headings are considered to be Financial GRC, Operational GRC, WHS GRC, IT GRC, and Legal GRC. Financial GRC relates to the activities that are intended to ensure the correct operation of all financial processes, as well as compliance with any finance-related mandates. Operational GRC relates to all operational activities such as property safety, product safety, food safety, workplace health and safety, IT compliance asset maintenance, etc. WHS GRC, a subset of Operational GRC, relates to all workplace health and safety activities IT GRC, a subset of Operational GRC, relates to the activities intended to ensure that the IT (Information Technology) organization supports the current and future needs of the business, and complies with all IT-related mandates. Legal GRC focuses on tying together all three components via an organization's legal department and chief compliance officer. 
This however can be misleading as ISO 37301 refers to mandatory and voluntary obligations and a focus on legal GRC can introduce bias. The AICD (Australian Institute of Company Directors) however splits risk into three super groups Financial Risk Operational Risk Strategic Risk Analysts disagree on how these aspects of GRC are defined as market categories. Gartner has stated that the broad GRC market includes the following areas: Finance and audit GRC IT GRC management Enterprise risk management. They further divide the IT GRC management market into these key capabilities. Controls and policy library Policy distribution and response IT Controls self-assessment and measurement IT Asset repository Automated general computer control (GCC) collection Remediation and exception management Reporting Advanced IT risk evaluation and compliance dashboards GRC product vendors The distinctions between the sub-segments of the broad GRC market are often not clear. With a large number of vendors entering this market recently, determining the best product for a given business problem can be challenging. Given that the analysts do not fully agree on the market segmentation, vendor positioning can increase the confusion. Owing to the dynamic nature of this market, any vendor analysis is often out of date relatively soon after its publication. Broadly, the vendor market can be considered to exist in three segments: Integrated GRC solutions (multi-governance interest, enterprise wide) Domain specific GRC solutions (single governance interest, enterprise wide) Point solutions to GRC (relate to enterprise wide governance or enterprise wide risk or enterprise wide compliance but not in combination.) Integrated GRC solutions attempt to unify the management of these areas, rather than treat them as separate entities. An integrated solution is able to administer one central library of compliance controls, but manage, monitor and present them against every governance factor. For example, in a domain specific approach, three or more findings could be generated against a single broken activity. The integrated solution recognizes this as one break relating to the mapped governance factors. Domain specific GRC vendors understand the cyclical connection between governance, risk and compliance within a particular area of governance. For example, within financial processing — that a risk will either relate to the absence of a control (need to update governance) and/or the lack of adherence to (or poor quality of) an existing control. An initial goal of splitting out GRC into a separate market has left some vendors confused about the lack of movement. It is thought that a lack of deep education within a domain on the audit side, coupled with a mistrust of audit in general causes a rift in a corporate environment. However, there are vendors in the marketplace that, while remaining domain-specific, have begun marketing their product to end users and departments that, while either tangential or overlapping, have expanded to include the internal corporate internal audit (CIA) and external audit teams (tier 1 big four AND tier two and below), information security and operations/production as the target audience. This approach provides a more 'open book' approach into the process. If the production team will be audited by CIA using an application that production also has access to, is thought to reduce risk more quickly as the end goal is not to be 'compliant' but to be 'secure,' or as secure as possible. 
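The contrast drawn above between integrated and domain-specific approaches is, at its core, a data-modelling choice: in an integrated solution each control is stored once and mapped to every governance factor it serves, so one failed control yields one finding rather than several. The sketch below only illustrates that idea; the control identifiers, descriptions and governance factors are invented for the example.

```python
# Illustrative only: one shared control library mapped to multiple
# governance factors, so a failed control is reported once.
controls = {
    "CTRL-001": {
        "description": "Quarterly review of user access rights",   # hypothetical control
        "governance_factors": ["Financial GRC", "IT GRC", "Legal GRC"],
    },
    "CTRL-002": {
        "description": "Annual workplace safety inspection",        # hypothetical control
        "governance_factors": ["Operational GRC", "WHS GRC"],
    },
}

def findings(failed_control_ids):
    """Return one finding per failed control, listing every affected factor."""
    return [
        {"control": cid, "affects": controls[cid]["governance_factors"]}
        for cid in failed_control_ids
        if cid in controls
    ]

print(findings(["CTRL-001"]))
# -> a single finding covering Financial, IT and Legal GRC, rather than three
#    separate findings raised by three separate domain-specific tools.
```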
A range of automation-based GRC tools is available on the market and can reduce this workload. Point solutions to GRC are marked by their focus on addressing only one of its areas. In some cases of limited requirements, these solutions can serve a viable purpose. However, because they tend to have been designed to solve domain specific problems in great depth, they generally do not take a unified approach and are not tolerant of integrated governance requirements. Information systems will address these matters better if the requirements for GRC management are incorporated at the design stage, as part of a coherent framework. GRC data warehousing and business intelligence GRC vendors with an integrated data framework are now able to offer custom-built GRC data warehouse and business intelligence solutions. This allows high-value data from any number of existing GRC applications to be collated and analysed. The aggregation of GRC data using this approach adds significant benefit in the early identification of risk and business process (and business control) improvement. Further benefits of this approach include: (i) it allows existing, specialist and high-value applications to continue without impact; (ii) organizations can manage an easier transition into an integrated GRC approach because the initial change is only adding to the reporting layer; and (iii) it provides a real-time ability to compare and contrast data value across systems that previously had no common data scheme. GRC research Each of the core disciplines – Governance, Risk Management and Compliance – consists of the four basic components: strategy, processes, technology and people. The organisation's risk appetite, its internal policies and external regulations constitute the rules of GRC. The disciplines, their components and rules are now to be merged in an integrated, holistic and organisation-wide (the three main characteristics of GRC) manner – aligned with the (business) operations that are managed and supported through GRC. In applying this approach, organisations seek to achieve the objectives of ethically correct behaviour, and improved efficiency and effectiveness of any of the elements involved. See also Conformity assessment Information governance ISO 37301:2021 Compliance Management Systems (Previously ISO 19600) ISO 31000:2018 Risk Management ISO 41001:2018 Facility management — Management systems Legal governance, risk management, and compliance Records management Regulatory compliance References Business software Enterprise modelling
Governance, risk management, and compliance
[ "Engineering" ]
2,329
[ "Systems engineering", "Enterprise modelling" ]
6,988,010
https://en.wikipedia.org/wiki/Institute%20of%20Marine%20Engineering%2C%20Science%20and%20Technology
The Institute of Marine Engineering, Science and Technology (IMarEST) is the international membership body and learned society for marine professionals operating in the spheres of marine engineering, science, or technology. It has registered charity status in the UK. It has a worldwide membership of over 12,000 individuals based in over 128 countries. The institute is a member of the UK Science Council and a licensed body of the Engineering Council UK. Overview The Institute of Marine Engineering, Science and Technology was the international membership body and learned society for professionals operating in the spheres of marine engineering, science, or technology. The Institute envisions "a world where marine resources and activities are sustained, managed and developed for the benefit of humanity." The mission of the institute is described as "to work within the global marine community to promote the scientific development of marine engineering, science and technology, providing opportunities for the exchange of ideas and practices and upholding the status, standards and knowledge of marine professionals worldwide." IMarEST is also a publisher of books, periodicals, journals and papers related to marine engineering, science and technology, and organises meetings, events and conferences related to these themes. The institute is also the home of the Guild of Benevolence of the IMarEST, which continues the work of the fund founded for the families of the engineers of the Titanic, and which today provides help and funds for those seafarers and others who find themselves in hard times. History The Institute of Marine Engineers had its headquarters at 88 Minories in the City of London. It changed its name to the IMarEST in 1999. It has since moved to a location in Westminster. Since January 2024, its Chief Executive has been Chris Goldsworthy. Secretary/Chief Executive James Adamson, Secretary, 1889–1931 Bernard Charles Curling, Secretary, 1930–1951 J. Stuart Robinson, Secretary, 1951–1987 Jolyon Slogget, Secretary, 1986–1999 Keith Read, Secretary, 1999–2009 Marcus Jones, Secretary, 2009–2011 David Loosley, CEO, 2011–2020 Gwynne Lewis, CEO, 2020–2024 Chris Goldsworthy, CEO, 2024 - IMarEST topics International standing The IMarEST has special consultative status with the Economic and Social Council of the United Nations (ECOSOC) and is a nominated and licensed body of the Engineering Council (UK), a member of the Science Council and has links with many other maritime organisations worldwide. The IMarEST's international dimension is reinforced by the activities of its divisions and branches located across the globe: European Division – 25 branches ANZSPAC (Australia, New Zealand & South Pacific) Division – 9 branches Middle East Division – 5 branches South East Asia Division – 3 branches Americas Division – 3 branches North East Asia Division – 2 branches These branches provide a local focus to activities, networks, conferences, meetings and events, and for developing and maintaining links and partnerships with people and organisations in key regions in the marine world. Members The IMarEST has different categories of membership for those who are seeking professional recognition (Corporate Membership), for those who are currently studying or just starting out in their careers or those who simply have a general interest in the IMarEST, its work, its members, events, publications or facilities (non-Corporate Membership). 
IMarEST members include those working in: Commercial Shipping Fishing, Aquaculture & Biotechnology Ship design, construction, maintenance & decommission Marine Finance, Insurance & Risk Ports & Harbours Marine Law, Governance & Regulation Defence and Naval Engineering Marine Renewables Marine Engineering Systems & Equipment Marine Safety & Security Power & Propulsion Oceanography, Climatology & Marine Meteorology Coast & Ocean Mapping & Hydrography Natural Hazards Navigation & Communication Offshore Oil & Gas Marine Environment & Pollution Underwater Technology & Operations Coastal & Shelf Seas Marine Leisure Marine Surveyors ...plus additional marine science, engineering and technology disciplines and applications. Corporate membership categories IMarEST have defined three types of membership categories: Fellow : Fellows are those who qualify for the category of Member and have demonstrated to the satisfaction of Council a level of knowledge and understanding, competence and commitment involving superior responsibility for the conceptual design, management or the execution of important work in a marine related profession, and have given a commitment to abide by the institute's Code of Professional Conduct. Member : Members are those who qualify for the category of Associate Member and have demonstrated to the satisfaction of Council that they have achieved a position of professional standing having normally been professionally engaged in the marine sector for a period of five years that includes significant responsibility and have given a commitment to abide by the institute's Code of Professional Conduct Associate Member : Associate Members those demonstrating to the satisfaction of Council that they have achieved a position as a technician, or are professionally engaged in Initial Professional Development or occupy an occupational role in the marine sector, and have given a commitment to abide by the institute's Code of Professional Conduct And two types of non-corporate membership categories: Affiliate : Affiliates may either be those with an interest in, or who may contribute to, the activities of the Institute or; persons who, in the opinion of Council, can contribute to, or wish to have access to, the technical services of the institute, being resident in a recognised overseas territory and also members of a professional society with which the institute has a reciprocal arrangement. Student : Student members are those enrolled on a programme of further or higher education accredited or recognised by the IMarEST. Registration In addition to Membership, the IMarEST is licensed to provide a range of professional registers covering the fields of engineering, science and technology. In addition, the IMarEST's Royal Charter empowers the institute to offer registers designed to meet the specific needs of the marine profession. 
Corporate members can become registered (chartered) as: Engineers Chartered Engineer Chartered Marine Engineer Incorporated Engineer Incorporated Marine Engineer Engineering Technician Marine Engineering Technician Scientists Chartered Scientist Chartered Marine Scientist Registered Marine Scientist Marine Technician Technologists Chartered Marine Technologist Registered Marine Technologist Marine Technician Magazines The IMarEST used to publish multiple professional magazines for the science, engineering and technology community, but in October 2014 amalgamated content from its five established and sector specific magazines (MER, Shipping World and Shipbuilder, Maritime IT & Electronics, Offshore Technology and Marine Scientist) in to a single, generic publication. Marine Professional, published by Think Publishing. describes itself as "looking at the trends emerging within the marine sector with a view to enhance the reader's understanding of the complex technical intersections between the maritime, offshore and science agendas." It refers to itself as "the voice of marine…" It is published on a monthly basis and is distributed in print and online. Technical and scientific journals In addition to the professional magazines outlined above, the IMarEST also publishes a number of subscription only, academic, peer-reviewed journals which present international research papers describing the latest discoveries, developments and advances in the marine sector. Both the Journal of Marine Engineering & Technology and the Journal of Operational Oceanography are peer-reviewed and are included in the Science Citation Index Expanded. The Journal of Marine Engineering & Technology (JMET)Published three times a year in print and online, The Journal of Marine Engineering and Technology contains papers of a specialist academic nature covering research, theory and scientific studies concerned with all aspects of marine engineering and technology. Editors: Dr J Wang (Liverpool John Moores University) and CDR(E) Rinze Geertsma, (Royal Netherlands Naval Academy) Journal of Operational Oceanography (JOO)Published twice a year in print and online, The Journal of Operational Oceanography disseminates and reports on scientific and applied research advances associated with all aspects of operational oceanography. The journal incorporates papers that examine the role of oceanography in contributing to all marine disciplines, address the needs of one or more of a wide range of end user communities and address the requirements of global observing systems. Editor: Prof Ralph Rayner, CMarSci, FIMarEST, London School of Economics (LSE) Papers published in the JMET and the JOO are eligible for the IMarEST Denny Medal, a special annual prize awarded to the authors of the best paper in each Journal. Books IMarEST Publications produces books for marine students, engineers and technologists with Witherby Publishing Group. 
The following is a selection of some of the book titles published by IMarEST: Marine medium speed diesel engines Marine low speed diesel engines Fire safety at sea Safe operation of marine power plant Operation & maintenance of machinery in motorships A practical guide to marine fuel oil handling Exhaust emissions from combustion machinery Controllable pitch propellers Design of propulsion and electric power generation systems Design and operation of marine air compressors Design for safety of marine and offshore systems Events and conferences Conferences A number of technical and scientific conferences are organised and run by the IMarEST annually. Producing their own conference proceedings, they offer an opportunity to learn of the latest marine research. Examples include: Engine as a Weapon (EAAW) International Naval Engineering Conference (INEC) International Ship Controls System Symposium (iSCSS) Oceans of Knowledge (OOK) The IMarEST develops a programme of evening lectures each year covering general and specific technical and scientific topics. Recordings of these lectures and any associated slides will be available for members to access online. Examples of lectures include: The IMarEST Stanley Gray Lectures – These lectures, held three times a year, are given by members of the profession. The lectures cover all three aspects of the institute's remit, marine engineering, marine science and marine technology. The IMarEST Lord Kelvin Lectures – One of the IMarEST's first presidents, this lecture series was introduced to commemorate the 100th anniversary of Lord Kelvin's death. Lectures are given by members of the maritime community taking Lord Kelvin's foresight and applying it to the development of future technologies Branch events In addition, branches also have their own technical and social events which are advertised through the IMarEST website and publications. References External links Institute of Marine Engineering, Science and Technology 1889 establishments in the United Kingdom ECUK Licensed Members Institution of Mechanical Engineers Learned societies of the United Kingdom Marine engineering organizations Maritime history of the United Kingdom Organisations based in the City of Westminster Scientific organizations established in 1889
Institute of Marine Engineering, Science and Technology
[ "Engineering" ]
2,077
[ "Institution of Mechanical Engineers", "Marine engineering organizations", "Mechanical engineering organizations", "Marine engineering" ]
6,988,041
https://en.wikipedia.org/wiki/Marine%20current%20power
Marine currents can carry large amounts of water, largely driven by the tides, which are a consequence of the gravitational effects of the planetary motion of the Earth, the Moon and the Sun. Augmented flow velocities can be found where the underwater topography in straits between islands and the mainland or in shallows around headlands plays a major role in enhancing the flow velocities, resulting in appreciable kinetic energy. The Sun acts as the primary driving force, causing winds and temperature differences. Because there are only small fluctuations in current speed and stream location with minimal changes in direction, ocean currents may be suitable locations for deploying energy extraction devices such as turbines. Other effects such as regional differences in temperature and salinity and the Coriolis effect due to the rotation of the earth are also major influences. The kinetic energy of marine currents can be converted in much the same way that a wind turbine extracts energy from the wind, using various types of open-flow rotors. Energy potential The total worldwide power in ocean currents has been estimated to be about 5,000 GW, with power densities of up to 15 kW/m2. The relatively constant extractable energy density near the surface of the Florida Straits Current is about 1 kW/m2 of flow area. It has been estimated that capturing just 1/1,000th of the available energy from the Gulf Stream, which has 21,000 times more energy than Niagara Falls in a flow of water that is 50 times the total flow of all the world's freshwater rivers, would supply Florida with 35% of its electrical needs. The image to the right illustrates the high density of flow along the coast, note the high velocity white northward flow, perfect for extraction of ocean current energy. Countries that are interested in and pursuing the application of ocean current energy technologies include the European Union, Japan, the United States, and China. The potential of electric power generation from marine tidal currents is enormous. There are several factors that make electricity generation from marine currents very appealing when compared to other renewables: The high load factors resulting from the fluid properties. The predictability of the resource, so that, unlike most of other renewables, the future availability of energy can be known and planned for. The potentially large resource that can be exploited with little environmental impact, thereby offering one of the least damaging methods for large-scale electricity generation. The feasibility of marine-current power installations to provide also base grid power, especially if two or more separate arrays with offset peak-flow periods are interconnected. Technologies for marine-current-power generation There are several types of open-flow devices that can be used in marine-current-power applications; many of them are modern descendants of the water wheel or similar. However, the more technically sophisticated designs, derived from wind-power rotors, are the most likely to achieve enough cost-effectiveness and reliability to be practical in a massive marine-current-power future scenario. Even though there is no generally accepted term for these open-flow hydro turbines, some sources refer to them as water-current turbines. There are three main types of water current turbines that might be considered: axial-flow horizontal-axis propellers (with both variable-pitch or fixed-pitch), underwater kites and cross-flow Darrieus rotors. 
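Whatever the rotor type, the power available to it follows the same cube law as for a wind turbine: per square metre of flow area the kinetic power is P/A = 0.5 ρ v³, of which a rotor captures only a fraction Cp. The sketch below evaluates this relation for seawater as a rough consistency check; the density, speeds and power coefficient are illustrative assumptions, chosen only to show that flows of roughly 1–2 m/s correspond to power densities of the order of the figures quoted above.

```python
# Kinetic power density of a marine current (same relation as for wind):
#   P/A          = 0.5 * rho * v**3     power available in the flow
#   P_captured/A = Cp * P/A             fraction extracted by a rotor
# rho, v and Cp below are illustrative assumptions.

RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density

def power_density(v_mps: float) -> float:
    """Available kinetic power per unit flow area, in W/m^2."""
    return 0.5 * RHO_SEAWATER * v_mps ** 3

def captured_power(v_mps: float, area_m2: float, cp: float = 0.4) -> float:
    """Power captured by a rotor of swept area `area_m2` at power coefficient cp."""
    return cp * power_density(v_mps) * area_m2

for v in (1.0, 1.25, 2.0, 3.0):
    print(f"{v:4.2f} m/s -> {power_density(v):7.0f} W/m^2 available")
# A flow of about 1.25 m/s gives roughly 1 kW/m^2, the order of magnitude
# cited above for the near-surface Florida Straits Current, and 3 m/s gives
# close to 14 kW/m^2, within the quoted upper range.
```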
The rotor types may be combined with any of the three main methods for supporting water-current turbines: floating moored systems, sea-bed mounted systems, and intermediate systems. Sea-bed-mounted monopile structures constitute the first-generation marine current power systems. They have the advantage of using existing (and reliable) engineering know-how, but they are limited to relatively shallow waters (about depth). History and application The possible use of marine currents as an energy resource began to draw attention in the mid-1970s after the first oil crisis. In 1974 several conceptual designs were presented at the MacArthur Workshop on Energy, and in 1976 the British General Electric Co. undertook a partially government-funded study which concluded that marine current power deserved more detailed research. Soon after, the ITD-Group in UK implemented a research program involving a year of performance testing of a 3-m hydroDarrieus rotor deployed at Juba on the White Nile. The 1980s saw a number of small research projects to evaluate marine current power systems. The main countries where studies were carried out were the UK, Canada, and Japan. In 1992–1993 the Tidal Stream Energy Review identified specific sites in UK waters with suitable current speed to generate up to 58 TWh/year. It confirmed a total marine current power resource capable theoretically of meeting some 19% of the UK electricity demand. In 1994–1995 the EU-JOULE CENEX project identified over 100 European sites ranging from 2 to 200 km2of sea-bed area, many with power densities above 10 MW/km2. Both the UK Government and the EU have committed themselves to internationally negotiated agreements designed to combat global warming. In order to comply with such agreements, an increase in large-scale electricity generation from renewable resources will be required. Marine currents have the potential to supply a substantial share of future EU electricity needs. The study of 106 possible sites for tidal turbines in the EU showed a total potential for power generation of about 50 TWh/year. If this resource is to be successfully utilized, the technology required could form the basis of a major new industry to produce clean power for the 21st century. Contemporary applications of these technologies can be found here: List of tidal power stations. Since the effects of tides on ocean currents are so large, and their flow patterns are quite reliable, many ocean current energy extraction plants are placed in areas of high tidal flow rates. Research on marine current power is conducted at, among others, Uppsala University in Sweden, where a test unit with a straight-bladed Darrieus type turbine has been constructed and placed in the Dal river in Sweden. Environmental effects Ocean currents are instrumental in determining the climate in many regions around the world. While little is known about the effects of removing ocean current energy, the impacts of removing current energy on the farfield environment may be a significant environmental concern. The typical turbine issues with blade strike, entanglement of marine organisms, and acoustic effects still exists; however, these may be magnified due to the presence of more diverse populations of marine organisms using ocean currents for migration purposes. Locations can be further offshore and therefore require longer power cables that could affect the marine environment with electromagnetic output. 
The Tethys database provides access to scientific literature and general information on the potential environmental effects of ocean current energy. See also References External links Portal and Repository for Information on Marine Renewable Energy A network of databases providing broad access to marine energy information. Marine Energy Basics: Current Energy Basic information about current energy. Marine Energy Projects Database A database that provides up-to-date information on marine energy deployments in the U.S. and around the world. Tethys Database A database of information on potential environmental effects of marine energy and offshore wind energy development. Tethys Engineering Database A database of information on technical design and engineering of marine energy devices. Marine and Hydrokinetic Data Repository A database for all data collected by marine energy research and development projects funded by the U.S. Department of Energy. Marine energy Ocean currents
Marine current power
[ "Chemistry" ]
1,482
[ "Ocean currents", "Fluid dynamics" ]
6,988,073
https://en.wikipedia.org/wiki/Nanovid%20microscopy
Nanovid microscopy, from "nanometer video-enhanced microscopy", is a microscopic technique aimed at visualizing colloidal gold particles of 20–40 nm diameter (nanogold, immunogold) as dynamic markers at the light-microscopic level. The nanogold particles as such are smaller than the diffraction limit of light, but can be visualized by using video-enhanced differential interference contrast (VEDIC). The technique is based on the use of contrast enhancement by video techniques and digital image processing. Nanovid microscopy, by combining small colloidal gold probes with video-enhanced quantitative microscopy, allows studying the intracellular dynamics of specific proteins in living cells. See also Microscopy Single-particle tracking Differential interference contrast microscopy Microtubule References De Mey, J., Moeremans, M., Geuens, G., Nuydens, R., De Brabander, M., High resolution light and electron microscopic localization of tubulin with the IGS (immuno-gold staining) method, Cell Biol. Int. Rep. 5, 889-899 (1981). De Brabander M, Nuydens R, Geuens G, Moeremans M, De Mey J., The use of submicroscopic gold particles combined with video contrast enhancement as a simple molecular probe for the living cell, Cell Motil Cytoskeleton. 1986;6(2):105-13. de Brabander, M. Nuydens, R. Ishihara, A. Holifield, B. Jacobson, K. and Geerts, H., Lateral diffusion and retrograde movements of individual cell surface components on single motile cells observed with Nanovid microscopy, J. Cell Biol., 112, 111-124 (1991). Degelos SD, Wilson MP, Chandler JE., Nanovid microscopy for assessing sperm membrane changes induced by in vitro capacitating and acrosomal reacting procedures, J Androl. 1994 Sep-Oct;15(5):462-7. Geerts H, De Brabander M, Nuydens R, Geuens S, Moeremans M, De Mey J, Hollenbeck P., Nanovid tracking: a new automatic method for the study of mobility in living cells based on colloidal gold and video microscopy, Biophys J. 1987 Nov;52(5):775-82. Geerts H, de Brabander M, Nuydens R., Nanovid microscopy, Nature. 1991 Jun 27;351(6329):765-6. Kusumi A, Sako Y, Yamamoto M., Confined lateral diffusion of membrane receptors as studied by single particle tracking (nanovid microscopy). Effects of calcium-induced differentiation in cultured epithelial cells, Biophys J. 1993 Nov;65(5):2021-40. External links Quantitative Microscopy Cell imaging Microscopy Laboratory techniques
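Nanovid tracking is a form of single-particle tracking: marker positions are followed frame by frame and their mean squared displacement (MSD) is analysed, with MSD = 4Dt for free two-dimensional lateral diffusion and a saturating MSD for confined diffusion. The sketch below shows that generic analysis on a simulated 2D trajectory; it is not code from any nanovid system, and the frame interval and diffusion coefficient are assumed values chosen for the example.

```python
# Generic single-particle-tracking analysis of a 2D trajectory, of the kind
# used with nanovid (colloidal gold) markers. For free lateral diffusion in
# two dimensions, MSD(t) = 4*D*t. The frame interval below is an assumption.
import numpy as np

def msd(xy: np.ndarray) -> np.ndarray:
    """Mean squared displacement for each time lag, from an (N, 2) position array."""
    n = len(xy)
    return np.array([
        np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
        for lag in range(1, n)
    ])

def diffusion_coefficient(xy: np.ndarray, dt: float, n_lags: int = 4) -> float:
    """Estimate D from the initial slope of the MSD curve (MSD = 4*D*t in 2D)."""
    lags = np.arange(1, n_lags + 1) * dt
    slope = np.polyfit(lags, msd(xy)[:n_lags], 1)[0]
    return slope / 4.0

# Example: simulate a freely diffusing particle and recover D
rng = np.random.default_rng(0)
dt, D_true = 0.033, 0.05                      # s per video frame (assumed), um^2/s
steps = rng.normal(scale=np.sqrt(2 * D_true * dt), size=(2000, 2))
trajectory = np.cumsum(steps, axis=0)
print(diffusion_coefficient(trajectory, dt))  # should come out close to 0.05
```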
Nanovid microscopy
[ "Chemistry", "Biology" ]
624
[ "Cell imaging", "nan", "Microscopy" ]
6,988,121
https://en.wikipedia.org/wiki/Arithmetic%20and%20geometric%20Frobenius
In mathematics, the Frobenius endomorphism is defined in any commutative ring R that has characteristic p, where p is a prime number. Namely, the mapping φ that takes r in R to rp is a ring endomorphism of R. The image of φ is then Rp, the subring of R consisting of p-th powers. In some important cases, for example finite fields, φ is surjective. Otherwise φ is an endomorphism but not a ring automorphism. The terminology of geometric Frobenius arises by applying the spectrum of a ring construction to φ. This gives a mapping φ*: Spec(Rp) → Spec(R) of affine schemes. Even in cases where Rp = R this is not the identity, unless R is the prime field. Mappings created by fibre product with φ*, i.e. base changes, tend in scheme theory to be called geometric Frobenius. The reason for a careful terminology is that the Frobenius automorphism in Galois groups, or defined by transport of structure, is often the inverse mapping of the geometric Frobenius. As in the case of a cyclic group in which a generator is also the inverse of a generator, there are in many situations two possible definitions of Frobenius, and without a consistent convention some problem of a minus sign may appear. References , p. 5 Mathematical terminology Algebraic geometry Algebraic number theory
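The claim that φ is a ring endomorphism rests on the "freshman's dream" in characteristic p. The display below spells out the standard verification (textbook material, not taken from the cited reference) and records the usual finite-field convention under which the arithmetic and geometric Frobenius are mutually inverse.

```latex
% The map \varphi(r) = r^p on a commutative ring R of characteristic p
% is a ring endomorphism. Multiplicativity and unitality are immediate:
\[
  \varphi(rs) = (rs)^p = r^p s^p = \varphi(r)\,\varphi(s),
  \qquad
  \varphi(1) = 1^p = 1 .
\]
% Additivity is the ``freshman's dream'':
\[
  \varphi(r+s) = (r+s)^p
    = \sum_{k=0}^{p} \binom{p}{k}\, r^{k} s^{p-k}
    = r^p + s^p
    = \varphi(r) + \varphi(s),
\]
% since $p \mid \binom{p}{k}$ for $0 < k < p$ and $p \cdot x = 0$ for all
% $x \in R$, every mixed term vanishes.
% On the finite field $\mathbb{F}_q$ with $q = p^n$, the arithmetic Frobenius
% $x \mapsto x^{q}$ generates $\mathrm{Gal}(\overline{\mathbb{F}}_q/\mathbb{F}_q)$;
% the geometric Frobenius is, by the usual convention, its inverse.
```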
Arithmetic and geometric Frobenius
[ "Mathematics" ]
297
[ "Algebraic geometry", "Fields of abstract algebra", "Algebraic number theory", "nan", "Number theory" ]
6,988,390
https://en.wikipedia.org/wiki/Cryptographic%20nonce
In cryptography, a nonce is an arbitrary number that can be used just once in a cryptographic communication. It is often a random or pseudo-random number issued in an authentication protocol to ensure that each communication session is unique, and therefore that old communications cannot be reused in replay attacks. Nonces can also be useful as initialization vectors and in cryptographic hash functions. Definition A nonce is an arbitrary number used only once in a cryptographic communication, in the spirit of a nonce word. They are often random or pseudo-random numbers. Many nonces also include a timestamp to ensure exact timeliness, though this requires clock synchronisation between organisations. The addition of a client nonce ("cnonce") helps to improve the security in some ways, as implemented in digest access authentication. To ensure that a nonce is used only once, it should be time-variant (including a suitably fine-grained timestamp in its value), or generated with enough random bits to ensure an insignificantly low chance of repeating a previously generated value. Some authors define pseudo-randomness (or unpredictability) as a requirement for a nonce. Nonce is a word dating back to Middle English for something only used once or temporarily (often with the construction "for the nonce"). It descends from the construction "then anes" ("the one [purpose]"). The term might also be derived from "number used only once". In Britain the term may be avoided because "nonce" in modern British English means a paedophile. Usage Authentication Authentication protocols may use nonces to ensure that old communications cannot be reused in replay attacks. For instance, nonces are used in HTTP digest access authentication to calculate an MD5 digest of the password. The nonces are different each time the 401 authentication challenge response code is presented, thus making replay attacks virtually impossible. The scenario of ordering products over the Internet can provide an example of the usefulness of nonces in preventing replay attacks. An attacker could take the encrypted information and—without needing to decrypt—could continue to send a particular order to the supplier, thereby ordering products over and over again under the same name and purchase information. The nonce is used to give 'originality' to a given message so that if the company receives any other orders from the same person with the same nonce, it will discard those as invalid orders. A nonce may be used to ensure security for a stream cipher. Where the same key is used for more than one message, a different nonce is used to ensure that the keystream is different for each message encrypted with that key; often the message number is used. Secret nonce values are used by the Lamport signature scheme as a signer-side secret which can be selectively revealed for comparison to public hashes for signature creation and verification. Initialization vectors Initialization vectors may be referred to as nonces, as they are typically random or pseudo-random. Hashing Nonces are used in proof-of-work systems to vary the input to a cryptographic hash function so as to obtain a hash for a certain input that fulfils certain arbitrary conditions. In doing so, it becomes far more difficult to create a "desirable" hash than to verify it, shifting the burden of work onto one side of a transaction or system.
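A minimal sketch of nonce-based challenge-response authentication, using only the Python standard library; the shared key, the 16-byte nonce length and the order message are illustrative assumptions rather than part of any particular protocol. The server issues a fresh random nonce, the client binds its message to that nonce with an HMAC, and the server refuses to accept the same nonce twice, which is what defeats replayed messages.

```python
import hmac, hashlib, secrets

# Shared secret between client and server (hypothetical; in practice provisioned securely).
KEY = secrets.token_bytes(32)
used_nonces = set()

def issue_challenge() -> bytes:
    """Server side: issue a fresh, unpredictable nonce for this exchange."""
    return secrets.token_bytes(16)

def client_response(nonce: bytes, message: bytes) -> bytes:
    """Client side: bind the message to the server's nonce with an HMAC."""
    return hmac.new(KEY, nonce + message, hashlib.sha256).digest()

def server_verify(nonce: bytes, message: bytes, tag: bytes) -> bool:
    """Server side: accept each nonce at most once, defeating replayed responses."""
    if nonce in used_nonces:
        return False                        # replay: this nonce was already consumed
    used_nonces.add(nonce)
    expected = hmac.new(KEY, nonce + message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = issue_challenge()
tag = client_response(nonce, b"order #42")
print(server_verify(nonce, b"order #42", tag))   # True  - first use is accepted
print(server_verify(nonce, b"order #42", tag))   # False - replay of the same nonce is rejected
```

Deployed protocols such as HTTP digest access authentication follow the same pattern while adding further fields (timestamps, counters, client nonces), as described above.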
For example, proof of work, using hash functions, was considered as a means to combat email spam by forcing email senders to find a hash value for the email (which included a timestamp to prevent pre-computation of useful hashes for later use) that had an arbitrary number of leading zeroes, by hashing the same input with a large number of values until a "desirable" hash was obtained. Similarly, the Bitcoin blockchain hashing algorithm can be tuned to an arbitrary difficulty by changing the required minimum/maximum value of the hash so that the number of bitcoins awarded for new blocks does not increase linearly with increased network computation power as new users join. This is likewise achieved by forcing Bitcoin miners to add nonce values to the value being hashed to change the hash algorithm output. As cryptographic hash algorithms cannot easily be predicted based on their inputs, this makes the act of blockchain hashing and the possibility of being awarded bitcoins something of a lottery, where the first "miner" to find a nonce that delivers a desirable hash is awarded bitcoins. See also Key stretching Salt (cryptography) Nonce word References External links – HTTP Authentication: Basic and Digest Access Authentication – Robust Explicit Congestion Notification (ECN) Signaling with Nonces – UMAC: Message Authentication Code using Universal Hashing Web Services Security Cryptography
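The proof-of-work idea described above can be sketched in a few lines: search for a nonce such that the SHA-256 hash of the message plus nonce falls below a target, i.e. begins with a required number of zero bits. The message, the 8-byte nonce encoding and the difficulty value are illustrative assumptions; real systems such as Bitcoin hash a more elaborate block header format.

```python
import hashlib
from itertools import count

def find_pow_nonce(message: bytes, difficulty_bits: int) -> int:
    """Try nonces until SHA-256(message || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)            # hashes strictly below this value qualify
    for nonce in count():
        digest = hashlib.sha256(message + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

msg = b"pay 1 coin to alice"
nonce = find_pow_nonce(msg, 16)                      # ~2**16 attempts on average
print(nonce, hashlib.sha256(msg + nonce.to_bytes(8, "big")).hexdigest())
```

Verifying a claimed nonce takes a single hash, while finding it takes on the order of 2^difficulty attempts, which is exactly the asymmetry of work described above.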
Cryptographic nonce
[ "Mathematics", "Engineering" ]
986
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
6,988,866
https://en.wikipedia.org/wiki/Directed%20percolation
In statistical physics, directed percolation (DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect of gravity. Varying the microscopic connectivity of the pores, these models display a phase transition from a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading with a transition between survival and extinction of the disease depending on the infection rate. More generally, the term directed percolation stands for a universality class of continuous phase transitions which are characterized by the same type of collective behavior on large scales. Directed percolation is probably the simplest universality class of transitions out of thermal equilibrium. Lattice models One of the simplest realizations of DP is bond directed percolation. This model is a directed variant of ordinary (isotropic) percolation and can be introduced as follows. The figure shows a tilted square lattice with bonds connecting neighboring sites. The bonds are permeable (open) with probability p and impermeable (closed) otherwise. The sites and bonds may be interpreted as holes and randomly distributed channels of a porous medium. The difference between ordinary and directed percolation is illustrated to the right. In isotropic percolation a spreading agent (e.g. water) introduced at a particular site percolates along open bonds, generating a cluster of wet sites. Contrarily, in directed percolation the spreading agent can pass open bonds only along a preferred direction in space, as indicated by the arrow. The resulting red cluster is directed in space. As a dynamical process Interpreting the preferred direction as a temporal degree of freedom, directed percolation can be regarded as a stochastic process that evolves in time. In a minimal, two-parameter model that includes bond and site DP as special cases, a one-dimensional chain of sites evolves in discrete time t, which can be viewed as a second dimension, and all sites are updated in parallel. Activating a certain site (called the initial seed) at time t = 0, the resulting cluster can be constructed row by row. The corresponding number of active sites varies as time evolves. Universal scaling behavior The DP universality class is characterized by a certain set of critical exponents. These exponents depend on the spatial dimension d. Above the so-called upper critical dimension they are given by their mean-field values, while in lower dimensions they have been estimated numerically. Current numerical estimates for these exponents are tabulated in the literature. Other examples In two dimensions, the percolation of water through a thin tissue (such as toilet paper) has the same mathematical underpinnings as the flow of electricity through two-dimensional random networks of resistors. In chemistry, chromatography can be understood with similar models. The propagation of a tear or rip in a sheet of paper, in a sheet of metal, or even the formation of a crack in ceramic bears broad mathematical resemblance to the flow of electricity through a random network of electrical fuses. Above a certain critical point, the electrical flow will cause a fuse to pop, possibly leading to a cascade of failures, resembling the propagation of a crack or tear.
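A minimal Monte Carlo sketch of bond directed percolation interpreted as a dynamical process, as described above: starting from a single seed on a one-dimensional ring of sites, each active site opens each of its two forward (diagonal) bonds independently with probability p, and the number of active sites is tracked row by row. The lattice width, time horizon and probed p values are illustrative assumptions; numerical studies place the critical probability of (1+1)-dimensional bond DP near p ≈ 0.6447.

```python
import random

def directed_bond_percolation(p, width=200, tmax=200, seed=1):
    """Spread activity from a single seed; each active site opens each of its two
    forward (diagonal) bonds independently with probability p. Returns N(t),
    the number of active sites in each row."""
    rng = random.Random(seed)
    active = {width // 2}                      # single initial seed at t = 0
    history = [len(active)]
    for _ in range(tmax):
        nxt = set()
        for site in active:
            if rng.random() < p:               # bond to one forward neighbour
                nxt.add(site)
            if rng.random() < p:               # bond to the other forward neighbour
                nxt.add((site + 1) % width)
        active = nxt
        history.append(len(active))
        if not active:                         # cluster died out
            break
    return history

for p in (0.55, 0.6447, 0.75):                 # below, near, and above the transition
    print(p, directed_bond_percolation(p)[-1])
```

Below the threshold the activity typically dies out, while above it a finite fraction of runs survives indefinitely; this is the permeable/impermeable transition described above.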
The study of percolation helps indicate how the flow of electricity will redistribute itself in the fuse network, thus modeling which fuses are most likely to pop next, and how fast they will pop, and what direction the crack may curve in. Examples can be found not only in physical phenomena, but also in biology, neuroscience, ecology (e.g. evolution), and economics (e.g. diffusion of innovation). Percolation can be considered to be a branch of the study of dynamical systems or statistical mechanics. In particular, percolation networks exhibit a phase change around a critical threshold. Experimental realizations In spite of vast success in the theoretical and numerical studies of DP, obtaining convincing experimental evidence has proved challenging. In 1999 an experiment on flowing sand on an inclined plane was identified as a physical realization of DP. In 2007, critical behavior of DP was finally found in the electrohydrodynamic convection of liquid crystal, where a complete set of static and dynamic critical exponents and universal scaling functions of DP were measured in the transition to spatiotemporal intermittency between two turbulent states. See also Percolation threshold Ziff–Gulari–Barshad model Percolation critical exponents Sources Literature L. Canet: "Processus de réaction-diffusion : une approche par le groupe de renormalisation non perturbatif", Thèse. Thèse en ligne Muhammad Sahimi. Applications of Percolation Theory. Taylor & Francis, 1994. (cloth), (paper) Geoffrey Grimmett. Percolation (2. ed). Springer Verlag, 1999. References Sources Percolation theory Critical phenomena
Directed percolation
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,024
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Percolation theory", "Combinatorics", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
6,988,897
https://en.wikipedia.org/wiki/Defense%20physiology
Defense physiology is a term used to refer to the symphony of body function (physiology) changes which occur in response to a stress or threat. When the body executes the "fight-or-flight" reaction or stress response, the nervous system initiates, coordinates and directs specific changes in how the body is functioning, preparing the body to deal with the threat. (See also General adaptation syndrome.) Definitions Stress: As it pertains to the term defense physiology, the term stress refers to a perceived threat to the continued functioning of the body / life according to its current state. Threat: A threat may be consciously recognized or not. A physical event (a loud noise, a car collision or a coming attack), a chemical or a biological agent which alters (or has the possibility to alter) body function (physiology) away from optimum or healthy functioning (or away from its current state of functioning) may be perceived as a threat (also called a stressor). Life circumstances, though posing no immediate physical danger, could be perceived as a threat. Anything that could change the continuation of the person's life as they are currently experiencing it could be perceived as a threat. Physiological reactions to threat (or perceived threat) A threat may be either empirical (an outside observer may agree that the event or circumstance poses a threat) or a priori (an outside observer would not agree that the event or circumstance poses a threat). What is important to the individual, in terms of the body's response, is that a threat is perceived. The perception of a threat may also trigger an associated 'feeling of distress'. The physiological reactions triggered by the mind do not differentiate between physical and mental threats; the "fight-or-flight" response is the same for both. Duration of the threat and its physiological effects on the nervous system Acute Stress Reaction - The body executes the "fight-or-flight" reaction to get the body out of danger quickly. When the timing between the threat and the resolution of the threat is close, the "fight-or-flight" reaction is executed, the threat is handled, and the body returns to its previous state (taking care of the business of life - digestion, relaxation, tissue repair etc.). The body has evolved to stay in this mode for only a short time. Chronic Stress State - When the timing between the threat and the resolution of the threat is more distant (the threat or the perception of threat is prolonged or other threats occur before the body has recovered), the "fight-or-flight" reaction continues and becomes the new "standard operating condition" of the body, "chronic defense physiology". Continuing in this mode produces significant negative effects (distress) in many aspects of body functioning (physical, mental and emotional distress). See also Hypothalamic–pituitary–adrenal axis References Physiology Stress (biology) Endocrine system
Defense physiology
[ "Biology" ]
602
[ "Organ systems", "Endocrine system", "Physiology" ]
979,503
https://en.wikipedia.org/wiki/Graham%20number
The Graham number or Benjamin Graham number is a figure used in securities investing that measures a stock's so-called fair value. Named after Benjamin Graham, the founder of value investing, the Graham number can be calculated as follows: Graham number = √(22.5 × (earnings per share) × (book value per share)). The final number is, theoretically, the maximum price that a defensive investor should pay for the given stock. Put another way, a stock priced below the Graham Number would be considered a good value, if it also meets a number of other criteria. The Number represents the geometric mean of the maximum that one would pay based on earnings and based on book value. Alternative calculation Earnings per share is calculated by dividing net income by shares outstanding. Book value is another way of saying shareholders' equity. Therefore, book value per share is calculated by dividing equity by shares outstanding. Consequently, the formula for the Graham number can also be written as follows: √(22.5 × (net income ÷ shares outstanding) × (shareholders' equity ÷ shares outstanding)). See also Altman Z-score Beneish M-score Ohlson O-score Fundamental analysis Magic formula investing Value investing References Valuation (finance) Mathematical finance
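A minimal sketch of both forms of the calculation described above, with a hypothetical company's figures used purely for illustration; the 22.5 factor is Graham's maximum P/E of 15 multiplied by his maximum price-to-book ratio of 1.5.

```python
from math import sqrt

def graham_number(eps: float, book_value_per_share: float) -> float:
    """Graham number = sqrt(22.5 * EPS * BVPS); 22.5 = 15 (max P/E) * 1.5 (max P/B)."""
    return sqrt(22.5 * eps * book_value_per_share)

def graham_number_alt(net_income: float, shareholders_equity: float, shares_outstanding: float) -> float:
    """Equivalent form using company totals instead of per-share figures."""
    eps = net_income / shares_outstanding
    bvps = shareholders_equity / shares_outstanding
    return graham_number(eps, bvps)

# Hypothetical company: EPS of $3.50 and book value of $22 per share.
print(round(graham_number(3.50, 22.0), 2))                # ~41.62
print(round(graham_number_alt(350e6, 2.2e9, 100e6), 2))   # same figures expressed as totals
```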
Graham number
[ "Mathematics" ]
210
[ "Applied mathematics", "Mathematical finance" ]
979,514
https://en.wikipedia.org/wiki/Glycophorin%20C
Glycophorin C (GYPC; CD236/CD236R; glycoprotein beta; glycoconnectin; PAS-2) plays a functionally important role in maintaining erythrocyte shape and regulating membrane material properties, possibly through its interaction with protein 4.1. Moreover, it has previously been shown that membranes deficient in protein 4.1 exhibit decreased content of glycophorin C. It is also an integral membrane protein of the erythrocyte and acts as the receptor for the Plasmodium falciparum protein PfEBP-2 (erythrocyte binding protein 2; baebl; EBA-140). History The antigen was discovered in 1960 when three women who lacked the antigen made anti-Gea in response to pregnancy. The antigen is named after one of the patients – a Mrs Gerbich. The following year a new but related antigen was discovered in a Mrs Yus for whom an antigen in this system is also named. In 1972 a numerical system for the antigens in this blood group was introduced. Genomics Despite the similar names glycophorin C and D are unrelated to the other three glycophorins which encoded on chromosome 4 at location 4q28-q31. These latter proteins are closely related. Glycophorin A and glycophorin B carry the blood group MN and Ss antigens respectively. There are ~225,000 molecules of GPC and GPD per erythrocyte. Originally it was thought that glycophorin C and D were the result of a gene duplication event but it was only later realised that they were encoded by the same gene. Glycophorin D (GPD) is generated from the glycophorin C messenger RNA by leaky translation at an in frame AUG at codon 30: glycophorin D = glycophorin C residues 30 to 128. This leaky translation appears to be a uniquely human trait. Glycophorin C (GPC) is a single polypeptide chain of 128 amino acids and is encoded by a gene on the long arm of chromosome 2 (2q14-q21). The gene was first cloned in 1989 by High et al. The GPC gene is organized in four exons distributed over 13.5 kilobase pairs of DNA. Exon 1 encodes residues 1-16, exon 2 residues 17-35, exon 3 residues 36-63 and exon 4 residues 64-128. Exons 2 and 3 are highly homologous, with less than 5% nucleotide divergence. These exons also differ by a 9 amino acid insert at the 3' end of exon 3. The direct repeated segments containing these exons is 3.4 kilobase pairs long and may be derived from a recent duplication of a single ancestral domain. Exons 1, 2 and most of exon 3 encode the N-terminal extracellular domain while the remainder of exon 3 and exon 4 encode transmembrane and cytoplasmic domains. Two isoforms are known and the gene is expressed in a wide variety of tissues including kidney, thymus, stomach, breast, adult liver and erythrocyte. In the non erythroid cell lines, expression is lower than in the erythrocyte and the protein is differentially glycosylated. In the erythrocyte glycophorin C makes up ~4% of the membrane sialoglycoproteins. The average number of O linked chains is 12 per molecule. The gene is expressed early in the development of the erythrocyte, specifically in the erythroid burst-forming unit and erythroid colony-forming unit. The mRNA from human erythroblasts is ~1.4 kilobases long and the transcription start site in erythroid cells has been mapped to 1050 base pairs 5' of the start codon. It is expressed early in development and before the Kell antigens, Rhesus-associated glycoprotein, glycophorin A, band 3, the Rhesus antigen and glycophorin B. In melanocytic cells Glycophorin C gene expression may be regulated by MITF. 
GPC appears to be synthesized in excess in the erythrocyte and that the membrane content is regulated by band 4.1 (protein 4.1). Additional data on the regulation of glycophorin C is here. In a study of this gene among the Hominoidea two finding unique to humans emerged: (1) an excess of non-synonymous divergence among species that appears to be caused solely by accelerated evolution and (2) the ability of the single GYPC gene to encode both the GPC and GPD proteins. The cause for this is not known but it was suggested that these findings might be the result of infection by Plasmodium falciparum. Molecular biology After separation of red cell membranes by SDS-polyacrylamide gel electrophoresis and staining with periodic acid-Schiff staining (PAS) four glycophorins have been identified. These have been named glycophorin A, B, C and D in order of the quantity present in the membrane – glycophorin A being the most and glycophorin D the least common. A fifth (glycophorin E) has been identified within the human genome but cannot easily be detected on routine gel staining. In total the glycophorins constitute ~2% of the total erythrocyte membrane protein mass. Confusingly these proteins are also known under different nomenclatures but they are probably best known as the glycophorins. Glycophorin C was first isolated in 1978. Glycophorin C and D are minor sialoglycoproteins contributing to 4% and 1% to the PAS-positive material and are present at about 2.0 and 0.5 x 105 copies/cell respectively. In polyacrylimide gels glycophorin C's apparent weight is 32 kilodaltons (32 kDa). Its structure is similar to that of other glycophorins: a highly glycoslated extracellular domain (residues 1-58), a transmembrane domain (residues 59-81) and an intracellular domain (residues 82-128). About 90% of the glycophorin C present in the erythrocyte is bound to the cytoskeleton and the remaining 10% moves freely within the membrane. Glycophorin D's apparent molecular weight is 23kDa. On average this protein has 6 O linked oligosaccharides per molecule. Within the erythrocyte it interacts with band 4.1 (an 80-kDa protein) and p55 (a palmitoylated peripheral membrane phosphoprotein and a member of the membrane-associated guanylate kinase family) to form a ternary complex that is critical for the shape and stability of erythrocytes. The major attachment sites between the erythrocyte spectrin-actin cytoskeleton and the lipid bilayer are glycophorin C and band 3. The interaction with band 4.1 and p55 is mediated by the N terminal 30 kD domain of band 4.1 binding to a 16 amino acid segment (residues 82-98: residues 61-77 of glycophorin D) within the cytoplasmic domain of glycophorin C and to a positively charged 39 amino acid motif in p55. The majority of protein 4.1 is bound to glycophorin C. The magnitude of the strength of the interaction between glycophorin C and band 4.1 has been estimated to be 6.9 microNewtons per meter, a figure typical of protein–protein interactions. Glycophorin C normally shows oscillatory movement in the erythrocyte membrane. This is reduced in Southeast Asian ovalocytosis a disease of erythrocytes due to a mutation in band 3. Transfusion medicine These glycophorins are associated with eleven antigens of interest to transfusion medicine: the Gerbich (Ge2, Ge3, Ge4), the Yussef (Yus), the Webb (Wb or Ge5), the Duch (Dh(a) or Ge8), the Leach, the Lewis II (Ls(a) or Ge6), the Ahonen (An(a) or Ge7) and GEPL (Ge10*), GEAT (Ge11*) and GETI (Ge12*). 
Six are of high prevalence (Ge2, Ge3, Ge4, Ge10*, Ge11*, Ge12*) and five of low prevalence (Wb, Ls(a), An(a), Dh(a) and Ge9). Gerbich antigen Glycophorin C and D encode the Gerbich (Ge) antigens. There are four alleles, Ge-1 to Ge-4. Three types of Ge antigen negativity are known: Ge-1,-2,-3 (Leach phenotype), Ge-2,-3 and Ge-2,+3. A 3.4 kilobase pair deletion within the gene, which probably arose because of unequal crossing over between the two repeated domains, is responsible for the formation of the Ge-2,-3 genotype. The breakpoints of the deletion are located within introns 2 and 3 and result in the deletion of exon 3. This mutant gene is transcribed as a messenger RNA with a continuous open reading frame extending over 300 nucleotides and is translated into the sialoglycoprotein found on Ge-2,-3 red cells. A second 3.4 kilobase pair deletion within the glycophorin C gene eliminates only exon 2 by a similar mechanism and generates the mutant gene encoding for the abnormal glycoprotein found on Ge-2,+3 erythrocytes. The Ge2 epitope is antigenic only on glycophorin D and is a cryptic antigen in glycophorin C. It is located within exon 2 and is sensitive to trypsin and papain but resistant to chymotrypsin and pronase. The Ge3 epitope is encoded by exon 3. It is sensitive to trypsin but resistant to chymotrypsin, papain and pronase. It is thought to lie between amino acids 42-50 in glycophorin C (residues 21-49 in glycophorin D). Ge4 is located within the first 21 amino acids of glycophorin C. It is sensitive to trypsin, papain, pronase and neuraminidase. Leach antigen The relatively rare Leach phenotype is due either to a deletion in exons 3 and 4 or to a frameshift mutation causing a premature stop codon in the glycophorin C gene, and persons with this phenotype are less susceptible (~60% of the control rate) to invasion by Plasmodium falciparum. Such individuals have a subtype of a condition called hereditary elliptocytosis. The abnormally shaped cells are known as elliptocytes or cameloid cells. The basis for this phenotype was first reported by Telen et al. The phenotype is Ge:-2,-3,-4. Yussef antigen The Yussef (Yus) phenotype is due to a 57 base pair deletion corresponding to exon 2. The antigen is known as GPC Yus. Glycophorin C mutations are rare in most of the Western world, but are more common in some places where malaria is endemic. In Melanesia a greater percentage of the population is Gerbich negative (46.5%) than in any other part of the world. The incidence of the Gerbich-negative phenotype caused by an exon 3 deletion in the Wosera (East Sepik Province) and Liksul (Madang Province) populations of Papua New Guinea is 0.463 and 0.176 respectively. Webb antigen The rare Webb (Wb) antigen (~1/1000 donors), originally described in 1963 in Australia, is the result of an alteration in glycosylation of glycophorin C: an A to G transition at nucleotide 23 results in an asparagine residue instead of the normal serine residue, with the resultant loss of glycosylation. The antigen is known as GPC Wb. Duch antigen The rare Duch (Dh) antigen was discovered in Aarhus, Denmark (1968) and is also found on glycophorin C. It is due to a C to T transition at nucleotide 40 resulting in the replacement of leucine by phenylalanine. This antigen is sensitive to trypsin but resistant to chymotrypsin and Endo F. Lewis antigen The Lewis II (Ls(a); Ge-6) antigen has an insert of 84 nucleotides into the ancestral GPC gene: the insert corresponds to the entire sequence of exon 3.
Two subtypes of this antigen are known: beta Ls(a), which carries the Ge3 epitope, and gamma Ls(a), which carries both the Ge2 and Ge3 epitopes. This antigen is also known as the Rs(a) antigen. Ahonen antigen The Ahonen (Ana) antigen was first reported in 1972. The antigen is found on glycophorin D. This antigen was discovered in a Finnish man on May 5, 1968, during post-operative blood cross matching for an aortic aneurysm repair. In Finland the incidence of this antigen was found to be 6/10,000 donors. In Sweden the incidence was 2/3266 donors. The molecular basis for the origin of this antigen lies within exon 2, where a G->T substitution in codon 67 (base position 199) converts an alanine to a serine residue. While this epitope exists within glycophorin C, it is a cryptantigen there. It is only antigenic in glycophorin D because of the truncated N terminus. Others A duplicated exon 2 has also been reported in erythrocytes from Japanese blood donors (~2/10,000). This mutation has not been associated with a new antigen. Antibodies Antibodies to the Gerbich antigens have been associated with transfusion reactions and mild hemolytic disease of the newborn. In other studies naturally occurring anti-Ge antibodies have been found and appear to be of no clinical significance. Immunological tolerance towards the Ge antigen has been suggested. Other areas High expression of glycophorin C has been associated with a poor prognosis for acute lymphoblastic leukaemia in Chinese populations. Glycophorin C is the receptor for the protein erythrocyte binding antigen 140 (EBA140) of Plasmodium falciparum. This interaction mediates a principal invasion pathway into the erythrocytes. The partial resistance of erythrocytes lacking this protein to invasion by P. falciparum was first noted in 1982. The lack of Gerbich antigens in the population of Papua New Guinea was noted in 1989. Influenza A and B bind to glycophorin C. References External links Erythrocyte membrane cartoon Clusters of differentiation Glycoproteins Transmembrane receptors Blood antigen systems Transfusion medicine
Glycophorin C
[ "Chemistry" ]
3,378
[ "Transmembrane receptors", "Glycobiology", "Glycoproteins", "Signal transduction" ]
979,564
https://en.wikipedia.org/wiki/Near-infrared%20spectroscopy
Near-infrared spectroscopy (NIRS) is a spectroscopic method that uses the near-infrared region of the electromagnetic spectrum (from 780 nm to 2500 nm). Typical applications include medical and physiological diagnostics and research including blood sugar, pulse oximetry, functional neuroimaging, sports medicine, elite sports training, ergonomics, rehabilitation, neonatal research, brain computer interface, urology (bladder contraction), and neurology (neurovascular coupling). There are also applications in other areas, such as pharmaceutical, food and agrochemical quality control, atmospheric chemistry, and combustion research. Theory Near-infrared spectroscopy is based on molecular overtone and combination vibrations. Overtones and combinations exhibit lower intensity than the fundamental; as a result, the molar absorptivity in the near-IR region is typically quite small. (NIR absorption bands are typically 10–100 times weaker than the corresponding fundamental mid-IR absorption band.) The lower absorption allows NIR radiation to penetrate much further into a sample than mid-infrared radiation. Near-infrared spectroscopy is, therefore, not a particularly sensitive technique, but it can be very useful in probing bulk material with little to no sample preparation. The molecular overtone and combination bands seen in the near-IR are typically very broad, leading to complex spectra; it can be difficult to assign specific features to specific chemical components. Multivariate (multiple variables) calibration techniques (e.g., principal components analysis, partial least squares, or artificial neural networks) are often employed to extract the desired chemical information. Careful development of a set of calibration samples and application of multivariate calibration techniques is essential for near-infrared analytical methods. History The discovery of near-infrared energy is ascribed to William Herschel in the 19th century, but the first industrial application began in the 1950s. In the first applications, NIRS was used only as an add-on unit to other optical devices that used other wavelengths such as ultraviolet (UV), visible (Vis), or mid-infrared (MIR) spectrometers. In the 1980s, a single-unit, stand-alone NIRS system was made available. In the 1980s, Karl Norris (while working at the USDA Instrumentation Research Laboratory, Beltsville, USA) pioneered the use of NIR spectroscopy for quality assessments of agricultural products. Since then, its use has expanded from food and agriculture to the chemical, polymer, and petroleum industries; the pharmaceutical industry; biomedical sciences; and environmental analysis. With the introduction of light-fiber optics in the mid-1980s and the monochromator-detector developments in the early 1990s, NIRS became a more powerful tool for scientific research. The method has been used in a number of fields of science including physics, physiology, and medicine. It is only in the last few decades that NIRS began to be used as a medical tool for monitoring patients, with the first clinical application of so-called fNIRS in 1994. Instrumentation Instrumentation for near-IR (NIR) spectroscopy is similar to instruments for the UV-visible and mid-IR ranges. There is a source, a detector, and a dispersive element (such as a prism, or, more commonly, a diffraction grating) to allow the intensity at different wavelengths to be recorded. Fourier transform NIR instruments using an interferometer are also common, especially for wavelengths above ~1000 nm.
Depending on the sample, the spectrum can be measured in either reflection or transmission. Common incandescent or quartz halogen light bulbs are most often used as broadband sources of near-infrared radiation for analytical applications. Light-emitting diodes (LEDs) can also be used. For high precision spectroscopy, wavelength-scanned lasers and frequency combs have recently become powerful sources, albeit with sometimes longer acquisition timescales. When lasers are used, a single detector without any dispersive elements might be sufficient. The type of detector used depends primarily on the range of wavelengths to be measured. Silicon-based CCDs are suitable for the shorter end of the NIR range, but are not sufficiently sensitive over most of the range (over 1000 nm). InGaAs and PbS devices are more suitable and have higher quantum efficiency for wavelengths above 1100 nm. It is possible to combine silicon-based and InGaAs detectors in the same instrument. Such instruments can record both UV-visible and NIR spectra 'simultaneously'. Instruments intended for chemical imaging in the NIR may use a 2D array detector with an acousto-optic tunable filter. Multiple images may be recorded sequentially at different narrow wavelength bands. Many commercial instruments for UV/vis spectroscopy are capable of recording spectra in the NIR range (to perhaps ~900 nm). In the same way, the range of some mid-IR instruments may extend into the NIR. In these instruments, the detector used for the NIR wavelengths is often the same detector used for the instrument's "main" range of interest. NIRS as an analytical technique The use of NIR as an analytical technique did not come from extending the use of mid-IR into the near-IR range, but developed independently. A striking way this was exhibited is that, while mid-IR spectroscopists use wavenumbers (cm−1) when displaying spectra, NIR spectroscopists use wavelength (nm), as is used in ultraviolet–visible spectroscopy. The early practitioners of IR spectroscopy, who depended on assignment of absorption bands to specific bond types, were frustrated by the complexity of the region. However, as a quantitative tool, the lower molar absorption levels in the region tended to keep absorption maxima "on-scale", enabling quantitative work with little sample preparation. The techniques applied to extract the quantitative information from these complex spectra were unfamiliar to analytical chemists, and the technique was viewed with suspicion in academia. Generally, a quantitative NIR analysis is accomplished by selecting a group of calibration samples, for which the concentration of the analyte of interest has been determined by a reference method, and finding a correlation between various spectral features and those concentrations using a chemometric tool. The calibration is then validated by using it to predict the analyte values for samples in a validation set, whose values have been determined by the reference method but have not been included in the calibration. A validated calibration is then used to predict the values of samples. The complexity of the spectra is overcome by the use of multivariate calibration. The two tools most often used are multi-wavelength linear regression and partial least squares. Applications Typical applications of NIR spectroscopy include the analysis of food products, pharmaceuticals, combustion products, and a major branch of astronomical spectroscopy.
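A minimal sketch of the calibration and validation workflow described above, using partial least squares regression from scikit-learn on synthetic stand-in data; the simulated band shape, noise level, sample counts and the choice of three latent components are illustrative assumptions, not recommendations for any real instrument or analyte.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NIR data: 120 samples x 200 wavelength channels, where the
# "analyte" contributes a broad absorption band plus noise. Real spectra and reference
# concentrations would come from the instrument and a reference assay.
rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 200)
band = np.exp(-0.5 * ((wavelengths - 1720) / 60.0) ** 2)     # broad overtone-like band
conc = rng.uniform(0, 10, size=120)                           # reference analyte values
spectra = np.outer(conc, band) + 0.05 * rng.normal(size=(120, 200))

# Calibration / validation split, mirroring the workflow described above.
X_cal, X_val, y_cal, y_val = train_test_split(spectra, conc, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(np.mean((y_pred - y_val) ** 2))
print(f"RMSEP on the validation set: {rmsep:.3f}")
```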
Astronomical spectroscopy Near-infrared spectroscopy is used in astronomy for studying the atmospheres of cool stars where molecules can form. The vibrational and rotational signatures of molecules such as titanium oxide, cyanide, and carbon monoxide can be seen in this wavelength range and can give a clue towards the star's spectral type. It is also used for studying molecules in other astronomical contexts, such as in molecular clouds where new stars are formed. The astronomical phenomenon known as reddening means that near-infrared wavelengths are less affected by dust in the interstellar medium, such that regions inaccessible by optical spectroscopy can be studied in the near-infrared. Since dust and gas are strongly associated, these dusty regions are exactly those where infrared spectroscopy is most useful. The near-infrared spectra of very young stars provide important information about their ages and masses, which is important for understanding star formation in general. Astronomical spectrographs have also been developed for the detection of exoplanets using the Doppler shift of the parent star due to the radial velocity of the planet around the star. Agriculture Near-infrared spectroscopy is widely applied in agriculture for determining the quality of forages, grains, and grain products, oilseeds, coffee, tea, spices, fruits, vegetables, sugarcane, beverages, fats, and oils, dairy products, eggs, meat, and other agricultural products. It is widely used to quantify the composition of agricultural products because it meets the criteria of being accurate, reliable, rapid, non-destructive, and inexpensive. Abeni and Bergoglio 2001 apply NIRS to chicken breeding as the assay method for characteristics of fat composition. Remote monitoring Techniques have been developed for NIR spectroscopic imaging. Hyperspectral imaging has been applied for a wide range of uses, including the remote investigation of plants and soils. Data can be collected from instruments on airplanes, satellites or unmanned aerial systems to assess ground cover and soil chemistry. Remote monitoring or remote sensing from the NIR spectroscopic region can also be used to study the atmosphere. For example, measurements of atmospheric gases are made from NIR spectra measured by the OCO-2, GOSAT, and the TCCON. Materials science Techniques have been developed for NIR spectroscopy of microscopic sample areas for film thickness measurements, research into the optical characteristics of nanoparticles and optical coatings for the telecommunications industry. Medical uses The application of NIRS in medicine centres on its ability to provide information about the oxygen saturation of haemoglobin within the microcirculation. Broadly speaking, it can be used to assess oxygenation and microvascular function in the brain (cerebral NIRS) or in the peripheral tissues (peripheral NIRS). Cerebral NIRS When a specific area of the brain is activated, the localized blood volume in that area changes quickly. Optical imaging can measure the location and activity of specific regions of the brain by continuously monitoring blood hemoglobin levels through the determination of optical absorption coefficients. NIRS can be used as a quick screening tool for possible intracranial bleeding cases by placing the scanner on four locations on the head. In non-injured patients the brain absorbs the NIR light evenly. 
When there is internal bleeding from an injury, the blood may be concentrated in one location, causing the NIR light to be absorbed more there than at other locations, which the scanner detects. So-called functional NIRS can be used for non-invasive assessment of brain function through the intact skull in human subjects by detecting changes in blood hemoglobin concentrations associated with neural activity, e.g., in branches of cognitive psychology as a partial replacement for fMRI techniques. NIRS can be used on infants, and NIRS is much more portable than fMRI machines; even wireless instrumentation is available, which enables investigations in freely moving subjects. However, NIRS cannot fully replace fMRI because it can only be used to scan cortical tissue, whereas fMRI can be used to measure activation throughout the brain. Special public domain statistical toolboxes for analysis of stand-alone and combined NIRS/MRI measurement have been developed. The application in functional mapping of the human cortex is called functional NIRS (fNIRS) or diffuse optical tomography (DOT). The term diffuse optical tomography is used for three-dimensional NIRS. The terms NIRS, NIRI, and DOT are often used interchangeably, but they have some distinctions. The most important difference between NIRS and DOT/NIRI is that DOT/NIRI is used mainly to detect changes in optical properties of tissue simultaneously from multiple measurement points and display the results in the form of a map or image over a specific area, whereas NIRS provides quantitative data in absolute terms on up to a few specific points. The latter is also used to investigate other tissues such as muscle, breast and tumors. NIRS can be used to quantify blood flow, blood volume, oxygen consumption, reoxygenation rates and muscle recovery time in muscle. By employing several wavelengths and time-resolved (frequency or time domain) and/or spatially resolved methods, blood flow, volume and absolute tissue saturation (StO2, or Tissue Saturation Index (TSI)) can be quantified. Applications of oximetry by NIRS methods include neuroscience, ergonomics, rehabilitation, brain-computer interface, urology, the detection of illnesses that affect the blood circulation (e.g., peripheral vascular disease), the detection and assessment of breast tumors, and the optimization of training in sports medicine. NIRS in conjunction with a bolus injection of indocyanine green (ICG) has been used to measure cerebral blood flow and cerebral metabolic rate of oxygen consumption (CMRO2). It has also been shown that CMRO2 can be calculated with combined NIRS/MRI measurements. Additionally metabolism can be interrogated by resolving an additional mitochondrial chromophore, cytochrome-c-oxidase, using broadband NIRS. NIRS is starting to be used in pediatric critical care, to help manage patients following cardiac surgery. Indeed, NIRS is able to measure venous oxygen saturation (SVO2), which is determined by the cardiac output, as well as other parameters (FiO2, hemoglobin, oxygen uptake). Therefore, examining the NIRS signal provides critical care physicians with an estimate of the cardiac output. NIRS is favoured by patients because it is non-invasive, painless, and does not require ionizing radiation. Optical coherence tomography (OCT) is another NIR medical imaging technique capable of 3D imaging with high resolution on par with low-power microscopy.
Using optical coherence to measure photon pathlength allows OCT to build images of live tissue and clear examinations of tissue morphology. Due to technique differences OCT is limited to imaging 1–2 mm below tissue surfaces, but despite this limitation OCT has become an established medical imaging technique especially for imaging of the retina and anterior segments of the eye, as well as coronaries. A type of neurofeedback, hemoencephalography or HEG, uses NIR technology to measure brain activation, primarily of the frontal lobes, for the purpose of training cerebral activation of that region. The instrumental development of NIRS/NIRI/DOT/OCT has proceeded tremendously during the last years and, in particular, in terms of quantification, imaging and miniaturization. Peripheral NIRS Peripheral microvascular function can be assessed using NIRS. The oxygen saturation of haemoglobin in the tissue (StO2) can provide information about tissue perfusion. A vascular occlusion test (VOT) can be employed to assess microvascular function. Common sites for peripheral NIRS monitoring include the thenar eminence, forearm and calf muscles. Particle measurement NIR is often used in particle sizing in a range of different fields, including studying pharmaceutical and agricultural powders. Industrial uses As opposed to NIRS used in optical topography, general NIRS used in chemical assays does not provide imaging by mapping. For example, a clinical carbon dioxide analyzer requires reference techniques and calibration routines to be able to get accurate CO2 content change. In this case, calibration is performed by adjusting the zero control of the sample being tested after purposefully supplying 0% CO2 or another known amount of CO2 in the sample. Normal compressed gas from distributors contains about 95% O2 and 5% CO2, which can also be used to adjust %CO2 meter reading to be exactly 5% at initial calibration. See also Chemical imaging Fourier transform infrared spectroscopy Fourier transform spectroscopy Functional near-infrared spectroscopy (fNIR/fNIRS) Hyperspectral imaging Infrared spectroscopy Optical imaging Rotational spectroscopy Spectroscopy Terahertz time-domain spectroscopy Vibrational spectroscopy References Further reading Kouli, M.: "Experimental investigations of non invasive measuring of cerebral blood flow in adult human using the near infrared spectroscopy." Dissertation, Technical University of Munich, December 2001. Raghavachari, R., Editor. 2001. Near-Infrared Applications in Biotechnology, Marcel-Dekker, New York, NY. Workman, J.; Weyer, L. 2007. Practical Guide to Interpretive Near-Infrared Spectroscopy, CRC Press-Taylor & Francis Group, Boca Raton, FL. External links NIR Spectroscopy NIR Spectroscopy News Vibrational spectroscopy Infrared technology
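Underlying the haemoglobin measurements discussed above is the modified Beer–Lambert law, which relates the change in optical density at each wavelength to changes in oxy- and deoxy-haemoglobin concentration. The sketch below solves the resulting two-wavelength linear system; the extinction coefficients, source-detector distance, differential pathlength factor and optical density changes are placeholder values chosen only so the example runs, not calibrated constants.

```python
import numpy as np

# Illustrative two-wavelength conversion of optical density changes into changes in
# oxy- (HbO2) and deoxy-haemoglobin (HHb) concentration using the modified
# Beer-Lambert law: dOD(lambda) = (eps_HbO2*dHbO2 + eps_HHb*dHHb) * d * DPF.
eps = np.array([[0.96, 1.80],        # assumed eps at ~760 nm: [HbO2, HHb]
                [1.40, 0.80]])       # assumed eps at ~850 nm: [HbO2, HHb]
distance, dpf = 3.0, 6.0             # source-detector distance (cm) and pathlength factor (assumed)

def delta_hb(d_od_760: float, d_od_850: float):
    """Solve the 2x2 linear system for (dHbO2, dHHb) from measured dOD values."""
    d_od = np.array([d_od_760, d_od_850])
    return np.linalg.solve(eps * distance * dpf, d_od)

d_hbo2, d_hhb = delta_hb(0.012, 0.020)
print(f"dHbO2 = {d_hbo2:+.4f}, dHHb = {d_hhb:+.4f} (arbitrary concentration units)")
```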
Near-infrared spectroscopy
[ "Physics", "Chemistry" ]
3,316
[ "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
979,579
https://en.wikipedia.org/wiki/Duffy%20antigen%20system
Duffy antigen/chemokine receptor (DARC), also known as Fy glycoprotein (FY) or CD234 (Cluster of Differentiation 234), is a protein that in humans is encoded by the ACKR1 gene. The Duffy antigen is located on the surface of red blood cells, and is named after the patient in whom it was discovered. The protein encoded by this gene is a glycosylated membrane protein and a non-specific receptor for several chemokines. The protein is also the receptor for the human malarial parasites Plasmodium vivax, Plasmodium knowlesi and simian malarial parasite Plasmodium cynomolgi. Polymorphisms in this gene are the basis of the Duffy blood group system. History It was noted in the 1920s that black Africans had some intrinsic resistance to malaria, but the basis for this remained unknown. The Duffy antigen gene was the fourth gene associated with the resistance after the genes responsible for sickle cell anaemia, thalassemia and glucose-6-phosphate dehydrogenase. In 1950, the Duffy antigen was discovered in a multiply-transfused hemophiliac named Richard Duffy, whose serum contained the first example of anti-Fya antibody. In 1951, the antibody to a second antigen, Fyb, was discovered in serum. Using these two antibodies, three common phenotypes were defined: Fy(a+b+), Fy(a+b-), and Fy(a-b+). Several other types were later discovered bringing the current total up to 6: Fya, Fyb, Fy3, Fy4, Fy5 and Fy6. Only Fya, Fyb and Fy3 are considered clinically important. Reactions to Fy5 have also rarely been reported. The Fy4 antigen, originally described on Fy (a–b–) RBCs, is now thought to be a distinct, unrelated antigen and is no longer included in the FY system. Genetics and genomics The Duffy antigen/chemokine receptor gene (gp-Fy; CD234) is located on the long arm of chromosome 1 (1q22-q23) and was cloned in 1993. The gene was first localised to chromosome 1 in 1968, and was the first blood system antigen to be localised. It is a single copy gene spanning over 1500 bases and is in two exons. The gene encodes a 336 amino acid acidic glycoprotein. It carries the antigenic determinants of the Duffy blood group system, which consist of four codominant alleles (FY*A and FY*B, coding for the Fy-a and Fy-b antigens respectively, together with FY*X and FY*Fy), five phenotypes (Fy-a, Fy-b, Fy-o, Fy-x and Fy-y) and five antigens. Fy-x is a form of Fy-b where the Fy-b gene is poorly expressed. Fy-x is also known as Fy-bweak or Fy-bWk. This gene has been redesignated ACKR1. Fy-a and Fy-b differ by a single amino acid at position 42: glycine in Fy-a and aspartic acid in Fy-b (guanine in Fy-a and adenine in Fy-b at position 125). A second mutation causing a Duffy negative phenotype is known: the responsible mutation is G -> A at position 298. The genetic basis for the Fy(a-b-) phenotype is a point mutation in the erythroid specific promoter (a T -> C mutation at position -33 in the GATA box). This mutation occurs in the Fy-b allele and has been designated Fy-bEs (erythroid silent). Two isotypes have been identified. The Fy-x allele is characterized by a weak anti-Fy-b reaction and appears to be the result of two separate transitions: Cytosine265Thymine (Arginine89Cysteine) and Guanine298Adenine (Alanine100Threonine). A third mutation (a transversion) in this gene has also been described - G145T (Alanine49Serine) - that has been associated with the Fy-x phenotype.
Most Duffy negative black people carry a silent Fy-b allele with a single T to C substitution at nucleotide -33, impairing the promoter activity in erythroid cells by disrupting a binding site for the GATA1 erythroid transcription factor. The gene is still transcribed in non-erythroid cells in the presence of this mutation. The Duffy negative phenotype occurs at low frequency among whites (~3.5%) and is due to a third mutation that results in an unstable protein (Arg89Cys: cytosine -> thymine at position 265). The silent allele has evolved at least twice in the black population of Africa and evidence for selection for this allele has been found. The selection pressure involved here appears to be more complex than many textbooks might suggest. An independent evolution of this phenotype in Papua New Guinea has also been documented. A comparative study of this gene in seven mammalian species revealed significant differences between species. The species examined included Pan troglodytes (chimpanzee), Macaca mulatta (rhesus monkey), Pongo pygmaeus (orangutan), Rattus norvegicus (brown rat), Mus musculus (mouse), Monodelphis domestica (opossum), Bos taurus (cow) and Canis familiaris (dog). Three exons are present in humans and chimpanzees, whereas only two exons occur in the other species. This additional exon is located at the 5' end and is entirely non-coding. Both intron and exon size vary considerably between the species examined. Between the chimpanzee and the human, 24 differences in the nucleotide sequence were noted. Of these, 18 occurred in non-coding regions. Of the remaining 6, 3 were synonymous and 3 were non-synonymous mutations. The significance of these mutations, if any, is not known. The mouse ortholog has been cloned and exhibits 63% homology to the human gene at the amino acid level. The mouse gene is located on chromosome 1 between the genetic markers Xmv41 and D1Mit166. The mouse gene has two exons (100 and 1064 nucleotides in length), separated by a 461 base pair intron. In the mouse DARC is expressed during embryonic development between days 9.5 and 12. In yellow baboons (Papio cynocephalus) mutations in this gene have been associated with protection from infection with species of the genus Hepatocystis. The ancestral form of extant DARC alleles in humans appears to be the FY*B allele. The gene appears to be under strong purifying selection. The cause of this selective pressure has not yet been identified. Molecular biology Biochemical analysis of the Duffy antigen has shown that it has a high content of α-helical secondary structure - typical of chemokine receptors. Its N-glycans are mostly of the triantennary complex type terminated with α2-3- and α2-6-linked sialic acid residues with bisecting GlcNAc and α1-6-linked fucose at the core. The Duffy antigen is expressed in greater quantities on reticulocytes than on mature erythrocytes. While the Duffy antigen is expressed on bone marrow erythroblasts and circulating erythrocytes, it is also found on Purkinje cells of the cerebellum, endothelial cells of thyroid capillaries, the post-capillary venules of some organs including the spleen, liver and kidney and the large pulmonary venules. The Duffy antigen thus has a distinctive cell expression profile in cerebellar neurons, venular endothelial cells and erythroid cells. In some people who lack the Duffy antigen on their erythrocytes it is still expressed in the other cell types. It has two potential N-linked glycosylation sites at asparagine (Asn) 16 and Asn27.
The Duffy antigen has been found to act as a multispecific receptor for chemokines of both the C-C and C-X-C families, including: monocyte chemotactic protein-1 (MCP-1) - CCL2 regulated upon activation normal T expressed and secreted (RANTES) - CCL5 melanoma growth stimulatory activity (MSGA-α), KC, neutrophil-activating protein 3 (NAP-3) - CXCL1/CXCL2 and the angiogenic CXC chemokines: Growth related gene alpha (GRO-α) - CXCL1 Platelet factor 4 - CXCL4 ENA-78 - CXCL5 Neutrophil activating peptide-2 (NAP-2) - CXCL7 Interleukin-8 (IL-8) - CXCL8 Consequently, the Fy protein is also known as DARC (Duffy Antigen Receptor for Chemokines). The chemokine binding site on the receptor appears to be localised to the amino terminus. The antigen is predicted to have 7 transmembrane domains, an exocellular N-terminal domain and an endocellular C-terminal domain. Alignment with other seven transmembrane G-protein-coupled receptors shows that DARC lacks the highly conserved DRY motif in the second intracellular loop of the protein that is known to be associated with G-protein signaling. Consistent with this finding ligand binding by DARC does not induce G-protein coupled signal transduction nor a Ca2+ flux unlike other chemokine receptors. Based on these alignments the Duffy antigen is considered to be most similar to the interleukin-8B receptors. Scatchard analysis of competition binding studies has shown high affinity binding to the Duffy antigen with dissociation constants (KD) binding values of 24 ± 4.9, 20 ± 4.7, 41.9 ± 12.8, and 33.9 ± 7 nanoMoles for MGSA, interleukin-8, RANTES and monocyte chemotactic peptide-1 respectively. In DARC-transfected cells, DARC is internalized following ligand binding and this led to the hypothesis that expression of DARC on the surface of erythrocytes, endothelial, neuronal cells and epithelial cells may act as a sponge and provide a mechanism by which inflammatory chemokines may be removed from circulation as well as their concentration modified in the local environment. This hypothesis has also been questioned after knock out mice were created. These animals appeared healthy and had normal responses to infection. While the function of the Duffy antigen remains presently (2006) unknown, evidence is accumulating that suggests a role in neutrophil migration from the blood into the tissues and in modulating the inflammatory response. The protein is also known to interact with the protein KAI1 (CD82) a surface glycoprotein of leukocytes and may have a role in the control of cancer. The Duffy antigen has been shown to exist as a constitutive homo-oligomer and that it hetero-oligomerizes with the CC chemokine receptor CCR5 (CD195). The formation of this heterodimer impairs chemotaxis and calcium flux through CCR5, whereas internalization of CCR5 in response to ligand binding remains unchanged. DARC has been shown to internalise chemokines but does not scavenge them. It mediates chemokine transcytosis, which leads to apical retention of intact chemokines and more leukocyte migration. Binding melanoma growth-stimulating activity inhibits the binding of P. knowlesi to DARC. Population genetics Differences in the racial distribution of the Duffy antigens were discovered in 1954, when it was found that the overwhelming majority of people of African descent had the erythrocyte phenotype Fy(a-b-): 68% in African Americans and 88-100% in African people (including more than 90% of West African people). This phenotype is exceedingly rare in Whites. 
Because the Duffy antigen is uncommon in those of Black African descent, the presence of this antigen has been used to detect genetic admixture. In a sample of unrelated African Americans (n = 235), Afro-Caribbeans (n = 90) and Colombians (n = 93), the frequency of the -46T (Duffy positive) allele was 21.7%, 12.2% and 74.7% respectively. Overall the frequencies of Fya and Fyb antigens in Whites are 66% and 83% respectively, in Asians 99% and 18.5% respectively and in blacks 10% and 23% respectively. The frequency of Fy3 is 100% Whites, 99.9% Asians and 32% Blacks. Phenotype frequencies are: Fy(a+b+): 49% Whites, 1% Blacks, 9% Chinese Fy(a-b+): 34% Whites, 22% Blacks, <1% Chinese Fy(a+b-): 17% Whites, 9% Blacks, 91% Chinese While a possible role in the protection of humans from malaria had been previously suggested, this was only confirmed clinically in 1976. Since then many surveys have been carried out to elucidate the prevalence of Duffy antigen alleles in different populations including: The mutation Ala100Thr (G -> A in the first codon position—base number 298) within the FY*B allele was thought to be purely a White genotype, but has since been described in Brazilians. However, the study's authors point out that the Brazilian population has arisen from intermarriage between Portuguese, Black Africans, and Indians, which accounts for the presence of this mutation in a few members of Brazil's non-White groups. Two of the three Afro-Brazilian test subjects that were found to have the mutation (out of a total of 25 Afro-Brazilians tested) were also related to one another, as one was a mother and the other her daughter. This antigen along with other blood group antigens was used to identify the Basque people as a genetically separate group. Its use in forensic science is under consideration. The Andaman and Nicobar Islands, part of India, were originally inhabited by 14 aboriginal tribes. Several of these have gone extinct. One surviving tribe—the Jarawas—live in three jungle areas of South Andaman and one jungle area in Middle Andaman. The area is endemic for malaria. The causative species is Plasmodium falciparum: there is no evidence for the presence of Plasmodium vivax. Blood grouping revealed an absence of both Fy(a) and Fy(b) antigens in two areas and a low prevalence in two others. In the Yemenite Jews the frequency of the Fy allele is 0.5879. The frequency of this allele varies from 0.1083 to 0.2191 among Jews from the Middle East, North Africa and Southern Europe. The incidence of Fya among Ashkenazi Jews is 0.44 and among the non-Ashkenazi Jews it is 0.33. The incidence of Fyb is higher in both groups with frequencies of 0.53 and 0.64 respectively. In the Chinese ethnic populations—the Han and the She people—the frequencies of Fya and Fyb alleles were 0.94 and 0.06 and 0.98 and 0.02 respectively. The frequency of the Fya allele in most Asian populations is ~95%. In Grande Comore (also known as Ngazidja) the frequency of the Fy(a- b-) phenotype is 0.86. The incidence of Fy(a+b-) in northern India among blood donors is 43.85%. In the Maghreb, Horn of Africa and the Nile Valley, the Afroasiatic (Hamitic-Semitic) speaking populations are largely Duffy-positive. Between 70%-98% of Hamito-Semitic groups in Ethiopia were found to be Duffy-positive. Serological and DNA based analysis of 115 unrelated Tunisians also found an FY*X frequency of 0.0174; FY*1 = 0.291 (expressed 0.260, silent 0.031); FY*2 = 0.709 (expressed 0.427; silent 0.282). 
Since the silent FY*2 allele is the most common allele in West Africa, its minor occurrence in the sample probably represents recent diffusion from that region. In Nouakchott, Mauritania, overall 27% of the population are Duffy-positive. 54% of Moors are Duffy antigen positive, while only 2% of black ethnic groups (mainly Poular, Soninke and Wolof) are Duffy positive. A map of the Duffy antigen distribution has been produced. The most prevalent allele globally is FY*A. Across sub-Saharan Africa the predominant allele is the silent FY*BES variant. In Iran the Fy(a-b-) phenotype was found at a frequency of 3.4%. There appears to have been a selective sweep in Africa which reduced the incidence of this antigen there. This sweep appears to have occurred between 6,500 and 97,200 years ago (95% confidence interval). The distribution within India has been studied in some detail. Clinical significance Historically, the role of this antigen, other than its importance as a receptor for Plasmodium protozoa, has not been appreciated. Recent work has identified a number of additional roles for this protein. Malaria On erythrocytes, the Duffy antigen acts as a receptor for invasion by the human malarial parasites P. vivax and P. knowlesi. This was first shown in 1980. Duffy-negative individuals whose erythrocytes do not express the receptor are believed to be resistant to merozoite invasion, although P. vivax infection has been reported in Duffy-negative children in Kenya, suggesting a role in resistance to disease rather than infection. This antigen may also play a role in erythrocyte invasion in the rodent malarial parasite P. yoelii. The epitope Fy6 is required for P. vivax invasion. The protection against P. vivax malaria conferred by the absence of the Duffy antigen appears to be very limited at best in Madagascar. Although 72% of the population are Duffy antigen negative, 8.8% of the Duffy antigen negative individuals were asymptomatic carriers of P. vivax. Malaria has also been found in Angola and Equatorial Guinea in Duffy-negative individuals. P. vivax malaria in a Duffy antigen negative individual in Mauritania has also been reported. Similar infections have been reported in Brazil and Kenya. Additional cases of infection in Duffy antigen negative individuals have been reported from the Congo and Uganda. A study in Brazil of the protection against P. vivax offered by the lack of the Duffy antigen found no differential resistance to vivax malaria between Duffy antigen positive and negative individuals. Nancy Ma's night monkey (A. nancymaae) is used as an animal model of P. vivax infection. This species' erythrocytes possess the Duffy antigen, and this antigen is used as the receptor for P. vivax on the erythrocytes in this species. Examination of this gene in 497 patients in Amazonas State, Brazil, carried out by Sérgio Albuquerque, suggests that the genotypes FY*A/FY*B-33 and FY*B/FY*B-33 (where -33 refers to the null mutation at position -33 in the GATA box) may have an advantage over the genotypes FY*A/FY*B, FY*A/FY*A, FY*A/FY*X and FY*B/FY*X. FY*A/FY*B and FY*A/FY*A genotypes were shown to be associated with increased rates of P. vivax infection, while FY*B/FY*X and FY*A/FY*X were shown to be associated with low levels of parasitism. A difference in susceptibility to Plasmodium vivax malaria between the Fya and Fyb antigens has also been reported: erythrocytes expressing Fya had 41-50% lower binding of P. vivax compared with Fyb cells. 
Individuals with the Fy(a+b-) phenotype have a 30-80% reduced risk of clinical vivax but not falciparum malaria. The binding of platelet factor 4 (CXCL4) appears to be critical for the platelet-induced killing of P. falciparum. The Duffy antigen binding protein in P. vivax is composed of three subdomains and is thought to function as a dimer. The critical DARC binding residues are concentrated at the dimer interface and along a relatively flat surface spanning portions of two subdomains. A study in Brazil confirmed the protective effect of FY*A/FY*O against malaria. In contrast, the genotype FY*B/FY*O was associated with a greater risk. Asthma Asthma is more common and tends to be more severe in those of African descent. There appears to be a correlation between mutations in the Duffy antigen and both total IgE levels and asthma. Hematopoiesis The Duffy antigen plays a fundamental role in hematopoiesis. Indeed, nucleated red blood cells present in the bone marrow have high expression of DARC, which facilitates their direct contact with hematopoietic stem cells. The absence of erythroid DARC alters hematopoiesis, including stem and progenitor cells, which ultimately gives rise to phenotypically distinct neutrophils. As a result, mature neutrophils of Duffy-negative individuals carry more molecular "weapons" against infectious pathogens. Therefore, alternative physiological patterns of hematopoiesis and bone marrow cell outputs depend on the expression of DARC in the erythroid lineage. Benign ethnic neutropenia Individuals with the Duffy-null genotype have a persistently lower neutrophil count than the typical laboratory normal range, but the lower number of circulating neutrophils associated with this genotype does not seem to confer an increased risk of infection. Clinical use of the term "benign ethnic neutropenia" to describe this phenomenon remains widespread, but the term is problematic as the Duffy-null genotype is common in individuals with African and certain Middle Eastern ancestries, and the term implies that individuals with European ancestry have the normal reference neutrophil count. The term "Duffy-null associated neutrophil count" (DANC) has been proposed as a replacement. The distinctive neutrophils that are formed in the absence of DARC on the erythroid lineage (see above: role of DARC in hematopoiesis) readily leave the blood stream, which explains the apparently lower numbers of neutrophils in the blood of Duffy-null individuals. Failure to recognize that individuals with African ancestry often have healthy Duffy-null-associated neutrophil counts instead of neutropenia has historically contributed to inequity in access to medications that require blood monitoring due to risk of neutropenia, including chemotherapy and the antipsychotic medication clozapine. The lower number of circulating neutrophils can cause individuals with the Duffy-null genotype to fall below what typically would be considered safe to continue these treatments, despite new data showing that neutrophil functioning is preserved in these individuals. Cancer Interactions between the metastasis suppressor KAI1 on tumor cells and the cytokine receptor DARC on adjacent vascular cells suppress tumor metastasis. In human breast cancer samples, low expression of the DARC protein is significantly associated with estrogen receptor status, both lymph node and distant metastasis, and poor survival. 
Endotoxin response The procoagulant response to lipopolysaccharide (bacterial endotoxin) is reduced in Duffy antigen-negative Africans compared with Duffy-positive Whites. This difference is likely to involve additional genes. HIV infection A connection has been found between HIV susceptibility and the expression of the Duffy antigen. The absence of the DARC receptor appears to increase the susceptibility to infection by HIV. However, once infection is established, the absence of the DARC receptor appears to slow down the progression of the disease. HIV-1 appears to be able to attach to erythrocytes via DARC. The association between the Duffy antigen and HIV infection appears to be complex. Leukopenia (a low total white cell count) is associated with relatively poor survival in HIV infection, and this association is more marked in whites than in people of Black African descent, despite the (on average) lower white cell counts found in black Africans. This difference appears to correlate with a particular genotype (-46C/C) associated with the absence of the Duffy antigen. This genotype has only been found in black Africans and their descendants. The strength of this association increases inversely with the total white cell count. The basis for this association is probably related to the role of the Duffy antigen in cytokine binding, but this has yet to be verified. A study of 142 black South African high-risk female sex workers over 2 years revealed a seroconversion rate of 19.0%. Risk of seroconversion appeared to be correlated with Duffy-null-associated low neutrophil counts. Inflammation An association with the levels of monocyte chemoattractant protein-1 has been reported. In the Sardinian population, an association of several variants in the DARC gene (coding and non-coding) correlates with increased serum levels of monocyte chemoattractant protein-1 (MCP-1). A new variant in this population, consisting of the amino acid substitution of arginine for a cysteine at position 89 of the protein, diminishes the ability to bind chemokines. DARC has also been linked to rheumatoid arthritis (RA), possibly displaying chemokines such as CXCL5 on the surface of endothelial cells within the synovium, increasing the recruitment of neutrophils in the disease state. Lung transplantation The Duffy antigen has been implicated in lung transplantation rejection. Multiple myeloma An increased incidence of Duffy antigen has been reported in patients with multiple myeloma compared with healthy controls. Pneumonia The Duffy antigen is present in the normal pulmonary vascular bed. Its expression is increased in the vascular beds and alveolar septa of the lung parenchyma during suppurative pneumonia. Pregnancy The Duffy antigen has been implicated in haemolytic disease of the newborn. Prostate cancer Experimental work has suggested that DARC expression inhibits prostate tumor growth. Men of black African descent are at greater risk of prostate cancer than are men of either European or Asian descent (60% greater incidence and double the mortality compared to Whites). However, the contribution of DARC to this increased risk has been tested in Jamaican males of black African descent. It was found that none of the increased risk could be attributed to the DARC gene. The reason for this increased risk is as yet unknown. Renal transplantation Antibodies and a cellular response to the Duffy antigen have been associated with renal transplant rejection. 
Sickle cell anaemia Duffy antigen-negative individuals with sickle cell anaemia tend to sustain more severe organ damage than do those with the Duffy antigen. Duffy-positive patients exhibit higher counts of white blood cells and polynuclear neutrophils, and higher plasma levels of IL-8 and RANTES, than Duffy-negative patients. Southeast Asian ovalocytosis There is a ~10% increase in Fy expression in Southeast Asian ovalocytosis erythrocytes. Transfusion medicine A Duffy-negative blood recipient may have a transfusion reaction if the donor is Duffy positive. Since most Duffy-negative people are of African descent, blood donations from people of black African origin are important to transfusion banks. Transfusion data International Society for Blood Transfusion (ISBT) symbol: FY ISBT number: 008 Gene symbol: FY Gene name: Duffy blood group Number of Duffy antigens: 6 Antibody type Almost entirely IgG; IgG1 usually predominates. IgM does occur but is rare. Antibody behavior Anti-Fya is a common antibody, while anti-Fyb is approximately 20 times less common. They are reactive at body temperature and are therefore clinically significant, although they do not typically bind complement. Antibodies are acquired through exposure (pregnancy or history of blood transfusion) and subsequent alloimmunization. They display dosage (react more strongly to homozygous cells than to heterozygous cells). Transfusion reactions Typically mild but may be serious, even fatal. Although these usually occur immediately, they may occur after a delay (up to 24 hours). These reactions are usually caused by anti-Fya or anti-Fyb. Anti-Fy3 may cause acute or delayed hemolytic transfusion reactions, but only rarely. Anti-Fy5 may also cause delayed hemolytic transfusion reactions. Hemolytic disease of the fetus and newborn Hemolytic disease of the fetus and newborn is typically mild but rarely may be serious. It is almost always due to anti-Fya and rarely to anti-Fyb or anti-Fy3. References Further reading External links Duffy at BGMUT Blood Group Antigen Gene Mutation Database at NCBI, NIH Duffy gene Population data Clusters of differentiation Immune system Blood antigen systems HIV/AIDS Transfusion medicine
Duffy antigen system
[ "Biology" ]
6,404
[ "Immune system", "Organ systems" ]
979,942
https://en.wikipedia.org/wiki/Timeline%20of%20steam%20power
Steam power developed slowly over a period of several hundred years, progressing through expensive and fairly limited devices in the early 17th century, to useful pumps for mining in 1700, and then to Watt's improved steam engine designs in the late 18th century. It is these later designs, introduced just when the need for practical power was growing due to the Industrial Revolution, that truly made steam power commonplace. Development phases Early examples 1st century AD – Hero of Alexandria describes the Aeolipile, as an example of the power of heated air or water. The device consists of a rotating ball spun by steam jets; it produced little power and had no practical application, but is nevertheless the first known device moved by steam pressure. He also describes a way of transferring water from one vessel to another using pressure. The methods involved filling a bucket, the weight of which worked tackle to open temple doors, which were then closed again by a deadweight once the water in the bucket had been drawn out by a vacuum caused by cooling of the initial vessel. He claims it was built by Pope Sylvester II. Late 15th century AD: Leonardo Da Vinci described the Architonnerre, a steam-powered cannon. Development of a practical steam engine The Newcomen Engine: Steam power in practice Watt's engine Improving power Earlier versions of the steam engine indicator were in use by 1851, though relatively unknown. Steam turbines are made to 1,500 MW (2,000,000 hp) to generate electricity. See also Steam engine Steam power during the Industrial Revolution Maritime timeline Timeline of heat engine technology Notes External links The Growth of the Steam Engine Alternative timeline: If electric generators & motors had preceded steam Steam power Steam power ko:증기 기관#발전 과정
Timeline of steam power
[ "Physics" ]
363
[ "Power (physics)", "Steam power", "Physical quantities" ]
980,055
https://en.wikipedia.org/wiki/Exatron%20Stringy%20Floppy
The Exatron Stringy Floppy (or ESF) is a continuous-loop tape drive developed by Exatron. History The company introduced an S-100 stringy floppy drive at the 1978 West Coast Computer Faire, and a version for the Radio Shack TRS-80 in 1979. Exatron sold about 4,000 TRS-80 drives by August 1981 for $249.50 each, stating that it was "our best seller by far". The tape cartridge is about the size of a business card, but about thick. The magnetic tape inside the cartridge is wide. Format There is no single catalog of files; to load a specific file the drive searches the entire tape, briefly stopping to read the header of each found file. The tape loop only moves in one direction, so a file that starts behind the current location cannot be read until the drive searches the entire loop for it. The device is capable of reading and writing random access data files (unlike a datacassette). If a record being sought has been overshot, the drive advances the tape until it loops around to the beginning and continues seeking from there. According to Embedded Systems magazine, the Exatron Stringy Floppy uses Manchester encoding, achieving 14K read-write speeds and the code controlling the device was developed by Li-Chen Wang, who also wrote a Tiny BASIC, the basis for the TRS-80 Model I Level I BASIC. Reception In the July 1983 issue of Compute!'s Gazette, the Exatron Stringy Floppy for the VIC-20 and the Commodore 64 was reviewed. Calling the peripheral "a viable alternative" to tape or disk, the magazine noted that "under ideal conditions, a Stringy Floppy can outperform a VIC-1540/1541 disk drive". Texas Instruments licensed the Stringy Floppy as the Waferdrive for its cancelled TI 99/2 computer and a Compact Computer 40 peripheral which never shipped. Use and distribution The Exatron drive was initially used in the Prophet-10 music synthesizer and was later replaced with a micro-cassette drive from Braemar, reportedly due to unreliability and poor mutual compatibility of the former. Cartridges, or "wafers", were available in tape lengths ranging from . Known data capacities/tape length are: 4 kB/5 feet, 16 kB/20 feet, 48 kB/50 feet, and 64 kB/75 feet. One complete cycle through a tape takes 55 to 65 seconds, depending on the number of files it contains. See also ZX Microdrive Rotronics Wafadrive References External links Exatron Stringy Floppy as described by Bill Fletcher Getting Files off Stringy Floppy Wafers for use in Emulators Advertisements Exatron Official Website Computer storage devices Home computer peripherals TRS-80 Computer-related introductions in 1979
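Since the format section above reports that the drive used Manchester encoding, a minimal sketch of that line code is given below (Python, illustrative only): the Exatron firmware's actual clock rate, framing, and polarity are not documented in the text, so the IEEE 802.3 convention is assumed.

```python
# Manchester encoding, IEEE 802.3 convention (assumed): each bit becomes two
# half-bit levels with a mid-bit transition; 0 -> high,low and 1 -> low,high.
def manchester_encode(bits):
    levels = []
    for b in bits:
        levels += [1, 0] if b == 0 else [0, 1]
    return levels

def manchester_decode(levels):
    if len(levels) % 2:
        raise ValueError("level stream must contain an even number of half-bits")
    bits = []
    for first, second in zip(levels[0::2], levels[1::2]):
        if (first, second) == (1, 0):
            bits.append(0)
        elif (first, second) == (0, 1):
            bits.append(1)
        else:
            raise ValueError("missing mid-bit transition (invalid Manchester pair)")
    return bits

data = [1, 0, 1, 1, 0]
assert manchester_decode(manchester_encode(data)) == data
```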
Exatron Stringy Floppy
[ "Technology" ]
577
[ "Computer storage devices", "Recording devices" ]
980,166
https://en.wikipedia.org/wiki/Air%20vortex%20cannon
An air vortex cannon is a toy that releases doughnut-shaped air vortices — similar to smoke rings but larger, stronger and invisible. The vortices can ruffle hair, disturb papers or blow out candles after travelling several metres. An air vortex cannon can be made easily at home, from just a cardboard box. Air cannons are used in some amusement parks such as Universal Studios to spook or surprise visitors. The Wham-O Air Blaster toy introduced in 1965 could blow out a candle at . The commercial Airzooka was developed by Brian S. Jordan who claims to have conceived it when still a boy. A feature of the Airzooka is a loose non-elastic polythene membrane, tensioned by a bungee cord, rather than elastic membranes. This allows a much greater volume of air to be displaced. A large air vortex cannon, with a wide barrel and a displacement volume of was built in March 2008 at the University of Minnesota, and could blow out candles at . In 2012, a large air vortex cannon was built for Czech Television program Zázraky přírody (). It was capable of bringing down a wall of cardboard boxes from in what was claimed to be a world record. See also Bubble ring Vortex ring gun Bamboo cannon Boga (noisemaker) Potato cannon Big-Bang Cannon References External links Home made vortex cannon using a cardboard box and a smoke machine from The URN Science Show. Toy weapons Vortices
Air vortex cannon
[ "Chemistry", "Mathematics" ]
299
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
980,240
https://en.wikipedia.org/wiki/Tachistoscope
A tachistoscope is a device that displays a picture, text, or an object for a specific amount of time. It can be used for various purposes, such as to increase recognition speed, to show something too fast to be consciously recognized, or to test which elements of a display are memorable. Early tachistoscopes were mechanical, using a flat masking screen containing a window. The screen concealed the picture or text until the screen moved, at a known speed, bringing the window over the picture or text and revealing it. The screen continued to move until it hid the picture or text again. Later tachistoscopes used a shutter system typical of a camera in conjunction with a slide or transparency projector. Even later, tachistoscopes used brief illumination of the material to be displayed, such as from fast-onset and fast-offset fluorescent lamps. By the late 1990s, tachistoscopes had largely been replaced by computers for displaying pictures and text. History The first tachistoscope was described by the German physiologist A.W. Volkmann in 1859. Samuel Renshaw used it during World War II in the training of fighter pilots to help them identify aircraft silhouettes as friend or foe. Applications Before computers became universal, tachistoscopes were used extensively in psychological research to present visual stimuli for controlled durations. Some experiments employed pairs of tachistoscopes so that an experimental participant could be given different stimulation in each visual field. Tachistoscopes were used during the late 1960s in public schools as an aid to increased reading comprehension for speed reading. There were two types: in the first, the student would look through a lens similar to an aircraft bombsight viewfinder and read letters, words, and phrases using manually advanced slide film; the second type projected words and phrases on a screen in sequence. Both types were followed up with comprehension and vocabulary testing. Tachistoscopes continue to be used in market research, where they are typically used to compare the visual impact or memorability of marketing materials or packaging designs. Tachistoscopes used for this purpose still typically employ slide projectors rather than computer monitors, due to the increased fidelity of the image which can be displayed in this way and the opportunity to show large or life-size images. References External links https://web.archive.org/web/20220328115006/http://www.sykronix.com/researching/tscope.htm How to Build and Use a Tachistoscope Photography equipment Optical devices
Tachistoscope
[ "Materials_science", "Engineering" ]
531
[ "Glass engineering and science", "Optical devices" ]
980,251
https://en.wikipedia.org/wiki/Playtest
A playtest is the process by which a game designer tests a new game for bugs and design flaws before releasing it to market. Playtests can be run "open", "closed", "beta", or otherwise, and are very common with board games, collectible card games, puzzle hunts, role-playing games, and video games, for which they have become an established part of the quality control process. An individual involved in testing a game is referred to as a playtester. An open playtest could be considered open to anyone who wishes to join, or it may refer to game designers recruiting testers from outside the design group. Prospective testers usually must complete a survey or provide their contact information in order to be considered for participation. A closed playtest is an internal testing process not available to the public. Beta testing normally refers to the final stages of testing just before going to market with a product, and is often run semi-open with a limited form of the game in order to find any last-minute problems. With all forms of playtesting it is not unusual for participants to be required to sign a non-disclosure agreement, in order to protect the game designer's copyrights. The word 'playtest' is also commonly used in unofficial situations where a game is being tested by a group of players for their own private use, or to denote a situation where a new strategy or game mechanic is being tested. Playtesting is a part of usability test in the process of game development. Video games In the video game industry, playtesting refers specifically to the process of exposing a game in development (or some specific parts of it) to its intended audience, to identify potential design flaws and gather feedback. Playtests are also used to help ensure that a product will be commercially viable upon release, by providing a way for consumers to play the game and provide their opinions. Playtesting should not be confused with quality assurance (QA) testing, in which professional testers look for and report specific software bugs to be fixed by the development team. The first user research employee in the video game industry was Carol Kantor, who was employed by Atari, Inc. in 1976. Prior to this, the company had evaluated their games primarily via coin-collection data, however playtesting became a core method by which Atari evaluated the commercial viability of new games. The Boston Globe described playtesting as "what everyone says is the least favorite part of the game-building operation". Steve Meretzky of Infocom said that "the first part of debugging is exciting; it's the first feedback. Somebody is actually playing your game. But by the end, you get sick of the little problems. You have spent three months inventing the game, and now you have to spend just as much time cleaning it up". The requirements for a person to be considered for participation in a playtest vary. Some playtests are open to anyone willing to volunteer, while others specifically target professional gamers and journalists. Some playtests also try to evaluate the game's appeal to players with different levels of experience by selecting players with varying exposure to the game's genre. An example of a video game that made extensive use of open playtesting is Minecraft, which was made available for purchase in its pre-alpha stages. This both helped to financially support the game and provide feedback and bug testing during its early stages. 
Playtesting began even before the game features included multiplayer or the ability to save games. Mojang continues to make use of playtesting with Minecraft through weekly development releases, allowing players to experiment with unfinished additions to the game and provide feedback on them. Some games make use of playtesting with only part of their content, leaving other important sections unexposed to the public. StarCraft II: Heart of the Swarm was tested in this manner; its playtest only included the multiplayer portion of the game, while the single-player campaign was not revealed. Heart of the Swarm is also an example of a playtest where average players are not being considered for entry; the initial wave of testers are only being selected from the ranks of professional SCII gamers and from the media. The open-source video game engine remake OpenRA, which recreates the early Command & Conquer games, publishes playtests to the public during the release process so that a broader range of testers can verify that new features don't introduce critical errors such as desync problems in the lockstep protocol and unwanted side effects on the gameplay can be balanced out prior to the next stable release. Team Fortress 2 uses a method of playtesting whereby players that purchased the game can participate in an open beta. The beta is nearly identical to the actual game itself, but includes items that are on their way to being released in the full game. The purpose of this beta is to test those items before their release, to ensure that they are balanced and fair; in this way, the game is constantly being playtested despite the fact that it has been released. Valve does not often make use of open playtesting, in keeping with the company's tradition of tightly controlling what information they release to the public. However, both Dota 2 and Counter-Strike: Global Offensive were openly playtested, with beta invites being distributed to (and in some cases by) volunteers. Valve also has a general beta signup form on their website; this survey is intended to recruit testers both in the Seattle/Bellevue area and from other locations, to test new games and gaming hardware that Valve is developing. Role-playing games Due to the nature of pen-and-paper RPGs as opposed to video games, RPG playtests tend to focus more on ensuring that the game's mechanics are balanced and that the game flows smoothly in play. It is also more typical to see feedback from players cause game mechanics to be adjusted or altered, as it is usually easier to make such changes with an RPG than it would be with a video game. An example of a role-playing game that was heavily playtested is the 5th edition of Dungeons & Dragons. For this game, Wizards of the Coast (WotC) used an open playtest with volunteers from their online community to evaluate the game as it was being developed. New playtest packets were distributed to the testers as WotC revised the game. WotC focused heavily on the results of this testing owing to the mixed reactions that the 4th edition rules received, showcasing another advantage of playtesting: helping to ensure that the final product will be a commercial success. The process produced feedback to WotC regarding which aspects of the game needed modifications or redesigns. While D&D's 4th edition did see some playtesting, this was mainly restricted to classes added after the game's initial release, such as the monk and the bard. 
The playtest documents were released through the online Dragon Magazine, and were originally available for both subscribers and non-subscribers. Fantasy Flight Games is running a playtest of the first installment of their new Star Wars RPG. This playtest is similar to Minecraft's in that the players must purchase the beta rules from Fantasy Flight before playing; the rules are not being released freely to the public. Updates made to the rules are released in PDF format on their website, but there is no word on whether playtesters will get a copy of the actual final draft. Paizo Publishing ran a completely open playtest through the alpha and beta stages of their Pathfinder Roleplaying Game in 2008 and 2009, releasing the rules as free PDFs (and also in print for the beta version) on their web store. Anyone could join the playtest by downloading the documents, running games using them, and posting their feedback on the Paizo message boards. This playtest, which was active for over a year, is the longest-running open playtest in RPG history to date, as well as being one of the largest due to its unrestricted nature. Board games In the board game industry, playtesting applies both to feedback gathered during the early design process and to late-stage exposure to the target audience by a game's publisher. Major types of board game testing include local testing, where a designer, developer, or publisher representative moderates the test in person, and remote testing, where groups receive copies of the game or files to assemble their own version. Wizards of the Coast ran a public playtest of their new Dungeon Command miniatures game. In this case, they used the feedback generated on the rules to improve the game, but also used feedback on the playtest itself to improve logistics on the D&D Next playtest. Steve Jackson Games uses Munchkin players from the area around their offices to test new cards and expansions, as well as distributing playtest packages at conventions. According to the SJG website, this is done "so we [the developers] can observe carefully which cards work well, which jokes aren't as funny as we thought, and so on." Other games The playtest concept has been carried over into a full-fledged sport. Jim Foster, inventor and founder of the Arena Football League, tested his concept of indoor football in a special one-time game in 1986. This game was organized at the behest of NBC in order to test the viability of the game's concept. The Rockford Metros and the Chicago Politicians played the game in Rockford, Illinois. The test proved successful, and four teams began the league's first season the following year. Disadvantages The most dangerous risk with playtesting is that the playtest version of the game could be released over the internet, particularly if it is a video game or something presented in an electronic format. There are ways to prevent this; for example, requiring all players to log on to the game's servers before it will launch, or implementing other forms of DRM. Even if the game itself is not leaked, details regarding its gameplay still may be. It is likely that, over the course of an open playtest, even one where testers signed NDAs, some details will be leaked onto the web. This is a major risk for companies wishing to preserve secrecy, particularly in nations where there is no way to prevent leaks from occurring. 
See also Quality control Software testing User acceptance testing References Further reading Role-playing game terminology Board game terminology Software testing
Playtest
[ "Engineering" ]
2,145
[ "Software engineering", "Software testing" ]
980,282
https://en.wikipedia.org/wiki/Gas%20lift
A gas lift or bubble pump is a type of pump that can raise fluid between elevations by introducing gas bubbles into a vertical outlet tube; as the bubbles rise within the tube they cause a drop in the hydrostatic pressure behind them, causing the fluid to be pulled up. Gas lifts are commonly used as artificial lifts for water or oil, using compressed air or water vapor. Gas lifts have been used for a variety of applications: Coffee percolators and electric drip coffeemakers use vaporized water to lift hot water Airlift pumps use compressed air to lift water Pulser pumps use a subterranean air chamber to lift underground water Suction dredges use a variety of the gas lift called an airlift pump to vacuum mud, sand and debris Mist lifts use vaporized water to draw seawater in ocean thermal energy conversion systems Petroleum industry uses In the United States, gas lift is used in 10% of the oil wells that have insufficient reservoir pressure to produce the well. In the petroleum industry, the process involves injecting gas through the tubing-casing annulus. Injected gas aerates the fluid to reduce its density; the formation pressure is then able to lift the oil column and forces the fluid out of the wellbore. Gas may be injected continuously or intermittently, depending on the producing characteristics of the well and the arrangement of the gas-lift equipment. The amount of gas to be injected to maximize oil production varies based on well conditions and geometries. Too much or too little injected gas will result in less than maximum production. Generally, the optimal amount of injected gas is determined by well tests, where the rate of injection is varied and liquid production (oil and perhaps water) is measured. Alternatively, mathematical models can be used to estimate the optimum gas injection rate. Such models offer significant economic benefit, since they allow one to simulate the performance of an actual or planned gas-lifted well using a digital replica of the well. Although the gas is recovered from the oil at a later separation stage, the process requires energy to drive a compressor to raise the pressure of the gas to a level where it can be re-injected. The gas-lift mandrel is a device installed in the tubing string of a gas-lift well onto which or into which a gas-lift valve is fitted. There are two common types of mandrels. In a conventional gas-lift mandrel, a gas-lift valve is installed as the tubing is placed in the well. Thus, to replace or repair the valve, the tubing string must be pulled. In the side-pocket mandrel, however, the valve is installed and removed by wireline while the mandrel is still in the well, eliminating the need to pull the tubing to repair or replace the valve. A gas-lift valve is a device installed on (or in) a gas-lift mandrel, which in turn is put on the production tubing of a gas-lift well. Tubing and casing pressures cause the valve to open and close, thus allowing gas to be injected into the fluid in the tubing to cause the fluid to rise to the surface. In the lexicon of the industry, gas-lift mandrels are said to be "tubing retrievable" wherein they are deployed and retrieved attached to the production tubing. See gas-lift mandrel. Gas lift operation can be optimized in different ways. The newest way is using risk-optimization which considers all aspects for gas lift allocation. History Invented by Norman J. Rees and Albert W Zeuthen in 1956, Current Assignee ExxonMobil Oil Corp. 
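A minimal numeric sketch of the density-reduction principle described above is given below (Python). The depth, densities, and reservoir pressure are assumed illustrative values, not field data; the point is simply that a fixed formation pressure can lift the lighter, aerated column even when it cannot lift the ungassed one.

```python
# Hydrostatic pressure of a fluid column: P = rho * g * h.
# Illustrative, assumed values only (not field data).
G = 9.81            # m/s^2
DEPTH = 2000.0      # m, assumed well depth
RESERVOIR_P = 12e6  # Pa, assumed formation pressure

def column_pressure(density_kg_m3: float, depth_m: float = DEPTH) -> float:
    """Pressure exerted at the bottom of a fluid column of the given density."""
    return density_kg_m3 * G * depth_m

dead_oil = column_pressure(850.0)   # ungassed oil column
aerated  = column_pressure(450.0)   # lighter gas-oil mixture after injection

print(f"ungassed column: {dead_oil/1e6:.1f} MPa -> lifts: {RESERVOIR_P > dead_oil}")
print(f"aerated column:  {aerated/1e6:.1f} MPa -> lifts: {RESERVOIR_P > aerated}")
```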
Air lift uses compressed air to lift water in operations such as dredging and underwater archeology. It is also found in aquariums to keep water circulating. These forms of lift were used as far back as 1797 in mines to lift water from mine shafts. These systems used single point injection of air into the liquid stream, normally through a foot valve at the bottom of the string. Gas lift was used as early as 1864 in Pennsylvania to lift oil wells, also using compressed air, via an air pipe bringing the air to the bottom of the well. Air was used in Texas for large-scale artificial lift. In 1920 natural gas replaced air, lowering the risk of explosion. From 1929 until 1945 about 25000 patents were issued on different types of gas lift valves that could be used for unloading in stages. Some of these systems involved moving the tubing, or using wireline sinker bars to change the lift point. Others were spring operated valves. Ultimately, in 1944 W.R. King patented the pressurized bellows valve that is used today. In 1951 the sidepocket mandrel was developed for selectively positioning and retrieving gas lift valves with wireline. See also References External links Kermit Brown. The Technology of Artificial Lift Methods, vol 2A. The Petroleum Publishing Company, 1980. “Subsurface Equipment/Artificial Lift: Maximizing Production from the Well”, May 1999 JPT Video showing a bubble pump in action Pumps Petroleum production Gas technologies Articles containing video clips
Gas lift
[ "Physics", "Chemistry" ]
1,048
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
980,328
https://en.wikipedia.org/wiki/Cauchy%20space
In general topology and analysis, a Cauchy space is a generalization of metric spaces and uniform spaces for which the notion of Cauchy convergence still makes sense. Cauchy spaces were introduced by H. H. Keller in 1968, as an axiomatic tool derived from the idea of a Cauchy filter, in order to study completeness in topological spaces. The category of Cauchy spaces and Cauchy continuous maps is Cartesian closed, and contains the category of proximity spaces. Definition Throughout, X is a set, ℘(X) denotes the power set of X, and all filters are assumed to be proper/non-degenerate (i.e. a filter may not contain the empty set). A Cauchy space is a pair (X, C) consisting of a set X together with a family C of (proper) filters on X having all of the following properties: (1) for each x ∈ X, the discrete ultrafilter at x, denoted by U(x), is in C; (2) if F ∈ C, G is a proper filter, and F is a subset of G, then G ∈ C; (3) if F ∈ C, G ∈ C, and if each member of F intersects each member of G, then F ∩ G ∈ C. An element of C is called a Cauchy filter, and a map f between Cauchy spaces (X, C) and (Y, D) is Cauchy continuous if f(C) ⊆ D; that is, the image of each Cauchy filter in X is a Cauchy filter base in Y. Properties and definitions Any Cauchy space is also a convergence space, where a filter F converges to x if F ∩ U(x) is Cauchy. In particular, a Cauchy space carries a natural topology. Examples Any uniform space (hence any metric space, topological vector space, or topological group) is a Cauchy space; see Cauchy filter for definitions. A lattice-ordered group carries a natural Cauchy structure. Any directed set A may be made into a Cauchy space by declaring a filter F to be Cauchy if, given any element n of A, there is an element U of F such that U is either a singleton or a subset of the tail {m : m ≥ n}. Then, given any other Cauchy space X, the Cauchy-continuous functions from A to X are the same as the Cauchy nets in X indexed by A. If X is complete, then such a function may be extended to the completion of A, which may be written A ∪ {∞}; the value of the extension at ∞ will be the limit of the net. In the case where A is the set of natural numbers (so that a Cauchy net indexed by A is the same as a Cauchy sequence), then A receives the same Cauchy structure as the metric space {1, 1/2, 1/3, …} (identifying n with 1/n). Category of Cauchy spaces The natural notion of morphism between Cauchy spaces is that of a Cauchy-continuous function, a concept that had earlier been studied for uniform spaces. See also References Eva Lowen-Colebunders (1989). Function Classes of Cauchy Continuous Maps. Dekker, New York, 1989. General topology
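For readability, the three defining axioms reconstructed above can be restated compactly in standard notation (LaTeX), with U(x) denoting the principal (discrete) ultrafilter at x; this is a summary of the definition as usually given, not additional structure.

```latex
% Cauchy space (X, C): C is a set of proper filters on X such that
\begin{align*}
&\text{(1)}\quad \mathcal{U}(x) = \{A \subseteq X : x \in A\} \in C
  \quad \text{for every } x \in X,\\
&\text{(2)}\quad F \in C,\ F \subseteq G,\ G \text{ a proper filter}
  \ \Longrightarrow\ G \in C,\\
&\text{(3)}\quad F, G \in C,\ A \cap B \neq \varnothing
  \text{ for all } A \in F,\ B \in G
  \ \Longrightarrow\ F \cap G \in C.
\end{align*}
```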
Cauchy space
[ "Mathematics" ]
560
[ "General topology", "Topology" ]
980,365
https://en.wikipedia.org/wiki/Ring%20species
In biology, a ring species is a connected series of neighbouring populations, each of which interbreeds with closely sited related populations, but for which there exist at least two "end populations" in the series, which are too distantly related to interbreed, though there is a potential gene flow between each "linked" population and the next. Such non-breeding, though genetically connected, "end populations" may co-exist in the same region (sympatry) thus closing a "ring". The German term , meaning a circle of races, is also used. Ring species represent speciation and have been cited as evidence of evolution. They illustrate what happens over time as populations genetically diverge, specifically because they represent, in living populations, what normally happens over time between long-deceased ancestor populations and living populations, in which the intermediates have become extinct. The evolutionary biologist Richard Dawkins remarks that ring species "are only showing us in the spatial dimension something that must always happen in the time dimension". Formally, the issue is that interfertility (ability to interbreed) is not a transitive relation; if A breeds with B, and B breeds with C, it does not mean that A breeds with C, and therefore does not define an equivalence relation. A ring species is a species with a counterexample to the transitivity of interbreeding. However, it is unclear whether any of the examples of ring species cited by scientists actually permit gene flow from end to end, with many being debated and contested. History The classic ring species is the Larus gull. In 1925 Jonathan Dwight found the genus to form a chain of varieties around the Arctic Circle. However, doubts have arisen as to whether this represents an actual ring species. In 1938, Claud Buchanan Ticehurst argued that the greenish warbler had spread from Nepal around the Tibetan Plateau, while adapting to each new environment, meeting again in Siberia where the ends no longer interbreed. These and other discoveries led Mayr to first formulate a theory on ring species in his 1942 study Systematics and the Origin of Species. Also in the 1940s, Robert C. Stebbins described the Ensatina salamanders around the Californian Central Valley as a ring species; but again, some authors such as Jerry Coyne consider this classification incorrect. Finally in 2012, the first example of a ring species in plants was found in a spurge, forming a ring around the Caribbean Sea. Speciation The biologist Ernst Mayr championed the concept of ring species, stating that it unequivocally demonstrated the process of speciation. A ring species is an alternative model to allopatric speciation, "illustrating how new species can arise through 'circular overlap', without interruption of gene flow through intervening populations…" However, Jerry Coyne and H. Allen Orr point out that rings species more closely model parapatric speciation. Ring species often attract the interests of evolutionary biologists, systematists, and researchers of speciation leading to both thought provoking ideas and confusion concerning their definition. Contemporary scholars recognize that examples in nature have proved rare due to various factors such as limitations in taxonomic delineation or, "taxonomic zeal"—explained by the fact that taxonomists classify organisms into "species", while ring species often cannot fit this definition. 
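The non-transitivity point above can be made concrete with a toy model (Python, purely illustrative; the population names are hypothetical): adjacent populations in a chain interbreed, but the relation does not carry through to the terminal populations, so it is not an equivalence relation.

```python
# Toy model of a ring species: populations A..E arranged in a chain/ring,
# where only geographic neighbours interbreed. Illustrative only.
populations = ["A", "B", "C", "D", "E"]
# Neighbouring pairs interbreed; the two ends ("A", "E") overlap in range
# but do not interbreed, which is what closes the "ring".
interbreeding_pairs = {frozenset(pair) for pair in zip(populations, populations[1:])}

def interbreeds(x: str, y: str) -> bool:
    return frozenset((x, y)) in interbreeding_pairs

print(interbreeds("A", "B"))  # True
print(interbreeds("B", "C"))  # True
print(interbreeds("A", "C"))  # False: the relation is not transitive
print(interbreeds("A", "E"))  # False: the end populations coexist but do not interbreed
```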
Other reasons such as gene flow interruption from "vicariate divergence" and fragmented populations due to climate instability have also been cited. Ring species also present an interesting case of the species problem for those seeking to divide the living world into discrete species. All that distinguishes a ring species from two separate species is the existence of the connecting populations; if enough of the connecting populations within the ring perish to sever the breeding connection then the ring species' distal populations will be recognized as two distinct species. The problem is whether to quantify the whole ring as a single species (despite the fact that not all individuals interbreed) or to classify each population as a distinct species (despite the fact that it interbreeds with its near neighbours). Ring species illustrate that species boundaries arise gradually and often exist on a continuum. Examples Many examples have been documented in nature. Debate exists concerning much of the research, with some authors citing evidence against their existence entirely. The following examples provide evidence that—despite the limited number of concrete, idealized examples in nature—continuums of species do exist and can be found in biological systems. This is often characterized by sub-species level classifications such as clines, ecotypes, complexes, and varieties. Many examples have been disputed by researchers, and equally "many of the [proposed] cases have received very little attention from researchers, making it difficult to assess whether they display the characteristics of ideal ring species." The following list gives examples of ring species found in nature. Some of the examples such as the Larus gull complex, the greenish warbler of Asia, and the Ensatina salamanders of America, have been disputed. Acanthiza pusilla and A. ewingii Acacia karroo Alauda skylarks (Alauda arvensis, A. japonica and A. gulgula) Alophoixus Aulostomus (Trumpetfish) Camarhynchus psittacula and C. pauper Chaerephon pumilus species complex Ensatina salamanders Euphorbia tithymaloides is a group within the spurge family that has reproduced and evolved in a ring through Central America and the Caribbean, meeting in the Virgin Islands where they appear to be morphologically and ecologically distinct. Great tit (however, some studies dispute this example) The greenish warbler (Phylloscopus trochiloides) forms a species ring, around the Himalayas. It is thought to have spread from Nepal around the inhospitable Tibetan Plateau, to rejoin in Siberia, where the plumbeitarsus and the viridanus appeared to no longer mutually reproduce. Hoplitis producta House mouse Junonia coenia and J. genoveva/J. evarete Lalage leucopygialis, L. nigra, and L. sueurii Larus gulls form a circumpolar "ring" around the North Pole. The European herring gull (L. argentatus argenteus), which lives primarily in Great Britain and Ireland, can hybridize with the American herring gull (L. smithsonianus), (living in North America), which can also hybridize with the Vega or East Siberian herring gull (L. vegae), the western subspecies of which, Birula's gull (L. vegae birulai), can hybridize with Heuglin's gull (L. heuglini), which in turn can hybridize with the Siberian lesser black-backed gull (L. fuscus). All four of these live across the north of Siberia. The last is the eastern representative of the lesser black-backed gulls back in north-western Europe, including Great Britain. 
The lesser black-backed gulls and herring gulls are sufficiently different that they do not normally hybridize; thus the group of gulls forms a continuum except where the two lineages meet in Europe. However, a 2004 genetic study entitled "The herring gull complex is not a ring species" has shown that this example is far more complex than presented here (Liebers et al., 2004): this example only speaks to the complex of species from the classical herring gull through lesser black-backed gull. There are several other taxonomically unclear examples that belong in the same species complex, such as yellow-legged gull (L. michahellis), glaucous gull (L. hyperboreus), and Caspian gull (L. cachinnans). Pelophylax nigromaculatus and P. porosus/P. porosus brevipodus (the names and classification of these species have changed since the publication suggesting a ring species) Pernis ptilorhynchus and P. celebensis Perognathus amplus and P. longimembris Peromyscus maniculatus Phellinus Platycercus elegans (Crimson rosella) complex Drosophila paulistorum Phylloscopus collybita and P. sindianus Phylloscopus (Willow warblers) Powelliphanta Rhymogona silvatica and R. cervina (the names and classification of these species have changed since the publication suggesting a ring species) Melospiza melodia, a song sparrow, forms a ring around the Sierra Nevada of California with the subspecies heermanni and fallax meeting in the vicinity of the San Gorgonio Pass. Todiramphus chloris and T. cinnamominus See also Dialect continuum, a similar concept in linguistics Intergradation References External links Greenish Warbler Greenish Warbler maps and songs Ensatina salamander , by Peter Hadfield (potholer54) Nova Scotia Museum of Natural History: Birds of Nova Scotia Hybrid Gulls Breeding in Belgium Species Evolutionary biology
Ring species
[ "Biology" ]
1,902
[ "Evolutionary biology" ]
980,435
https://en.wikipedia.org/wiki/Naturalistic%20observation
Naturalistic observation, sometimes referred to as fieldwork, is a research methodology in numerous fields of science including ethology, anthropology, linguistics, the social sciences, and psychology, in which data are collected as they occur in nature, without any manipulation by the observer. Examples range from watching an animal's eating patterns in the forest to observing the behavior of students in a school setting. During naturalistic observation, researchers take great care using unobtrusive methods to avoid interfering with the behavior they are observing. Naturalistic observation contrasts with analog observation in an artificial setting that is designed to be an analog of the natural situation, constrained so as to eliminate or control for effects of any variables other than those of interest. There is similarity to observational studies in which the independent variable of interest cannot be experimentally controlled for ethical or logistical reasons. Naturalistic observation has both advantages and disadvantages as a research methodology. Observations are more credible because the behavior occurs in a real, typical scenario as opposed to an artificial one generated within a lab. Behavior that could never occur in controlled laboratory environment can lead to new insights. Naturalistic observation also allows for study of events that are deemed unethical to study experimentally, such as the impact of high school shootings on students attending the high school. However, because extraneous variables cannot be controlled as in a laboratory, it is difficult to replicate findings and demonstrate their reliability. In particular, if subjects know they are being observed they may behave differently than otherwise. It may be difficult to generalize findings of naturalistic studies beyond the observed situations. See also Jane Goodall Meditation Natural history Observer-expectancy effect People watching Qualitative research Scholar-practitioner model Unobtrusive measures References Behaviorism Psychology experiments Qualitative research Naturalism (philosophy)
Naturalistic observation
[ "Biology" ]
363
[ "Behavior", "Behaviorism" ]
980,475
https://en.wikipedia.org/wiki/CR%20gas
CR gas or dibenzoxazepine (chemical name dibenz[b,f][1,4]oxazepine) is an incapacitating agent and a lachrymatory agent. CR was developed by the British Ministry of Defence as a riot control agent in the late 1950s and early 1960s. A report from the Porton Down laboratories described exposure as "like being thrown blindfolded into a bed of stinging nettles", and it earned the nickname "firegas". In its effects, CR gas is very similar to CS gas (o-chlorobenzylidene malononitrile), but twice as potent, even though there is little structural resemblance between the two. For example, 2 mg of dry CR causes skin redness in 10 min, while 5 mg causes burning, erythema, and strong pain. Water usually amplifies the pain effect of CR on skin. CR aerosols cause irritation at concentrations of 0.2 mcg/L, becoming intolerable at 3 mcg/L. The of CR through air inhalation is 350 mg·min/L. Physical properties and deployment CR is a pale yellow crystalline solid with a spicy odor. It is slightly soluble in water and does not degrade in it. CR is usually presented as a microparticulate solid, in the form of a suspension in a propylene glycol-based liquid. Contrary to its common name, it is not a gas but a solid at room temperature. The dibenz[b,f][1,4]oxazepine moiety is present in the typical antipsychotic drug loxapine, but, unlike CR, loxapine is not reactive and is not an irritant. CR was first synthesised in 1962. CR can be delivered either as an aerosol or as a solution in water, allowing it to be used in water cannons, smoke grenades, or handheld spray cans. For smoke it is usually fired in canisters (LACR) that heat up, producing an aerosol cloud at a steady rate. Effects CR gas is a lachrymatory agent (LA), exerting its effects through activation of the TRPA1 channel. Its effects are approximately 6 to 10 times more powerful than those of CS gas. CR causes intense skin irritation, in particular around moist areas; blepharospasm, causing temporary blindness; and coughing, gasping for breath, and panic. It is capable of causing immediate incapacitation. It is a suspected carcinogen. It is toxic, but less so than CS gas, by ingestion and exposure. However, it can be lethal in large quantities. In a poorly ventilated space, an individual may inhale a lethal dose within minutes. Death is caused by asphyxiation and pulmonary edema. The effect of CR is long-term and persistent. CR can persist on surfaces, especially porous ones, for up to 60 days. Treatment While CS can be decontaminated with a large amount of water, use of water may exacerbate the effects of CR. Skin contaminated with CR gas may become extremely painful in contact with water for up to 48 hours after contamination. Medical treatment is mostly palliative. Contaminated clothing has to be removed. The eyes and skin can be washed, and the eye pain can be alleviated with medications. Use Egypt During the 2011 protests against the military government in Egypt, Egyptian security forces allegedly used CR gas in addition to the more commonly used, less debilitating CS gas. One protester described the gas as making him feel "as if your eyes are about to fall out; then you have trouble breathing, and you lose your sight". Egyptians used yeast as a treatment for CR side effects on the skin. Mohammed ElBaradei also confirmed via Twitter that "tear gas with [a] nerve agent" was being used in Tahrir Square. 
The only gas that has been identified by human rights organizations in protests "is CS tear gas, typically used by police forces to disperse crowds," stated Egyptian journalist Farida Helmy. Egyptian use of CR gas has not been corroborated according to Human Rights Watch. France People occupying an area in Notre-Dame-des-Landes against an airport project suspect the use of CR gas by French Police and Army in April 2018. Northern Ireland It started being available in police and army supplies, as a water cannon additive and as spray cans, in 1973 and was at least still so in 1981. Republican groups in Northern Ireland have alleged that British Army and Royal Ulster Constabulary units used CR gas against Republican prisoners in the 1970s. Additionally, there are British military documents now declassified and in the public domain held in the records of the UK Ministry of Defence at the National Archives, London, that suggest that the British Army did deploy and use CR gas in Northern Ireland. Philippines CR tear gas was used in suppression of the mutiny in Makati that was led by Sen. Antonio Trillanes. The tear gas was fired in the building and all the people in the building including reporters were affected. South Africa In the late 1980s, CR was used in the townships in South Africa. It caused some fatalities, in particular among children. Sri Lanka The Tamil Tigers of Sri Lanka, an insurgent group in Sri Lanka used CR gas against government forces that were on an offensive to flush and defeat these insurgents during September 2008. Its use hindered the army's progress but ultimately proved ineffective in preventing the army from overrunning their positions. This is one of the first few cases of insurgents using CR gas as an insurgent weapon. Turkey In the June 2013 protest against the Turkish government, Turkish police allegedly used CR gas on protesters in Istanbul. Doctors in a makeshift first aid post in a Mosque judged it as such. Ukraine In Ukraine, CR gas is commonly used by special forces against demonstrators. Gas is packed in a form of spray cans "Cobra 1". For example, gas has been used on a demonstration dedicated to the Ukraine Independence Day (24 August 2011). Also massive gas usage has been documented during demonstrations against Language Law Draft in Kyiv on 3 and 4 July 2012. See also CS gas Loxapine Pepper spray and Tear gas Resiniferatoxin References Lachrymatory agents Riot control agents Dibenzoxazepines
CR gas
[ "Chemistry" ]
1,301
[ "Lachrymatory agents", "Riot control agents", "Chemical weapons" ]
980,657
https://en.wikipedia.org/wiki/Gang%20bang
A gang bang is a sexual activity in which one person is the central focus of the sexual activity of several people, usually more than three, sequentially or simultaneously. The term generally refers to a woman being the focus; one man with multiple women can be referred to as a "reverse gang bang". The term has become associated with the porn industry and usually describes a staged event whereby a woman has sex with several men in direct succession. Bukkake is a type of gang bang, originating in Japan, that focuses on the central person being ejaculated upon by male participants. Practice The largest gang bangs are sponsored by pornographic film companies, and recorded, but a gang bang is not unusual in the swinger community. The arrangement is more often understood to involve multiple men and one woman, while the so-called "reverse gang bang" (one man and many women) can also be seen in pornography. Female-on-female and male-on-male gang bangs also happen. Gang bangs are not defined by the precise number of participants, but usually involve more than three people and may involve a dozen or more. When the gang bang is organized specifically to culminate with the (near) simultaneous or rapid serial ejaculations of all male participants on the central man or woman, then it may be referred to by the Japanese term bukkake. By contrast, three people engaged in sex is normally referred to as a threesome, and four people are normally referred to as a foursome. Gang bangs also differ from group sex, such as threesomes and foursomes, in that most (if not all) sexual acts during a gang bang are centered on or performed with just the central person. Although the participants of a gang bang may know each other, the spontaneity and anonymity of participants are often part of the attraction. Additionally, the other participants normally do not engage in sex acts with each other, but may stand nearby and masturbate while waiting for an opportunity to engage in sexual activity. Pornography Though there have been numerous gang bang pornographic films since the 1980s, they usually involved no more than half a dozen to a dozen men. However, starting with The World's Biggest Gangbang (1995) starring Annabel Chong with 251 partners, the pornographic industry began producing a series of films ostensibly setting gangbang records for most consecutive sex acts by one person in a short period. These kinds of films were financially successful, winning AVN Awards for the best-selling pornographic films of their year; however, the events were effectively unofficiated and the record-breaking claims often misleading. Jasmin St. Claire described her "record", purportedly set with 300 men in World's Biggest Gang Bang 2, as "among the biggest cons ever pulled off in the porn business", with merely about 30 men "strategically placed and filmed," only ten of whom were actually able to perform sexually on camera. See also Bukkake Cuckold Cuckquean Group sex Orgy Swinging Gang rape References Further reading Group sex Pornography terminology Sexual acts Pornography by genre de:Gruppensex#Gangbang
Gang bang
[ "Biology" ]
642
[ "Sexual acts", "Behavior", "Sexuality", "Mating" ]
980,666
https://en.wikipedia.org/wiki/Pioneer%20anomaly
The Pioneer anomaly, or Pioneer effect, was the observed deviation from predicted accelerations of the Pioneer 10 and Pioneer 11 spacecraft after they passed about 20 astronomical units (roughly 3 billion km) on their trajectories out of the Solar System. The apparent anomaly was a matter of much interest for many years but has been subsequently explained by anisotropic radiation pressure caused by the spacecraft's heat loss. Both Pioneer spacecraft are escaping the Solar System but are slowing under the influence of the Sun's gravity. Upon very close examination of navigational data, the spacecraft were found to be slowing slightly more than expected. The effect is an extremely small acceleration towards the Sun, of about (8.74 ± 1.33) × 10⁻¹⁰ m/s², which is equivalent to a reduction of the outbound velocity by roughly 1 km/h over a period of ten years. The two spacecraft were launched in 1972 and 1973. The anomalous acceleration was first noticed as early as 1980 but not seriously investigated until 1994. The last communication with either spacecraft was in 2003, but analysis of recorded data continues. Various explanations, both of spacecraft behavior and of gravitation itself, were proposed to explain the anomaly. Over the period from 1998 to 2012, one particular explanation became accepted. The spacecraft, which are surrounded by an ultra-high vacuum and are each powered by a radioisotope thermoelectric generator (RTG), can shed heat only via thermal radiation. If, due to the design of the spacecraft, more heat is emitted in a particular direction (what is known as a radiative anisotropy), then the spacecraft would accelerate slightly in the direction opposite the excess emitted radiation due to the recoil of thermal photons. If the excess radiation and attendant radiation pressure were pointed in a general direction opposite the Sun, the spacecraft's velocity away from the Sun would be decreasing at a rate greater than could be explained by previously recognized forces, such as gravity and trace friction due to the interplanetary medium (imperfect vacuum). By 2012, several papers by different groups, all reanalyzing the thermal radiation pressure forces inherent in the spacecraft, showed that a careful accounting of this explains the entire anomaly; thus the cause is mundane and does not point to any new phenomenon or need to update the laws of physics. The most detailed analysis to date, by some of the original investigators, explicitly looks at two methods of estimating thermal forces, concluding that there is "no statistically significant difference between the two estimates and [...] that once the thermal recoil force is properly accounted for, no anomalous acceleration remains." Description Pioneer 10 and 11 were sent on missions to Jupiter and Jupiter/Saturn respectively. Both spacecraft were spin-stabilised in order to keep their high-gain antennas pointed towards Earth using gyroscopic forces. Although the spacecraft included thrusters, after the planetary encounters they were used only for semiannual conical scanning maneuvers to track Earth in its orbit, leaving them on a long "cruise" phase through the outer Solar System. During this period, both spacecraft were repeatedly contacted to obtain various measurements on their physical environment, providing valuable information long after their initial missions were complete. Because the spacecraft were flying with almost no additional stabilization thrusts during their "cruise", it is possible to characterize the density of the solar medium by its effect on the spacecraft's motion.
In the outer Solar System this effect would be easily calculable, based on ground-based measurements of the deep space environment. When these effects were taken into account, along with all other known effects, the calculated position of the Pioneers did not agree with measurements based on timing the return of the radio signals being sent back from the spacecraft. These consistently showed that both spacecraft were closer to the inner Solar System than they should be, by thousands of kilometres, which is small compared to their distance from the Sun but still statistically significant. This apparent discrepancy grew over time as the measurements were repeated, suggesting that whatever was causing the anomaly was still acting on the spacecraft. As the anomaly was growing, it appeared that the spacecraft were moving more slowly than expected. Measurements of the spacecraft's speed using the Doppler effect demonstrated the same thing: the observed redshift was less than expected, which meant that the Pioneers had slowed down more than expected. When all known forces acting on the spacecraft were taken into consideration, a very small but unexplained force remained. It appeared to cause an approximately constant sunward acceleration of about 8.74 × 10⁻¹⁰ m/s² for both spacecraft. If the positions of the spacecraft were predicted one year in advance based on measured velocity and known forces (mostly gravity), they were actually found to be some 400 km closer to the Sun at the end of the year. This anomaly is now believed to be accounted for by thermal recoil forces. Explanation: thermal recoil force Starting in 1998, there were suggestions that the thermal recoil force was underestimated, and perhaps could account for the entire anomaly. However, accurately accounting for thermal forces was hard, because it needed telemetry records of the spacecraft temperatures and a detailed thermal model, neither of which was available at the time. Furthermore, all thermal models predicted a decrease in the effect with time, which did not appear in the initial analysis. One by one these objections were addressed. Many of the old telemetry records were found, and converted to modern formats. This gave power consumption figures and some temperatures for parts of the spacecraft. Several groups built detailed thermal models, which could be checked against the known temperatures and powers, and allowed a quantitative calculation of the recoil force. The longer span of navigational records showed the acceleration was in fact decreasing. In July 2012, Slava Turyshev et al. published a paper in Physical Review Letters that explained the anomaly. The work explored the effect of the thermal recoil force on Pioneer 10, and concluded that "once the thermal recoil force is properly accounted for, no anomalous acceleration remains." Although the paper by Turyshev et al. has the most detailed analysis to date, the explanation based on thermal recoil force has the support of other independent research groups, using a variety of computational techniques. Examples include "thermal recoil pressure is not the cause of the Rosetta flyby anomaly but likely resolves the anomalous acceleration observed for Pioneer 10." and "It is shown that the whole anomalous acceleration can be explained by thermal effects".
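The order of magnitude of the thermal explanation can be checked with simple arithmetic. The Python sketch below is a rough back-of-the-envelope illustration, not the published thermal model: the spacecraft mass and the total waste-heat power are round, assumed figures, and only the anomalous acceleration itself is taken from the reported value. It shows that a few percent of the waste heat, radiated preferentially away from the Sun, is enough to account for the acceleration, and that the same acceleration accumulates into the velocity and position offsets quoted above.

# Back-of-the-envelope check of the thermal recoil explanation (illustrative only).
# mass and total_thermal_power are assumed round values, not mission telemetry.
c = 299_792_458.0        # speed of light, m/s
mass = 250.0             # rough Pioneer 10/11 mass, kg (assumed)
a_anomaly = 8.74e-10     # reported anomalous sunward acceleration, m/s^2

force = mass * a_anomaly            # ~2.2e-7 N needed to produce the anomaly
directed_power = force * c          # ~65 W of heat radiated preferentially anti-sunward
total_thermal_power = 2500.0        # W, order of magnitude of RTG waste heat (assumed)
anisotropy = directed_power / total_thermal_power   # a few percent is sufficient

year = 365.25 * 24 * 3600.0
dv_10yr = a_anomaly * 10 * year     # ~0.28 m/s, i.e. roughly 1 km/h lost over ten years
dx_1yr = 0.5 * a_anomaly * year**2  # ~4.4e5 m, consistent with the ~400 km per year quoted above

print(f"directed power ~ {directed_power:.0f} W ({anisotropy:.1%} of the waste heat)")
print(f"dv over 10 yr ~ {dv_10yr:.2f} m/s, dx over 1 yr ~ {dx_1yr/1e3:.0f} km")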
Indications from other missions The Pioneers were uniquely suited to discover the effect because they have been flying for long periods of time without additional course corrections. Most deep-space probes launched after the Pioneers either stopped at one of the planets, or used thrusting throughout their mission. The Voyagers flew a mission profile similar to the Pioneers, but were not spin stabilized. Instead, they required frequent firings of their thrusters for attitude control to stay aligned with Earth. Spacecraft like the Voyagers acquire small and unpredictable changes in speed as a side effect of the frequent attitude control firings. This 'noise' makes it impractical to measure small accelerations such as the Pioneer effect; even accelerations as large as 10⁻⁹ m/s² would be undetectable. Newer spacecraft have used spin stabilization for some or all of their mission, including both Galileo and Ulysses. These spacecraft indicate a similar effect, although for various reasons (such as their relative proximity to the Sun) firm conclusions cannot be drawn from these sources. The Cassini mission had reaction wheels as well as thrusters for attitude control, and during cruise could rely for long periods on the reaction wheels alone, thus enabling precision measurements. It also had radioisotope thermoelectric generators (RTGs) mounted close to the spacecraft body, radiating kilowatts of heat in hard-to-predict directions. After Cassini arrived at Saturn, it shed a large fraction of its mass from the fuel used in the insertion burn and the release of the Huygens probe. This increased the acceleration caused by the radiation forces because they were acting on less mass. This change in acceleration allows the radiation forces to be measured independently of any gravitational acceleration. Comparing cruise and Saturn-orbit results shows that for Cassini, almost all the unmodelled acceleration was due to radiation forces, with only a small residual acceleration, much smaller than the Pioneer acceleration, and with opposite sign. The non-gravitational acceleration of the deep space probe New Horizons has been measured as a small sunward acceleration, somewhat larger than the effect on the Pioneers. Modelling of thermal effects indicates an expected sunward acceleration of comparable size, and given the uncertainties, the acceleration appears consistent with thermal radiation as the source of the non-gravitational forces measured. The measured acceleration is slowly decreasing, as would be expected from the decreasing thermal output of the RTG. Potential issues with the thermal solution There are two features of the anomaly, as originally reported, that are not addressed by the thermal solution: periodic variations in the anomaly, and the onset of the anomaly near the orbit of Saturn. First, the anomaly has an apparent annual periodicity and an apparent Earth sidereal daily periodicity with amplitudes that are formally greater than the error budget. However, the same paper also states this problem is most likely not related to the anomaly: "The annual and diurnal terms are very likely different manifestations of the same modeling problem. [...] Such a modeling problem arises when there are errors in any of the parameters of the spacecraft orientation with respect to the chosen reference frame." Second, the value of the anomaly measured over a period during and after the Pioneer 11 Saturn encounter had a relatively high uncertainty and a significantly lower value. The Turyshev et al. 2012 paper applied the thermal analysis to Pioneer 10 only. The Pioneer anomaly was unnoticed until after Pioneer 10 passed its Saturn encounter.
However, the most recent analysis states: "Figure 2 is strongly suggestive that the previously reported "onset" of the Pioneer anomaly may in fact be a simple result of mis-modeling of the solar thermal contribution; this question may be resolved with further analysis of early trajectory data". Previously proposed explanations Before the thermal recoil explanation became accepted, other proposed explanations fell into two classes: "mundane causes" or "new physics". Mundane causes include conventional effects that were overlooked or mis-modeled in the initial analysis, such as measurement error, thrust from gas leakage, or uneven heat radiation. The "new physics" explanations proposed revision of our understanding of gravitational physics. If the Pioneer anomaly had been a gravitational effect due to some long-range modification of the known laws of gravity, it should have affected the orbital motions of the major natural bodies in the same way, yet no such effect was seen (in particular for bodies moving in the regions in which the Pioneer anomaly manifested itself in its presently known form). Hence a gravitational explanation would need to violate the equivalence principle, which states that all objects are affected the same way by gravity. It was therefore argued that increasingly accurate measurements and modelling of the motions of the outer planets and their satellites undermined the possibility that the Pioneer anomaly is a phenomenon of gravitational origin. However, others believed that our knowledge of the motions of the outer planets and dwarf planet Pluto was still insufficient to disprove the gravitational nature of the Pioneer anomaly. The same authors ruled out the existence of a gravitational Pioneer-type extra-acceleration in the outskirts of the Solar System by using a sample of Trans-Neptunian objects. The magnitude of the Pioneer effect is numerically quite close to the product of the speed of light and the Hubble constant, hinting at a cosmological connection, but this is now believed to be of no particular significance. In fact the latest Jet Propulsion Laboratory review (2010), undertaken by Turyshev and Toth, claims to rule out the cosmological connection by considering rather conventional sources, whereas other scientists provided a disproof based on the physical implications of cosmological models themselves. Gravitationally bound objects such as the Solar System, or even the Milky Way, are not supposed to partake of the expansion of the universe; this is known both from conventional theory and by direct measurement. This does not necessarily interfere with paths new physics can take with drag effects from planetary secular accelerations of possible cosmological origin. Deceleration model It has been viewed as possible that a real deceleration is not accounted for in the current model, for several reasons. Gravity It is possible that the deceleration is caused by gravitational forces from unidentified sources such as the Kuiper belt or dark matter. However, this acceleration does not show up in the orbits of the outer planets, so any generic gravitational answer would need to violate the equivalence principle (see modified inertia below). Likewise, the anomaly does not appear in the orbits of Neptune's moons, challenging the possibility that the Pioneer anomaly may be an unconventional gravitational phenomenon based on range from the Sun. Drag The cause could be drag from the interplanetary medium, including dust, solar wind and cosmic rays.
However, the measured densities are too small to cause the effect. Gas leaks Gas leaks, including helium from the spacecraft's radioisotope thermoelectric generators (RTGs), have been suggested as a possible cause. Observational or recording errors The possibility of observational errors, which include measurement and computational errors, has been advanced as a reason for interpreting the data as an anomaly. Hence, this would result in approximation and statistical errors. However, further analysis has determined that significant errors are not likely, because seven independent analyses had shown the existence of the Pioneer anomaly as of March 2010. The effect is so small that it could be a statistical anomaly caused by differences in the way data were collected over the lifetime of the probes. Numerous changes were made over this period, including changes in the receiving instruments, reception sites, data recording systems and recording formats. New physics Because the "Pioneer anomaly" does not show up as an effect on the planets, Anderson et al. speculated that it would be interesting if it were a sign of new physics. Later, with the Doppler-shifted signal confirmed, the team again speculated that one explanation may lie with new physics, if not some unknown systematic explanation. Clock acceleration Clock acceleration was an alternative explanation to an anomalous acceleration of the spacecraft towards the Sun. This theory took notice of an expanding universe, which was thought to create an increasing background 'gravitational potential'. The increased gravitational potential would then accelerate cosmological time. It was proposed that this particular effect causes the observed deviation from the predicted trajectories and velocities of Pioneer 10 and Pioneer 11. From their data, Anderson's team deduced a steady frequency drift in the tracking signal over eight years. This could be mapped onto a clock acceleration theory, which meant all clocks would be changing in relation to a constant acceleration: in other words, that there would be a non-uniformity of time. Moreover, for such a distortion related to time, Anderson's team reviewed several models in which time distortion as a phenomenon is considered. They arrived at the "clock acceleration" model after completion of the review. Although the best model adds a quadratic term to defined International Atomic Time, the team encountered problems with this theory. This then led to non-uniform time in relation to a constant acceleration as the most likely theory. Definition of gravity modified The Modified Newtonian dynamics or MOND hypothesis proposed that the force of gravity deviates from the traditional Newtonian value to a very different force law at very low accelerations, on the order of 10⁻¹⁰ m/s². Given the low accelerations placed on the spacecraft while in the outer Solar System, MOND may be in effect, modifying the normal gravitational equations. The Lunar Laser Ranging experiment, combined with data from the LAGEOS satellites, refutes the idea that a simple gravity modification is the cause of the Pioneer anomaly. Neither the precession of the longitudes of perihelia of the solar planets nor the trajectories of long-period comets have been reported to experience an anomalous gravitational field toward the Sun of the magnitude capable of describing the Pioneer anomaly.
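For orientation, the acceleration scales invoked in the sections above can be put side by side numerically. The short Python sketch below only juxtaposes commonly quoted round values (the MOND constant a0, a Hubble constant of about 70 km/s/Mpc, and the Newtonian solar gravity at a representative distance of 70 AU, all assumptions of this illustration); it does not adjudicate between the explanations.

# Numerical comparison of the accelerations discussed above (illustrative round values).
AU = 1.495978707e11      # astronomical unit, m
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
C = 2.998e8              # speed of light, m/s

a_pioneer = 8.74e-10                       # reported anomalous acceleration, m/s^2
a_sun_70au = G * M_SUN / (70 * AU) ** 2    # Newtonian solar gravity at ~70 AU, ~1.2e-6 m/s^2
a0_mond = 1.2e-10                          # commonly quoted MOND constant, m/s^2
H0 = 70e3 / 3.0857e22                      # ~70 km/s/Mpc expressed in 1/s
c_times_H0 = C * H0                        # ~6.8e-10 m/s^2

print(f"solar gravity at 70 AU : {a_sun_70au:.2e} m/s^2")
print(f"Pioneer anomaly        : {a_pioneer:.2e} m/s^2")
print(f"MOND constant a0       : {a0_mond:.2e} m/s^2")
print(f"c * H0                 : {c_times_H0:.2e} m/s^2")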
Definition of inertia modified MOND can also be interpreted as a modification of inertia, perhaps due to an interaction with vacuum energy, and such a trajectory-dependent theory could account for the different accelerations apparently acting on the orbiting planets and the Pioneer craft on their escape trajectories. A possible terrestrial test for evidence of a different model of modified inertia has also been proposed. Parametric time Another theoretical explanation was based on a possible non-equivalence of atomic time and astronomical time, which could give the same observational fingerprint as the anomaly. Celestial ephemerides in an expanding universe Another proposed explanation of the Pioneer anomaly is that the background spacetime is described by a cosmological Friedmann–Lemaître–Robertson–Walker metric that is not Minkowski flat. In this model of the spacetime manifold, light moves uniformly with respect to the conformal cosmological time, whereas physical measurements are performed with the help of atomic clocks that count the proper time of an observer coinciding with the cosmic time. This difference yields exactly the same numerical value and signature as the Doppler shift measured in the Pioneer experiment. However, this explanation requires the thermal effects to be a small percentage of the total, in contradiction to the many studies that estimate them to be the bulk of the effect. Further research avenues It is possible, but not proven, that this anomaly is linked to the flyby anomaly, which has been observed in other spacecraft. Although the circumstances are very different (planet flyby vs. deep space cruise), the overall effect is similar: a small but unexplained velocity change is observed on top of a much larger conventional gravitational acceleration. The Pioneer spacecraft are no longer providing new data (the last contact was on 23 January 2003) and other deep-space missions that might be studied (Galileo and Cassini) were deliberately disposed of in the atmospheres of Jupiter and Saturn respectively at the ends of their missions. This leaves several remaining options for further research: Further analysis of the retrieved Pioneer data. This includes not only the data that was first used to detect the anomaly, but additional data that until recently was saved only in older, inaccessible computer formats and media. This data was recovered in 2006, converted to more modern formats, and is now available for analysis. The New Horizons spacecraft to Pluto is spin-stabilised for long intervals, and there were proposals to use it to investigate the anomaly. It was known that New Horizons would have the same problem that precluded good data from the cruise portion of the Cassini mission: its RTG is mounted close to the spacecraft body, so thermal radiation from it, bouncing off the spacecraft, will produce a systematic thrust of a not-easily predicted magnitude, as large as or larger than the Pioneer effect. However, it was hoped that despite any large systematic bias from the RTG, the 'onset' of the anomaly at or near the orbit of Saturn might be observed. A dedicated mission has also been proposed. Such a mission would probably need to travel to a large heliocentric distance on a hyperbolic escape orbit. Observations of distant asteroids may provide insights if the anomaly's cause is gravitational. Meetings and conferences about the anomaly A meeting was held at the University of Bremen in 2004 to discuss the Pioneer anomaly.
The Pioneer Explorer Collaboration was formed to study the Pioneer anomaly and has hosted three meetings (2005, 2007, and 2008) at the International Space Science Institute in Bern, Switzerland, to discuss the anomaly and possible means of resolving its source. Notes See also Flyby anomaly References Further reading The original paper describing the anomaly A lengthy survey of several years of debate by the authors of the original 1998 paper documenting the anomaly. The authors conclude, "Until more is known, we must admit that the most likely cause of this effect is an unknown systematic. (We ourselves are divided as to whether 'gas leaks' or 'heat' is this 'most likely cause.')" The ISSI meeting above has an excellent reference list divided into sections such as primary references, attempts at explanation, proposals for new physics, possible new missions, popular press, and so on. A sampling of these is shown here: Theory establishes a gravitational connection between the unexplained periastron advance observed in two binary star systems and the Pioneer anomaly. Further elaboration on a dedicated mission plan (restricted access) Popular press – STVG (Scalar-tensor-vector gravity) theory claims to predict Pioneer anomaly External links Shows number of publications about the Pioneer anomaly on arXiv.org, by year. Pioneer program Astrodynamics
Pioneer anomaly
[ "Engineering" ]
4,163
[ "Astrodynamics", "Aerospace engineering" ]
980,831
https://en.wikipedia.org/wiki/Mental%20abacus
The abacus system of mental calculation is a system where users mentally visualize an abacus to carry out arithmetical calculations. No physical abacus is used; only the answers are written down. Calculations can be made at great speed in this way. For example, in the Flash Anzan event at the All Japan Soroban Championship, champion Takeo Sasano was able to add fifteen three-digit numbers in just 1.7 seconds. This system is being propagated in China, Singapore, South Korea, Thailand, Malaysia, and Japan. Mental calculation is said to improve mental capability, speed of response, memory, and concentration. Many veteran and prolific abacus users in China, Japan, South Korea, and elsewhere, who once used the abacus daily, naturally tend to stop using the physical abacus and instead perform calculations by visualizing one. This was supported by EEG measurements: the right hemisphere of visualisers showed heightened activity when calculating, compared with others using an actual abacus to perform calculations. The abacus can be used routinely to perform addition, subtraction, multiplication, and division; it can also be used to extract square and cube roots. See also Abacus logic Abacus Trachtenberg system Chisanbop References External links Mental abacus does away with words, New Scientist, August 9, 2011 Competitions Games of mental skill Mental calculation Abacus Mathematics competitions
Mental abacus
[ "Mathematics" ]
286
[ "Mathematical objects", "Number stubs", "Mental calculation", "Arithmetic", "Numbers" ]
980,841
https://en.wikipedia.org/wiki/Frequency%20extender
In broadcast engineering, a frequency extender is an electronic device that expands the usable frequency range of POTS telephone lines. It also allows high-fidelity analog audio to be sent over regular telephone lines, without the loss of lower audio frequencies (bass). It is an extended concept of a telephone hybrid. The concept uses frequency shifting to overcome the narrow bandwidth of regular telephone systems, extending the usable range by approximately two octaves. The input signal is sent on one telephone line as-is, or in some cases upshifted to provide extra low-frequency response, and sent on a second line shifted down by 3 kHz, which is normally the upper bandpass limit in telephony. Thus, an audio frequency of 5 kHz is sent at 2 kHz. A receiver on the other end then shifts the second line back up and mixes it with the first. This results in greatly improved audio, adding a full octave of range, and pushing the total bandpass to 6 kHz. The sound is then acceptable for voice, if not for music. It is also possible to add other lines, each increasing the bandpass by another 3 kHz. However, the law of diminishing returns takes over, because each successive octave is double the size of the last. A third line pushes the bandpass up 50% to 9 kHz, equivalent to AM radio. A fourth line would push it up 33% to 12 kHz. FM radio quality would require five telephone lines to be installed, pushing the bandpass up 25% to 15 kHz. The audio is shifted down by 6,9, and 12 kHz respectively for each additional line. Frequency extenders have been nearly eliminated by POTS codecs. See also Remote broadcast References Broadcast engineering Telephony E
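The line-count arithmetic in the entry above can be made concrete with a short sketch. The Python snippet below is purely illustrative: the 3 kHz per-line figure and the per-line downshifts are taken from the description above, while the constant and function names are invented for this example.

# Sketch of the bandwidth arithmetic described above: each extra POTS line carries
# another 3 kHz slice of the programme audio, shifted down into the 0-3 kHz telephone
# passband and shifted back up at the receiving end.
LINE_BANDWIDTH_HZ = 3_000   # usable passband of a single telephone line

def total_bandpass(num_lines: int) -> int:
    """Composite audio bandwidth when num_lines lines are combined."""
    return num_lines * LINE_BANDWIDTH_HZ

def downshift_for_line(line_index: int) -> int:
    """How far (Hz) the slice sent on a given line is shifted down (line 0 is sent as-is)."""
    return line_index * LINE_BANDWIDTH_HZ

for n in range(1, 6):
    gain = total_bandpass(n) / total_bandpass(n - 1) - 1 if n > 1 else None
    shifts = [downshift_for_line(i) for i in range(n)]
    print(f"{n} line(s): bandpass {total_bandpass(n)/1000:.0f} kHz, "
          f"per-line downshifts {shifts} Hz"
          + (f", +{gain:.0%} over {n-1} lines" if n > 1 else ""))

# Output mirrors the text: 2 lines -> 6 kHz, 3 -> 9 kHz (+50%), 4 -> 12 kHz (+33%),
# 5 -> 15 kHz (+25%), with slices shifted down by 0, 3, 6, 9 and 12 kHz respectively,
# illustrating the diminishing returns of each additional line.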
Frequency extender
[ "Physics", "Engineering" ]
351
[ "Scalar physical quantities", "Broadcast engineering", "Frequency", "Physical quantities", "Electronic engineering", "Wikipedia categories named after physical quantities" ]
980,996
https://en.wikipedia.org/wiki/List%20of%20massively%20multiplayer%20online%20games
This is a list of notable massively multiplayer online games (MMOG), sorted by category. Massively multiplayer online first-person shooter games (MMOFPS) Massively multiplayer online role-playing games (MMORPG) Massively multiplayer online real-time strategy games (MMORTS) Massively multiplayer online turn-based strategy games Action Armored Warfare Cartoon Network Universe: FusionFall CrimeCraft DC Universe Online Infantry Online SubSpace War Thunder World of Tanks World of Warplanes World of Warships Browser games Agar.io Bin Weevils Blood Wars Castle of Heroes Club Penguin Command & Conquer: Tiberium Alliances Dark Orbit Empire & State Glitch Hattrick Ikariam Illyriad Imperia Online Little Space Heroes Lord of Ultima Miniconomy Moshi Monsters National Geographic Animal Jam NEO Shifters Ogame Omerta Pardus Pirate Galaxy Planetarion Poptropica Realm of the Mad God Runes of Magic Samurai Taisen Sentou Gakuen Slither.io Smallworlds Surviv.io Tenvi Terra Militaris TirNua Transformice Travian Tribal Wars Twin Skies Urban Dead World of the Living Dead: Resurrection Browser games with 3D rendering Battlestar Galactica Online Dark Orbit (Since 2015) Dead Frontier Family Guy Online Fragoria Free Realms RuneScape Tanki Online Building games Active Worlds Wurm Online Exploration Uru Live Puzzle Yohoho! Puzzle Pirates Social games Active Worlds Animal Jam Classic Bin Weevils EGO Flyff Free Realms Furcadia Habbo JumpStart Nicktropolis OurWorld Pirate101 (sister game to Wizard101) Red Light Center Second Life The Sims Online SmallWorlds Star Wars: The Old Republic Tanki Online There TirNua Toontown Online Transformice Virtual Magic Kingdom Virtual World of Kaneva vSide Wizard101 (sister game to Pirate101) Woozworld Space simulation Dual Universe Elite Dangerous Eve Online See also List of free massively multiplayer online games List of free multiplayer online games Multiplayer video game Massively multiplayer online role-playing game (MMORPG) Browser based game Chronology of MUDs MMOGs list games Massively multiplayer online it:Massively multiplayer online game
List of massively multiplayer online games
[ "Technology" ]
443
[ "Mobile content", "Social software" ]
981,045
https://en.wikipedia.org/wiki/Three-key%20exposition
In music, the three-key exposition is a particular kind of exposition used in sonata form. Normally, a sonata form exposition has two main key areas. The first asserts the primary key of the piece, that is, the tonic. The second section moves to a different key, establishes that key firmly, arriving ultimately at a cadence in that key. For the second key, composers normally chose the dominant for major-key sonatas, and the relative major (or less commonly, the minor-mode dominant) for minor-key sonatas. The three-key exposition moves not directly to the dominant or relative major, but indirectly via a third key; hence the name. Examples A very early example appears in the first movement of Haydn's String Quartet in D major, Op. 17 No. 6: the three keys are D major, C major, and A major. (C major is prepared by a modulation to its relative minor A minor, which happens to be the dominant minor of the original key.) Ludwig van Beethoven wrote a number of sonata movements during the earlier part of his career with three-key expositions. For the "third" (that is, the intermediate) key, Beethoven made various choices: the dominant minor (Piano Sonata No. 2, Op. 2 no. 2; String Quartet No. 5, Op. 18 no. 5), the supertonic minor (Piano Sonata No. 3, Op. 2 no. 3), and the relative minor (Piano Sonata No. 7, Op. 10 no. 3). Later, Beethoven used the supertonic major (Piano Sonata No. 9, Op. 14 no. 1, Piano Sonata No. 11, Op. 22), which is only a mild sort of three-key exposition, since the supertonic major is the dominant of the dominant, and commonly arises in any event as part of the modulation. As he entered his so-called "middle period," Beethoven abandoned the three-key exposition. This was part of a general change in the composer's work in which he moved closer to the older practice of Haydn, writing less discursive and more closely organized sonata movements. Franz Schubert, who liked discursive forms for the entirety of his short career, also employed the three-key expositions in many of his sonata movements. A famous example is the first movement of the Death and the Maiden Quartet in D minor, in which the exposition moves to F major and then A minor (translated to D major and minor respectively in the recapitulation), a formula that is repeated in the final movement; another is the Violin Sonata in A major (in which the second theme appears in G major and B major, while only the closing passage of the exposition is in the dominant, E major). His B major piano sonata, D 575, even uses a four-key exposition (B major, G major, E major, F-sharp major): this key scheme is literally transposed up a fourth for the recapitulation. The finale of his sixth symphony (D 589) is an even more extreme case: its exposition passes from C major to G major by way of A-flat major, F major, A major, and E-flat major, making a six-key exposition. Felix Mendelssohn followed the Death and the Maiden example in the first movement of his second Piano Trio, in which the E flat major second theme gives way to a G minor close (transposed to C major and minor in the recapitulation). The first movement of Frédéric Chopin's Piano Concerto in F minor also has a three-key exposition (F minor, A-flat major, C minor). 
The first movement of the second cello sonata by Brahms also employs a three-key exposition moving to C major and then A minor, the exposition of the first movement of the String Sextet in B flat involves an intervening theme in A major before reaching F, and the Piano Quartet in G minor involves secondary themes in D minor and major respectively (the first of these being omitted in the recapitulation and the second transposed to E flat major moving back to G minor). The D minor violin sonata has a final movement that moves through a calm second theme in C major before closing the exposition in A minor. Further reading Longyear, Rey M., and Kate R. Covington (1988). Sources of the three-key exposition. The Journal of Musicology 6(4), pp. 448-470. Rosen, Charles (1985) Sonata Forms. New York: Norton. Graham G. Hunt; When Structure and Design Collide: The Three-Key Exposition Revisited, Music Theory Spectrum, Volume 36, Issue 2, 1 December 2014, Pages 247–269. Formal sections in music analysis
Three-key exposition
[ "Technology" ]
984
[ "Components", "Formal sections in music analysis" ]
981,125
https://en.wikipedia.org/wiki/Operation%20Highjump
Operation HIGHJUMP, officially titled The United States Navy Antarctic Developments Program, 1946–1947, (also called Task Force 68), was a United States Navy (USN) operation to establish the Antarctic research base Little America IV. The operation was organized by Rear Admiral Richard E. Byrd, Jr., USN, Officer in Charge, Task Force 68, and led by Rear Admiral Ethan Erik Larson, USN, Commanding Officer, Task Force 68. Operation HIGHJUMP commenced 26 August 1946 and ended in late February 1947. Task Force 68 included 4,700 men, 70 ships, and 33 aircraft. HIGHJUMP's objectives, according to the U.S. Navy report of the operation, were: Training personnel and testing equipment in frigid conditions; Consolidating and extending the United States' sovereignty over the largest practicable area of the Antarctic continent (publicly denied as a goal before the expedition ended); Determining the feasibility of establishing, maintaining, and utilizing bases in the Antarctic and investigating possible base sites; Developing techniques for establishing, maintaining, and utilizing air bases on ice, with particular attention to later applicability of such techniques to operations in interior Greenland, where conditions are comparable to those in the Antarctic; Amplifying existing stores of knowledge of electromagnetic, geological, geographic, hydrographic, and meteorological propagation conditions in the area; Supplementary objectives of the Nanook expedition (a smaller equivalent conducted off eastern Greenland). Timeline The Western Group of ships reached the Marquesas Islands on December 12, 1946, whereupon the USS Henderson and USS Cacapon set up weather monitoring stations. By December 24, the USS Currituck had begun launching aircraft on reconnaissance missions. The Eastern Group of ships reached Peter I Island in late December 1946. On December 30, 1946, the Martin PBM-5 George 1 crashed on Thurston Island killing Ensign Maxwell A. Lopez, ARM1 Wendell K. Henderson, and ARM1 Frederick W. Williams. The other six crew members were rescued 13 days later. These and Vance N. Woodall, who died on January 21, 1947, were the only fatalities during Operation HIGHJUMP. On January 1, 1947, Lieutenant Commander Thompson and Chief Petty Officer John Marion Dickison utilized "Jack Browne" masks and DESCO oxygen rebreathers to log the first dive by Americans under the Antarctic. Paul Siple was the senior U.S. War Department representative on the expedition. Siple was the same Eagle Scout who accompanied Byrd on the previous Byrd Antarctic expeditions. The Central Group of ships reached the Bay of Whales on January 15, 1947, where they began construction of Little America IV. Naval ships and personnel were withdrawn back to the United States in late February 1947, and the expedition was terminated due to the early approach of winter and worsening weather conditions. Byrd discussed the lessons learned from the operation in an interview with Lee van Atta of International News Service held aboard the expedition's command ship, the USS Mount Olympus. The interview appeared in the Wednesday, March 5, 1947, edition of the Chilean newspaper El Mercurio and read in part as follows: Admiral Richard E. Byrd warned today that the United States should adopt measures of protection against the possibility of an invasion of the country by hostile planes coming from the polar regions. 
The admiral explained that he was not trying to scare anyone, but the cruel reality is that in case of a new war, the United States could be attacked by planes flying over one or both poles. This statement was made as part of a recapitulation of his own polar experience, in an exclusive interview with International News Service. Talking about the recently completed expedition, Byrd said that the most important result of his observations and discoveries is the potential effect that they have in relation to the security of the United States. The fantastic speed with which the world is shrinking – recalled the admiral – is one of the most important lessons learned during his recent Antarctic exploration. I have to warn my compatriots that the time has ended when we were able to take refuge in our isolation and rely on the certainty that the distances, the oceans, and the poles were a guarantee of safety. After the operation ended, a follow-up Operation Windmill returned to the area in order to provide ground-truthing to the aerial photography of HIGHJUMP from 1947 to 1948. Finn Ronne also financed a private operation to the same territory until 1948. As with other U.S. Antarctic expeditions, interested persons were allowed to send letters with enclosed envelopes to the base, where commemorative cachets were added to their enclosures, which were then returned to the senders. These souvenir philatelic covers are readily available at low cost. It is estimated that at least 150,000 such envelopes were produced, though their final number may be considerably higher. Participating units Task Force 68 Rear Admiral Richard H. Cruzen, USN, Commanding Eastern Group (Task Group 68.3) Capt. George J. Dufek, USN, Commanding Seaplane Tender USS Pine Island. Capt. Henry H. Caldwell, USN, Commanding Destroyer USS Brownson. Cdr. H.M.S. Gimber, USN, Commanding Tanker USS Canisteo. Capt. Edward K. Walker, USN, Commanding Western Group (Task Group 68.1) Capt. Charles A. Bond, USN, Commanding Seaplane Tender USS Currituck. Capt. John E. Clark, USN, Commanding Destroyer USS Henderson. Capt. C.F. Bailey, USN, Commanding Tanker USS Cacapon. Capt. R.A. Mitchell, USN, Commanding Central Group (Task Group 68.2) Rear Admiral Richard H. Cruzen, USN, Commanding Officer Communications and Flagship USS Mount Olympus. Capt. R. R. Moore, USN, Commanding Supplyship USS Yancey. Capt. J.E. Cohn, USN, Commanding Supplyship USS Merrick. Capt. John J. Hourihan, USN, Commanding Submarine USS Sennet. Cdr. Joseph B. Icenhower, USN, Commanding Icebreaker USS Burton Island. CDR Gerald L. Ketchum, USN, Commanding Icebreaker USCGC Northwind. Capt. Charles W. Thomas, USCG, Commanding Carrier Group (Task Group 68.4) Rear Adm. Richard E. Byrd, Jr. USN, (Ret), Officer in Charge Aircraft carrier and flagship USS Philippine Sea. Capt. Delbert S. Cornwell, USN, Commanding Base Group (Task Group 68.5) Capt. Clifford M. Campbell, USN, Commanding Base Little America IV Fatalities On December 30, 1946, aviation radiomen Wendell K. Henderson, Fredrick W. Williams, and Ensign Maxwell A. Lopez were killed when their plane crashed (named George 1a Martin PBM Mariner) during a blizzard. The surviving six crew members were rescued 13 days later, including aviation radioman James H. Robbins and co-pilot William Kearns. A plaque honoring the three killed crewmen was later erected at the McMurdo Station research base, and Mount Lopez on Thurston Island was named in honor of killed naval aviator Maxwell A. Lopez. 
In December 2004, an attempt was made to locate the remains of the plane. In 2007 a group called the George One Recovery Team was unsuccessful in trying to get direct military involvement and raise extensive funds from the United States Congress to try to find the bodies of the three men killed in the crash. On January 21, 1947, Vance N. Woodall died during a "ship unloading accident". In a crew profile, deckman Edward Beardsley described his worst memory as "when Seaman Vance Woodall died on the Ross Ice Shelf under a piece of roller equipment designed to 'pave' the ice to build an airstrip." In media The documentary about the expedition The Secret Land was filmed entirely by military photographers (both USN and US Army) and narrated by actors Robert Taylor, Robert Montgomery, and Van Heflin. It features Chief of Naval Operations Fleet Admiral Chester W. Nimitz in a scene where he is discussing Operation HIGHJUMP with admirals Byrd and Cruzen. The film re-enacted scenes of critical events, such as shipboard damage control and Admiral Byrd throwing items out of an airplane to lighten it to avoid crashing into a mountain. It won the 1948 Academy Award for Best Documentary Feature Film. See also List of Antarctic expeditions Military activity in the Antarctic New Swabia References Bibliography Further reading Navy Proudly Ends Its Antarctic Mission; Air National Guard Assumes 160-Year Task. Chicago Tribune; February 22, 1998. Antarctic Mayday: The Crash of the George One. Story of one of the survivors – James Haskin (Robbie) Robbins Operation Highjump: A Tragedy on the Ice External links The Papers of Harry B. Eisenberg Jr. at Dartmouth College Library History of Antarctica United States and the Antarctic Oceanography Military in Antarctica Aviation in Antarctica 1946 in Antarctica 1947 in Antarctica History of the Ross Dependency
Operation Highjump
[ "Physics", "Environmental_science" ]
1,846
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
981,153
https://en.wikipedia.org/wiki/Exergonic%20process
An exergonic process is one in which there is a positive flow of energy from the system to the surroundings. This is in contrast with an endergonic process. Constant pressure, constant temperature reactions are exergonic if and only if the Gibbs free energy change is negative (∆G < 0). "Exergonic" (from the prefix exo-, derived from the Greek word ἔξω exō, "outside", and the suffix -ergonic, derived from the Greek word ἔργον ergon, "work") means "releasing energy in the form of work". In thermodynamics, work is defined as the energy moving from the system (the internal region) to the surroundings (the external region) during a given process. All physical and chemical systems in the universe follow the second law of thermodynamics and proceed in a downhill, i.e., exergonic, direction. Thus, left to itself, any physical or chemical system will proceed, according to the second law of thermodynamics, in a direction that tends to lower the free energy of the system, and thus to expend energy in the form of work. These reactions occur spontaneously. A chemical reaction is also exergonic when spontaneous. Thus in this type of reaction the Gibbs free energy decreases. The entropy is included in any change of the Gibbs free energy. This differs from an exothermic reaction or an endothermic reaction, where the entropy is not included. The Gibbs free energy is calculated with the Gibbs–Helmholtz equation: ΔG = ΔH − T·ΔS, where: T = temperature in kelvins (K) ΔG = change in the Gibbs free energy ΔS = change in entropy (at 298 K) as ΔS = Σ{S(Product)} − Σ{S(Reagent)} ΔH = change in enthalpy (at 298 K) as ΔH = Σ{H(Product)} − Σ{H(Reagent)} A chemical reaction progresses spontaneously only when the Gibbs free energy decreases; in that case ΔG is negative. In exergonic reactions ΔG is negative and in endergonic reactions ΔG is positive: exergonic: ΔG < 0, endergonic: ΔG > 0, where ΔG equals the change in the Gibbs free energy after completion of the chemical reaction. See also Endergonic Endergonic reaction Exothermic process Endothermic process Exergonic reaction Exothermic reaction Endothermic reaction Endotherm Ectotherm References Thermodynamic processes Chemical thermodynamics
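A minimal worked example of the relation ΔG = ΔH − T·ΔS described in the entry above is sketched below in Python. The thermodynamic data are approximate textbook values for the ammonia synthesis N2 + 3 H2 -> 2 NH3 near 298 K, used purely for illustration, and the function name is our own.

# Worked example of ΔG = ΔH − T·ΔS and the exergonic/endergonic classification.
def gibbs_free_energy_change(delta_h, delta_s, temperature):
    """Return ΔG in J/mol given ΔH (J/mol), ΔS (J/(mol·K)) and T (K)."""
    return delta_h - temperature * delta_s

T = 298.15            # K
delta_h = -92.2e3     # J/mol, approximate ΔH for N2 + 3 H2 -> 2 NH3
delta_s = -198.7      # J/(mol·K), approximate ΔS for the same reaction

delta_g = gibbs_free_energy_change(delta_h, delta_s, T)
label = "exergonic (ΔG < 0)" if delta_g < 0 else "endergonic (ΔG > 0)"
print(f"ΔG ≈ {delta_g/1000:.1f} kJ/mol -> {label} at {T} K")   # about -33 kJ/mol

# Because ΔS is negative here, raising the temperature eventually makes ΔG positive,
# i.e. the same reaction becomes endergonic at high temperature:
print(gibbs_free_energy_change(delta_h, delta_s, 800) / 1000)  # about +67 kJ/mol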
Exergonic process
[ "Physics", "Chemistry" ]
546
[ "Chemical thermodynamics", "Thermodynamic processes", "Thermodynamics" ]
981,225
https://en.wikipedia.org/wiki/Nocebo
A nocebo effect is said to occur when a patient's expectations for a treatment cause the treatment to have a worse effect than it otherwise would have. For example, when a patient anticipates a side effect of a medication, they can experience that effect even if the "medication" is actually an inert substance. The complementary concept, the placebo effect, is said to occur when expectations improve an outcome. More generally, the nocebo effect is falling ill simply by consciously or subconsciously anticipating a harmful event. This definition includes anticipated events other than medical treatment. It has been applied to Havana Syndrome, where purported victims were anticipating attacks by foreign adversaries. This definition also applies to cases of electromagnetic hypersensitivity. Both placebo and nocebo effects are presumably psychogenic but can induce measurable changes in the body. One article that reviewed 31 studies on nocebo effects reported a wide range of symptoms that could manifest as nocebo effects, including nausea, stomach pains, itching, bloating, depression, sleep problems, loss of appetite, sexual dysfunction, and severe hypotension. Etymology and usage Walter Kennedy coined the term nocebo (Latin nocebo, "I shall harm", from noceo, "I harm") in 1961 to denote the counterpart of placebo (Latin placebo, "I shall please", from placeo, "I please"), a substance that may produce a beneficial, healthful, pleasant, or desirable effect. Kennedy emphasized that his use of the term nocebo refers strictly to a subject-centered response, a quality "inherent in the patient rather than in the remedy". That is, he rejected the use of the term for pharmacologically induced negative side effects such as the ringing in the ears caused by quinine. That is not to say that the patient's psychologically induced response may not include physiological effects. For example, an expectation of pain may induce anxiety, which in turn causes the release of cholecystokinin, which facilitates pain transmission. Response In the narrowest sense, a nocebo response occurs when a drug-trial subject's symptoms are worsened by the administration of an inert, sham, or dummy (simulator) treatment, called a placebo. Placebos contain no chemicals (or any other agents) that could cause any of the observed worsening in the subject's symptoms, so any change for the worse must be due to some subjective factor. Adverse expectations can also cause anesthetic medications' analgesic effects to disappear. The worsening of the subject's symptoms or reduction of beneficial effects is a direct consequence of their exposure to the placebo, but the placebo has not chemically generated those symptoms. Because this generation of symptoms entails a complex of "subject-internal" activities, we can never speak in the strictest sense in terms of simulator-centered "nocebo effects", but only in terms of subject-centered "nocebo responses". Some observers attribute nocebo responses (or placebo responses) to a subject's gullibility, but there is no evidence that someone who manifests a nocebo/placebo response to one treatment will manifest a nocebo/placebo response to any other treatment; i.e., there is no fixed nocebo/placebo-responding trait or propensity. McGlashan, Evans & Orne found no evidence in 1969 of what they termed a placebo personality.
In 1954, Lasagna, Mosteller, von Felsinger, and Beecher found in a carefully designed study that there was no way that any observer could determine, by testing or by interview, which subjects would manifest placebo reactions and which would not. Experiments have shown that no relationship exists between a person's measured hypnotic susceptibility and their manifestation of nocebo or placebo responses. Based on a biosemiotic model (2022), Goli explains how harm and/or healing expectations lead to a multimodal image and form transient allostatic or homeostatic interoceptive feelings, demonstrating how repetitive experiences of a potential body induce epigenetic changes and form new attractors, such as nocebos and placeboes, in the actual body. Effects Side effects of drugs It has been shown that, due to the nocebo effect, warning patients about drugs' side effects can contribute to the causation of such effects, whether the drug is real or not. This effect has been observed in clinical trials: according to a 2013 review, the dropout rate among placebo-treated patients in a meta-analysis of 41 clinical trials of Parkinson's disease treatments was 8.8%. A 2013 review found that nearly 1 out of 20 patients receiving a placebo in clinical trials for depression dropped out due to adverse events, which were believed to have been caused by the nocebo effect. In January 2022, a systematic review and meta-analysis concluded that nocebo responses accounted for 72% of adverse effects after the first COVID-19 vaccine dose and 52% after the second dose. Many studies show that the formation of nocebo responses is influenced by inappropriate health education, media coverage, and other discourse makers who induce health anxiety and negative expectations. Researchers studying the side effects of statins in the UK determined that a large proportion of reported side effects were related not to any pharmacological cause but to the nocebo effect. In the UK, publicity in 2013 about the apparent side effects caused hundreds of thousands of patients to stop taking statins, leading to an estimated 2,000 additional cardiovascular events in the subsequent years. Electromagnetic hypersensitivity Evidence suggests that the symptoms of electromagnetic hypersensitivity are caused by the nocebo effect. Pain Verbal suggestion can cause hyperalgesia (increased sensitivity to pain) and allodynia (perception of a tactile stimulus as painful) as a result of the nocebo effect. Nocebo hyperalgesia is believed to involve the activation of cholecystokinin receptors. Ambiguity of medical usage Stewart-Williams and Podd argue that using the contrasting terms "placebo" and "nocebo" for inert agents that produce pleasant, health-improving, or desirable outcomes and unpleasant, health-diminishing, or undesirable outcomes (respectively) is extremely counterproductive. For example, precisely the same inert agents can produce analgesia and hyperalgesia, the first of which, on this definition, would be a placebo, and the second a nocebo. A second problem is that the same effect, such as immunosuppression, may be desirable for a subject with an autoimmune disorder, but undesirable for most other subjects. Thus, in the first case, the effect would be a placebo, and in the second a nocebo. A third problem is that the prescriber does not know whether the relevant subjects consider the effects they experience desirable or undesirable until some time after the drugs have been administered.
A fourth is that the same phenomena are generated in all the subjects, and generated by the same drug, which is acting in all of the subjects through the same mechanism. Yet because the phenomena in question have been subjectively considered desirable to one group but not the other, the phenomena are now being labeled in two mutually exclusive ways (i.e., placebo and nocebo), giving the false impression that the drug in question has produced two different phenomena. Ambiguity of anthropological usage Some people maintain that belief can kill (e.g., voodoo death: Cannon in 1942 describes a number of instances from a variety of different cultures) and or heal (e.g., faith healing). A self-willed death (due to voodoo hex, evil eye, pointing the bone procedure, etc.) is an extreme form of a culture-specific syndrome or mass psychogenic illness that produces a particular form of psychosomatic or psychophysiological disorder resulting in psychogenic death. Rubel in 1964 spoke of "culture-bound" syndromes, those "from which members of a particular group claim to suffer and for which their culture provides an etiology, diagnosis, preventive measures, and regimens of healing". Certain anthropologists, such as Robert Hahn and Arthur Kleinman, have extended the placebo/nocebo distinction into this realm to allow a distinction to be made between rituals, such as faith healing, performed to heal, cure, or bring benefit (placebo rituals) and others, such as "pointing the bone", performed to kill, injure or bring harm (nocebo rituals). As the meaning of the two interrelated and opposing terms has extended, we now find anthropologists speaking, in various contexts, of nocebo or placebo (harmful or helpful) rituals: that might entail nocebo or placebo (unpleasant or pleasant) procedures; about which subjects might have nocebo or placebo (harmful or beneficial) beliefs; that are delivered by operators that might have nocebo or placebo (pathogenic, disease-generating or salutogenic, health-promoting) expectations; that are delivered to subjects that might have nocebo or placebo (negative, fearful, despairing or positive, hopeful, confident) expectations about the ritual; that are delivered by operators who might have nocebo or placebo (malevolent or benevolent) intentions, in the hope that the rituals will generate nocebo or placebo (lethal, injurious, harmful or restorative, curative, healthy) outcomes; and, that all of this depends upon the operator's overall beliefs in the nocebo ritual's harmful nature or the placebo ritual's beneficial nature. Yet it may become even more terminologically complex, for as Hahn and Kleinman indicate, there can also be cases of paradoxical nocebo outcomes from placebo rituals and placebo outcomes from nocebo rituals (see also unintended consequences). In 1973, writing from his extensive experience of treating cancer (including more than 1,000 melanoma cases) at Sydney Hospital, Milton warned of the impact of the delivery of a prognosis, and how many of his patients, upon receiving their prognosis, gave up hope and died a premature death: "there is a small group of patients in whom the realization of impending death is a blow so terrible that they are quite unable to adjust to it, and they die rapidly before the malignancy seems to have developed enough to cause death. This problem of self-willed death is in some ways analogous to the death produced in primitive peoples by witchcraft ('pointing the bone')". 
Ethics Some researchers have pointed out that the harm caused by communicating with patients about potential adverse events of treatment raises an ethical issue. To respect a patient's autonomy, one must inform them about the harms a treatment may cause. Yet the way in which potential harms are communicated could cause additional harm, which may violate the ethical principle of non-maleficence. It is possible that nocebo effects can be reduced while respecting autonomy by using different models of informed consent, including the use of framing effects and authorized concealment. See also Notes References External links Nocebo and nocebo effect The nocebo response The Nocebo Effect: Placebo's Evil Twin What modifies a healing response The science of voodoo: When mind attacks body, New Scientist The Effect of Treatment Expectation on Drug Efficacy: Imaging the Analgesic Benefit of the Opioid Remifentanil This Video Will Hurt (The Nocebo Effect), via YouTube BBC Discovery program on the nocebo effect What is the Nocebo effect? Clinical research Cultural anthropology Drug discovery 1960s neologisms Latin medical words and phrases Medical terminology Medicinal chemistry Mind–body interventions Somatic psychology
Nocebo
[ "Chemistry", "Biology" ]
2,459
[ "Life sciences industry", "Drug discovery", "nan", "Medicinal chemistry", "Biochemistry" ]
981,465
https://en.wikipedia.org/wiki/Yi%20Xing
Yi Xing (, 683–727), born Zhang Sui (), was a Chinese astronomer, Buddhist monk, inventor, mathematician, mechanical engineer, and philosopher during the Tang dynasty. His astronomical celestial globe featured a liquid-driven escapement, the first in a long tradition of Chinese astronomical clockworks. Science and technology Astrogeodetic survey In the early 8th century, the Tang court put Yi Xing in charge of an astrogeodetic survey. This survey had many purposes. It was established in order to obtain new astronomical data that would aid in the prediction of solar eclipses. The survey was also initiated so that flaws in the calendar system could be corrected and a new, updated calendar installed in its place. The survey was also essential in determining the arc measurement, i.e., the length of a meridian arc, although Yi Xing, who did not know the Earth was spherical, did not conceptualize his measurements in these terms. This would resolve the confusion created by the earlier practice of using the difference between shadow lengths of the sun observed at the same time at two places to determine the ground distance between them. Yi Xing had thirteen test sites established throughout the empire, extending from Jiaozhou in Vietnam — at latitude 17°N — to the region immediately south of Lake Baikal — latitude 50°N. There were three observations made at each site, one for the height of Polaris, one for the shadow lengths of summer, and one for the shadow lengths of winter. The latitudes were determined from this data, while the Tang calculation for the length of one degree of meridian was fairly accurate compared to modern calculations. Yi Xing understood the variations in the length of a degree of meridian, and criticized earlier scholars who permanently fixed an estimate for shadow lengths for the duration of the entire year. The escapement and celestial globe Yi Xing was famed for his genius, known to have calculated the number of possible positions on a go board game (though, lacking a symbol for zero, he had difficulties expressing the number). He, along with his associate, the mechanical engineer and politician Liang Lingzan, is best known for applying the earliest-known escapement mechanism to a water-powered celestial globe. However, Yi Xing's mechanical achievements were built upon the knowledge and efforts of previous Chinese mechanical engineers, such as the statesman and master of gear systems Zhang Heng (78–139) of the Han dynasty, the mechanical engineer Ma Jun (200–265) of the Three Kingdoms, and the Daoist Li Lan (c. 450) of the Southern and Northern Dynasties period. It was the earlier Chinese inventor Zhang Heng during the Han dynasty who was the first to apply hydraulic power (i.e. a waterwheel and water clock) in mechanically-driving and rotating his equatorial armillary sphere. The arrangement followed the model of a water-wheel using the drip of a clepsydra (see water clock), which ultimately exerted force on a lug to rotate toothed-gears on a polar-axis shaft. With this, the slow computational movement rotated the armillary sphere according to the recorded movements of the planets and stars. Yi Xing also owed much to the scholarly followers of Ma Jun, who had employed horizontal jack-wheels and other mechanical toys worked by waterwheels. The Daoist Li Lan was an expert at working with water clocks, creating steelyard balances for weighing water that was used in the tank of the clepsydra, providing more inspiration for Yi Xing. 
Like the earlier water-power employed by Zhang Heng and the later escapement mechanism in the astronomical clock tower engineered and erected by Su Song (1020–1101), Yi Xing's celestial globe employed water-power in order for it to rotate and function properly. The British biochemist, historian, and sinologist Joseph Needham states (Wade–Giles spelling): In regards to mercury instead of water (as noted in the quote above), the first to apply liquid mercury for motive power of an armillary sphere was Zhang Sixun in 979 AD (because mercury would not freeze during winter). During his age, the Song dynasty (960–1279) era historical text of the Song Shi mentions Yi Xing and the reason why his armillary sphere did not survive the ages after the Tang (Wade–Giles spelling): Earlier Tang era historical texts of the 9th century have this to say of Yi Xing's work in astronomical instruments in the 8th century (Wade–Giles spelling): Buddhist scholarship Yi Xing wrote a commentary on the Mahavairocana Tantra. This work had a strong influence on the Japanese monk Kūkai and was key in his establishment of Shingon Buddhism. In his honor At the Tiantai-Buddhist Guoqing Temple of Mount Tiantai in Zhejiang Province, there is a Chinese pagoda erected directly outside the temple known as the Memorial Pagoda of Monk Yi Xing. His tomb is also located on Mount Tiantai. See also List of Chinese people List of inventors List of mechanical engineers Verge escapement Villard de Honnecourt Notes References Bowman, John S. (2000). Columbia Chronologies of Asian History and Culture. New York: Columbia University Press. Fry, Tony (2001). The Architectural Theory Review: Archineering in Chinatime. Sydney: University of Sydney Press. Ju, Zan, "Yixing". Encyclopedia of China (Religion Edition), 1st ed. Needham, Joseph (1986). Science and Civilization in China: Volume 3. Taipei: Caves Books, Ltd. Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 2. Taipei: Caves Books, Ltd. Boscaro, Adriana (2003) Rethinking Japan: Social Sciences, Ideology and Thought. Routledge. 0-904404-79-x p. 330 External links Yi Xing at Chinaculture.org Yi Xing's Tomb Tiantai Mountain Yi Xing at the University of Maine 683 births 727 deaths 8th-century Buddhists 8th-century Buddhist monks 8th-century Chinese astronomers 8th-century Chinese philosophers 8th-century Chinese writers 8th-century engineers 8th-century inventors 8th-century mathematicians Astronomical instrument makers Chinese Buddhists Chinese inventors Chinese mechanical engineers Chinese scholars of Buddhism Chinese science writers Chinese scientific instrument makers Engineers from Henan Hydraulic engineers Mathematicians from Henan Medieval Chinese mathematicians Philosophers from Henan Tang dynasty Buddhist monks Tang dynasty philosophers Technical writers Writers from Puyang
Yi Xing
[ "Astronomy" ]
1,331
[ "Astronomical instrument makers", "Astronomical instruments" ]
981,613
https://en.wikipedia.org/wiki/XMLGUI
XMLGUI is a KDE framework for designing the user interface of an application using XML, built around the idea of actions. In this framework, the programmer designs various actions that their application can implement, with several actions defined for the programmer by the KDE framework, such as opening a file or closing the application. Each action can be associated with various data including icons, explanatory text, and tooltips. The interesting part of this design is that the actions are not inserted into the menus or toolbars by the programmer. Instead, the programmer supplies an XML file, which describes the layout of the menu bar and toolbar. Using this system, it is possible for the user to redesign the user interface of an application without needing to touch the source code of the program in question. In addition, XMLGUI is useful for the KParts component programming interface for KDE, as an application can easily integrate the GUI of a KPart into its own GUI. The Konqueror file manager is the canonical example of this feature. The current version is KXMLGUI, part of KDE Frameworks. Other projects The name is somewhat generic. The Beryl XML GUI was formerly named xmlgui, and there are a dozen other XML-oriented GUI libraries with the same project name. The KDE XMLGUI is one in a long series of projects that have not managed to pin down the term for the resulting programming base. See also Qt Style Sheets External links KDE Guide to the XMLGUI architecture KDE Frameworks KDE Platform User interface markup languages
XMLGUI
[ "Technology" ]
321
[ "KDE Platform", "KDE Frameworks", "Computing platforms" ]
981,616
https://en.wikipedia.org/wiki/POP-2
POP-2 (also called POP2) is a programming language developed around 1970 from the earlier language POP-1 (developed by Robin Popplestone in 1968, originally named COWSEL) by Robin Popplestone and Rod Burstall at the University of Edinburgh. It drew roots from many sources: the languages Lisp and ALGOL 60, and theoretical ideas from Peter J. Landin. It used an incremental compiler, which gave it some of the flexibility of an interpreted language, including allowing new function definitions at run time and modification of function definitions while a program runs (both of which are features of dynamic compilation), without the overhead of an interpreted language. Description Stack POP-2's syntax is ALGOL-like, except that assignments are in reverse order: instead of writing a := 3; one writes 3 -> a; The reason for this is that the language has an explicit notion of an operand stack. Thus, the prior assignment can be written as two separate statements: 3; which evaluates the value 3 and leaves it on the stack, and -> a; which pops the top value off the stack and assigns it to the variable 'a'. Similarly, the function call f(x, y, z); can be written as x, y, z; f(); (commas and semicolons being largely interchangeable) or even x, y, z.f; or (x, y, z).f; Because of the stack-based paradigm, there is no need to distinguish between statements and expressions; thus, the two constructs if a > b then c -> e else d -> e close; and if a > b then c else d close -> e; are equivalent (a dedicated conditional-expression notation had not yet become common). Arrays and doublet functions There are no special language constructs to create arrays or record structures as they are commonly understood: instead, these are created with the aid of special builtin functions, for example one for arrays that can contain any type of item and others for creating restricted types of items. Thus, array element and record field accessors are simply special cases of a doublet function: this is a function that has another function attached as its updater, which is called on the receiving side of an assignment. Thus, if the variable a contains an array, then 3 -> a(4); is equivalent to updater(a)(3, 4); where the builtin function updater returns the updater of the doublet. Of course, updater is itself a doublet and can be used to change the updater component of a doublet. Functions Variables can hold values of any type, including functions, which are first-class objects. Thus, the following constructs function max x y; if x > y then x else y close end; and vars max; lambda x y; if x > y then x else y close end -> max; are equivalent. An interesting operation on functions is partial application (sometimes termed currying). In partial application, some number of the rightmost arguments of the function (which are the last ones placed on the stack before the function is invoked) are frozen to given values, to produce a new function of fewer arguments, which is a closure of the original function. For instance, consider a function for computing general second-degree polynomials: function poly2 x a b c; a * x * x + b * x + c end; This can be bound, for instance, as vars less1squared; poly2(% 1, -2, 1%) -> less1squared; such that the expression less1squared(3) applies the closure of poly2 with three arguments frozen, to the argument 3, returning the square of (3 - 1), which is 4. The application of the partially applied function causes the frozen values (in this case 1, -2, 1) to be added to whatever is already on the stack (in this case 3), after which the original function poly2 is invoked. 
It then uses the top four items on the stack, producing the same result as poly2(3, 1, -2, 1), i.e. 1*3*3 + (-2)*3 + 1. Operator definition In POP-2, it was possible to define new operations (operators in modern terms). vars operation 3 +*; lambda x y; x * x + y * y end -> nonop +* The first line declares a new operation +* with precedence (priority) 3. The second line creates a function f(x,y)=x*x+y*y, and assigns it to the newly declared operation +*. History The original version of POP-2 was implemented on an Elliott 4130 computer in the University of Edinburgh (with only 64 KB RAM, doubled to 128 KB in 1972). POP-2 was ported to the ICT 1900 series on a 1909 at Lancaster University by John Scott in 1968. In the mid-1970s, POP-2 was ported to BESM-6 (POPLAN System). In 1978 Hamish Dewar implemented a version of POP-2 specifically for use by Edinburgh University undergraduates in the AI2 (Artificial Intelligence, 2nd year level) class using the EMAS operating system. This implementation was written from scratch in the Edinburgh programming language, IMP. Later versions were implemented for Computer Technology Limited (CTL) Modular One, PDP-10, and the ICL 1900 series (running the GEORGE operating system). Julian Davies, in Edinburgh, implemented an extended version of POP-2, which he named POP-10, on the PDP-10 computer running TOPS-10. This was the first dialect of POP-2 that treated case as significant in identifier names, used lower case for most system identifiers, and supported long identifiers with more than 8 characters. Shortly after that, a new implementation known as WPOP (for WonderPop) was implemented by Robert Rae and Allan Ramsay in Edinburgh, on a research-council funded project. That version introduced caged address spaces, some compile-time syntactic typing (e.g., for integers and reals), and some pattern matching constructs for use with a variety of data structures. In parallel with that, Steve Hardy at the University of Sussex implemented a subset of POP-2, which he named POP-11 and which ran on a Digital Equipment Corporation (DEC) PDP-11/40 computer. It was originally designed to run on the DEC operating system RSX-11D, in time-shared mode for teaching, but that caused so many problems that an early version of Unix was installed and used instead. That version of Pop-11 was written in Unix assembly language, and code was incrementally compiled to an intermediate bytecode which was interpreted. That port was completed around 1976, and as a result, Pop-11 was used in several places for teaching. To support its teaching function, many of the syntactic features of POP-2 were modified, e.g., replacing the single generic close delimiter with distinct closing keywords, and adding a wider variety of looping constructs with closing brackets that match their opening brackets instead of the one closer used for all loops in POP-2. Pop-11 also introduced a pattern matcher for list structures, making it far easier to teach artificial intelligence (AI) programming. Around 1980, Pop-11 was ported to a VAX-11/780 computer by Steve Hardy and John Gibson, and soon after that it was replaced by a full incremental compiler (producing machine-code instead of an interpreted intermediate code). 
The existence of the compiler and all its subroutines at run time made it possible to support far richer language extensions than are possible with macros, and as a result Pop-11 was used (by Steve Hardy, Chris Mellish and John Gibson) to produce an implementation of Prolog, using the standard syntax of Prolog, and the combined system became known as Poplog, to which Common Lisp and Standard ML were added later. This version was later ported to a variety of machines and operating systems and as a result Pop-11 became the dominant dialect of POP-2, still available in the Poplog system. Around 1986, a new AI company, Cognitive Applications Ltd., collaborated with members of Sussex University to produce a variant of Pop-11 named AlphaPop running on Apple Mac computers, with integrated graphics. This was used for many commercial projects, and to teach AI programming in several universities. The fact that it was implemented in an early dialect of C, using an idiosyncratic compiler, made it very hard to maintain and upgrade to new versions of the Mac operating system. Also, AlphaPop was not "32-bit clean" due to the use of high address bits as tag bits to signify the type of objects, which was incompatible with the use of memory above 8 MB on later Macintoshes. See also POP-11 programming language Poplog programming environment References General POP references Inline External links The Early Development of POP Computers and Thought: A practical Introduction to Artificial Intelligence An Introduction to the POP-2 Programming Language, by P. M. Burstall and J. S. Collins. POP-2 Reference Manual, by P. M. Burstall and J. S. Collins. Functional languages Lisp programming language family History of computing in the United Kingdom Programming languages Programming languages created in 1970 Science and technology in Edinburgh University of Edinburgh University of Sussex
POP-2
[ "Technology" ]
1,956
[ "History of computing", "History of computing in the United Kingdom" ]
981,631
https://en.wikipedia.org/wiki/Failure%20mode%20and%20effects%20analysis
Failure mode and effects analysis (FMEA; often written with "failure modes" in plural) is the process of reviewing as many components, assemblies, and subsystems as possible to identify potential failure modes in a system and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. A FMEA can be a qualitative analysis, but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. It was one of the first highly structured, systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study. A few different types of FMEA analyses exist, such as: Functional Design Process Sometimes FMEA is extended to FMECA (failure mode, effects, and criticality analysis) to indicate that criticality analysis is performed too. FMEA is an inductive reasoning (forward logic) single point of failure analysis and is a core task in reliability engineering, safety engineering and quality engineering. A successful FMEA activity helps identify potential failure modes based on experience with similar products and processes—or based on common physics of failure logic. It is widely used in development and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of those failures on different system levels. Functional analyses are needed as an input to determine correct failure modes, at all system levels, both for functional FMEA or piece-part (hardware) FMEA. A FMEA is used to structure mitigation for risk reduction based on either failure mode or effect severity reduction, or based on lowering the probability of failure or both. The FMEA is in principle a full inductive (forward logic) analysis, however the failure probability can only be estimated or reduced by understanding the failure mechanism. Hence, FMEA may include information on causes of failure (deductive analysis) to reduce the possibility of occurrence by eliminating identified (root) causes. Introduction The FME(C)A is a design tool used to systematically analyze postulated component failures and identify the resultant effects on system operations. The analysis is sometimes characterized as consisting of two sub-analyses, the first being the failure modes and effects analysis (FMEA), and the second, the criticality analysis (CA). Successful development of an FMEA requires that the analyst include all significant failure modes for each contributing element or part in the system. FMEAs can be performed at the system, subsystem, assembly, subassembly or part level. The FMECA should be a living document during development of a hardware design. It should be scheduled and completed concurrently with the design. If completed in a timely manner, the FMECA can help guide design decisions. The usefulness of the FMECA as a design tool and in the decision-making process is dependent on the effectiveness and timeliness with which design problems are identified. Timeliness is probably the most important consideration. In the extreme case, the FMECA would be of little value to the design decision process if the analysis is performed after the hardware is built. 
While the FMECA identifies all part failure modes, its primary benefit is the early identification of all critical and catastrophic subsystem or system failure modes so they can be eliminated or minimized through design modification at the earliest point in the development effort; therefore, the FMECA should be performed at the system level as soon as preliminary design information is available and extended to the lower levels as the detail design progresses. Remark: For more complete scenario modelling another type of reliability analysis may be considered, for example fault tree analysis (FTA); a deductive (backward logic) failure analysis that may handle multiple failures within the item and/or external to the item including maintenance and logistics. It starts at higher functional / system level. An FTA may use the basic failure mode FMEA records or an effect summary as one of its inputs (the basic events). Interface hazard analysis, human error analysis and others may be added for completion in scenario modelling. Functional failure mode and effects analysis The analysis should always be started by someone listing the functions that the design needs to fulfill. Functions are the starting point of a well done FMEA, and using functions as baseline provides the best yield of an FMEA. After all, a design is only one possible solution to perform functions that need to be fulfilled. This way an FMEA can be done on concept designs as well as detail designs, on hardware as well as software, and no matter how complex the design. When performing a FMECA, interfacing hardware (or software) is first considered to be operating within specification. After that it can be extended by consequently using one of the 5 possible failure modes of one function of the interfacing hardware as a cause of failure for the design element under review. This gives the opportunity to make the design robust against function failure elsewhere in the system. In addition, each part failure postulated is considered to be the only failure in the system (i.e., it is a single failure analysis). In addition to the FMEAs done on systems to evaluate the impact lower level failures have on system operation, several other FMEAs are done. Special attention is paid to interfaces between systems and in fact at all functional interfaces. The purpose of these FMEAs is to assure that irreversible physical and/or functional damage is not propagated across the interface as a result of failures in one of the interfacing units. These analyses are done to the piece part level for the circuits that directly interface with the other units. The FMEA can be accomplished without a CA, but a CA requires that the FMEA has previously identified system level critical failures. When both steps are done, the total process is called an FMECA. Ground rules The ground rules of each FMEA include a set of project selected procedures; the assumptions on which the analysis is based; the hardware that has been included and excluded from the analysis and the rationale for the exclusions. The ground rules also describe the indenture level of the analysis (i.e. the level in the hierarchy of the part to the sub-system, sub-system to the system, etc.), the basic hardware status, and the criteria for system and mission success. Every effort should be made to define all ground rules before the FMEA begins; however, the ground rules may be expanded and clarified as the analysis proceeds. 
A typical set of ground rules (assumptions) follows: Only one failure mode exists at a time. All inputs (including software commands) to the item being analyzed are present and at nominal values. All consumables are present in sufficient quantities. Nominal power is available Benefits Major benefits derived from a properly implemented FMECA effort are as follows: It provides a documented method for selecting a design with a high probability of successful operation and safety. A documented uniform method of assessing potential failure mechanisms, failure modes and their impact on system operation, resulting in a list of failure modes ranked according to the seriousness of their system impact and likelihood of occurrence. Early identification of single failure points (SFPS) and system interface problems, which may be critical to mission success and/or safety. They also provide a method of verifying that switching between redundant elements is not jeopardized by postulated single failures. An effective method for evaluating the effect of proposed changes to the design and/or operational procedures on mission success and safety. A basis for in-flight troubleshooting procedures and for locating performance monitoring and fault-detection devices. Criteria for early planning of tests. From the above list, early identifications of SFPS, input to the troubleshooting procedure and locating of performance monitoring / fault detection devices are probably the most important benefits of the FMECA. In addition, the FMECA procedures are straightforward and allow orderly evaluation of the design. History Procedures for conducting FMECA were described in 1949 in US Armed Forces Military Procedures document MIL-P-1629, revised in 1980 as MIL-STD-1629A. By the early 1960s, contractors for the U.S. National Aeronautics and Space Administration (NASA) were using variations of FMECA or FMEA under a variety of names. NASA programs using FMEA variants included Apollo, Viking, Voyager, Magellan, Galileo, and Skylab. The civil aviation industry was an early adopter of FMEA, with the Society for Automotive Engineers (SAE, an organization covering aviation and other transportation beyond just automotive, despite its name) publishing ARP926 in 1967. After two revisions, Aerospace Recommended Practice ARP926 has been replaced by ARP4761, which is now broadly used in civil aviation. During the 1970s, use of FMEA and related techniques spread to other industries. In 1971 NASA prepared a report for the U.S. Geological Survey recommending the use of FMEA in assessment of offshore petroleum exploration. A 1973 U.S. Environmental Protection Agency report described the application of FMEA to wastewater treatment plants. FMEA as application for HACCP on the Apollo Space Program moved into the food industry in general. The automotive industry began to use FMEA by the mid 1970s. The Ford Motor Company introduced FMEA to the automotive industry for safety and regulatory consideration after the Pinto affair. Ford applied the same approach to processes (PFMEA) to consider potential process induced failures prior to launching production. In 1993 the Automotive Industry Action Group (AIAG) first published an FMEA standard for the automotive industry. It is now in its fourth edition. The SAE first published related standard J1739 in 1994. This standard is also now in its fourth edition. In 2019 both method descriptions were replaced by the new AIAG / VDA FMEA handbook. 
It is a harmonization of the former FMEA standards of AIAG, VDA, SAE and other method descriptions. As of 2024, the AIAG / VDA FMEA Handbook is accepted by GM, Ford, Stellantis, Honda NA, BMW, Volkswagen Group, Mercedes-Benz Group AG (formerly Daimler AG), and Daimler Truck. Although initially developed by the military, FMEA methodology is now extensively used in a variety of industries including semiconductor processing, food service, plastics, software, and healthcare. Toyota has taken this one step further with its design review based on failure mode (DRBFM) approach. The method is now supported by the American Society for Quality which provides detailed guides on applying the method. The standard failure modes and effects analysis (FMEA) and failure modes, effects and criticality analysis (FMECA) procedures identify the product failure mechanisms, but may not model them without specialized software. This limits their applicability to provide a meaningful input to critical procedures such as virtual qualification, root cause analysis, accelerated test programs, and to remaining life assessment. To overcome the shortcomings of FMEA and FMECA a failure modes, mechanisms and effect analysis (FMMEA) has often been used. Following the release of IATF 16949:2016, an international quality standard that requires companies to have an organization-specific documented FMEA process, many original equipment manufacturers (OEMs) like Ford are updating their Customer Specific Requirements (CSR) to include the usage of specific FMEA software. For Ford specifically, these requirements had multiple-stage compliance deadlines of July and December of 2022. Basic terms The following covers some basic FMEA terminology. Action priority (AP) The AP replaces the former risk matrix and RPN in the AIAG / VDA FMEA handbook 2019. It makes a statement about the need for additional improvement measures. Failure The loss of a function under stated conditions. Failure mode The specific manner or way by which a failure occurs in terms of failure of the part, component, function, equipment, subsystem, or system under investigation. Depending on the type of FMEA performed, failure mode may be described at various levels of detail. A piece part FMEA will focus on detailed part or component failure modes (such as fully fractured axle or deformed axle, or electrical contact stuck open, stuck short, or intermittent). A functional FMEA will focus on functional failure modes. These may be general (such as no function, over function, under function, intermittent function, or unintended function) or more detailed and specific to the equipment being analyzed. A PFMEA will focus on process failure modes (such as inserting the wrong drill bit). Failure cause and/or mechanism Defects in requirements, design, process, quality control, handling or part application, which are the underlying cause or sequence of causes that initiate a process (mechanism) that leads to a failure mode over a certain time. A failure mode may have more causes. For example; "fatigue or corrosion of a structural beam" or "fretting corrosion in an electrical contact" is a failure mechanism and in itself (likely) not a failure mode. The related failure mode (end state) is a "full fracture of structural beam" or "an open electrical contact". The initial cause might have been "Improper application of corrosion protection layer (paint)" and /or "(abnormal) vibration input from another (possibly failed) system". 
Failure effect Immediate consequences of a failure on operation, or more generally on the needs for the customer / user that should be fulfilled by the function but now is not, or not fully, fulfilled. Indenture levels (bill of material or functional breakdown) An identifier for system level and thereby item complexity. Complexity increases as levels are closer to one. Local effect The failure effect as it applies to the item under analysis. Next higher level effect The failure effect as it applies at the next higher indenture level. End effect The failure effect at the highest indenture level or total system. Detection The means of detection of the failure mode by maintainer, operator or built in detection system, including estimated dormancy period (if applicable). Probability The likelihood of the failure occurring. Risk priority number (RPN) Severity (of the event) × probability (of the event occurring) × detection (probability that the event would not be detected before the user was aware of it). Severity The consequences of a failure mode. Severity considers the worst potential consequence of a failure, determined by the degree of injury, property damage, system damage and/or time lost to repair the failure. Remarks / mitigation / actions Additional info, including the proposed mitigation or actions used to lower a risk or justify a risk level or scenario. Example of FMEA worksheet Probability (P) It is necessary to look at the cause of a failure mode and the likelihood of occurrence. This can be done by analysis, calculations / FEM, looking at similar items or processes and the failure modes that have been documented for them in the past. A failure cause is looked upon as a design weakness. All the potential causes for a failure mode should be identified and documented. This should be in technical terms. Examples of causes are: Human errors in handling, Manufacturing induced faults, Fatigue, Creep, Abrasive wear, erroneous algorithms, excessive voltage or improper operating conditions or use (depending on the used ground rules). A failure mode may be given a Probability Ranking with a defined number of levels. This field is also often referred to as an Occurrence Rating. For a piece part FMEA, quantitative probability may be calculated from the results of a reliability prediction analysis and the failure mode ratios from a failure mode distribution catalog, such as RAC FMD-97. This method allows a quantitative FTA to use the FMEA results to verify that undesired events meet acceptable levels of risk. Severity (S) Determine the Severity for the worst-case scenario adverse end effect (state). It is convenient to write these effects down in terms of what the user might see or experience in terms of functional failures. Examples of these end effects are: full loss of function x, degraded performance, functions in reversed mode, too late functioning, erratic functioning, etc. Each end effect is given a Severity number (S) from, say, I (no effect) to V (catastrophic), based on cost and/or loss of life or quality of life. These numbers prioritize the failure modes (together with probability and detectability). Below a typical classification is given. Other classifications are possible. See also hazard analysis. Detection (D) The means or method by which a failure is detected, isolated by operator and/or maintainer and the time it may take. This is important for maintainability control (availability of the system) and it is especially important for multiple failure scenarios. 
This may involve dormant failure modes (e.g. No direct system effect, while a redundant system / item automatically takes over or when the failure only is problematic during specific mission or system states) or latent failures (e.g. deterioration failure mechanisms, like metal growing a crack, but not of critical length). It should be made clear how the failure mode or cause can be discovered by an operator under normal system operation or if it can be discovered by the maintenance crew by some diagnostic action or automatic built in system test. A dormancy and/or latency period may be entered. Dormancy or latency period The average time that a failure mode may be undetected may be entered if known. For example: Seconds, auto detected by maintenance computer 8 hours, detected by turn-around inspection 2 months, detected by scheduled maintenance block X 2 years, detected by overhaul task x Indication If the undetected failure allows the system to remain in a safe / working state, a second failure situation should be explored to determine whether or not an indication will be evident to all operators and what corrective action they may or should take. Indications to the operator should be described as follows: Normal. An indication that is evident to an operator when the system or equipment is operating normally. Abnormal. An indication that is evident to an operator when the system has malfunctioned or failed. Incorrect. An erroneous indication to an operator due to the malfunction or failure of an indicator (i.e., instruments, sensing devices, visual or audible warning devices, etc.). PERFORM DETECTION COVERAGE ANALYSIS FOR TEST PROCESSES AND MONITORING (From ARP4761 Standard): This type of analysis is useful to determine how effective various test processes are at the detection of latent and dormant faults. The method used to accomplish this involves an examination of the applicable failure modes to determine whether or not their effects are detected, and to determine the percentage of failure rate applicable to the failure modes which are detected. The possibility that the detection means may itself fail latently should be accounted for in the coverage analysis as a limiting factor (i.e., coverage cannot be more reliable than the detection means availability). Inclusion of the detection coverage in the FMEA can lead to each individual failure that would have been one effect category now being a separate effect category due to the detection coverage possibilities. Another way to include detection coverage is for the FTA to conservatively assume that no holes in coverage due to latent failure in the detection method affect detection of all failures assigned to the failure effect category of concern. The FMEA can be revised if necessary for those cases where this conservative assumption does not allow the top event probability requirements to be met. After these three basic steps the Risk level may be provided. Risk level (P×S) and (D) Risk is the combination of end effect probability and severity where probability and severity includes the effect on non-detectability (dormancy time). This may influence the end effect probability of failure or the worst case effect Severity. The exact calculation may not be easy in all cases, such as those where multiple scenarios (with multiple events) are possible and detectability / dormancy plays a crucial role (as for redundant systems). In that case fault tree analysis and/or event trees may be needed to determine exact probability and risk levels. 
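To make the ranking arithmetic concrete, here is a small illustrative C sketch that combines severity, occurrence, and detection rankings into the RPN defined above and flags rows above a screening threshold; the example failure modes, the 1-10 scales, and the threshold value are invented for illustration and are not taken from any FMEA standard:

#include <stdio.h>

// Illustrative FMEA worksheet row: rankings are ordinal values on a 1-10 scale
// (10 = worst severity, most frequent occurrence, hardest to detect).
struct fmea_row {
    const char *failure_mode;   // hypothetical example entries
    int severity;               // S
    int occurrence;             // O (probability ranking)
    int detection;              // D (higher = less likely to be detected)
};

// Classic risk priority number: RPN = S x O x D.
// Because the inputs are ordinal rankings, the product is only a rough
// screening aid (see the Limitations section).
static int rpn(const struct fmea_row *r)
{
    return r->severity * r->occurrence * r->detection;
}

int main(void)
{
    struct fmea_row rows[] = {
        { "Seal leaks under vibration", 7, 4, 6 },
        { "Connector corrodes",         5, 6, 3 },
        { "Sensor reads out of range",  9, 2, 2 },
    };
    const int review_threshold = 100;   // invented screening threshold

    for (unsigned i = 0; i < sizeof rows / sizeof rows[0]; i++) {
        int value = rpn(&rows[i]);
        printf("%-28s RPN = %3d%s\n", rows[i].failure_mode, value,
               value >= review_threshold ? "  -> review mitigation" : "");
    }
    return 0;
}

Because the inputs are ordinal rankings, such a product is only a screening aid; the shortcomings of the RPN and the newer action priority (AP) approach are discussed below under Limitations.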
Preliminary risk levels can be selected based on a risk matrix like shown below, based on Mil. Std. 882. The higher the risk level, the more justification and mitigation is needed to provide evidence and lower the risk to an acceptable level. High risk should be indicated to higher level management, who are responsible for final decision-making. After this step the FMEA has become like a FMECA. Timing FMEA should be used: When a product or process is being designed (or redesigned) When an existing product or process is applied in a novel way Before developing control plans or procedures for a new or redesigned process When trying to improve an existing product, process, or service When analyzing failures for an existing product, process, or service Periodically and regularly throughout the lifetime of the product, process, or service The FMEA should be updated whenever: A new cycle begins (new product/process) Changes are made to the operating conditions A change is made in the design New regulations are instituted Customer feedback indicates a problem Uses Development of system requirements that minimize the likelihood of failures. Development of designs and test systems to ensure that the failures have been eliminated or the risk is reduced to acceptable level. Development and evaluation of diagnostic systems. To help with design choices (trade-off analysis). Advantages Catalyst for teamwork and idea exchange between functions Collect information to reduce future failures, capture engineering knowledge Early identification and elimination of potential failure modes Emphasize problem prevention Fulfill legal requirements (product liability) Improve company image and competitiveness Improve production yield Improve the quality, reliability, and safety of a product/process Increase user satisfaction Maximize profit Minimize late changes and associated cost Reduce impact on company profit margin Reduce system development time and cost Reduce the possibility of same kind of failure in future Reduce the potential for warranty concerns Limitations While FMEA identifies important hazards in a system, its results may not be comprehensive and the approach has limitations. In the healthcare context, FMEA and other risk assessment methods, including SWIFT (Structured What If Technique) and retrospective approaches, have been found to have limited validity when used in isolation. Challenges around scoping and organisational boundaries appear to be a major factor in this lack of validity. If used as a top-down tool, FMEA may only identify major failure modes in a system. Fault tree analysis (FTA) is better suited for "top-down" analysis. When used as a bottom-up tool FMEA can augment or complement FTA and identify many more causes and failure modes resulting in top-level symptoms. It is not able to discover complex failure modes involving multiple failures within a subsystem, or to report expected failure intervals of particular failure modes up to the upper level subsystem or system. Additionally, the multiplication of the severity, occurrence and detection rankings may result in rank reversals, where a less serious failure mode receives a higher RPN than a more serious failure mode. The reason for this is that the rankings are ordinal scale numbers, and multiplication is not defined for ordinal numbers. The ordinal rankings only say that one ranking is better or worse than another, but not by how much. 
For instance, a ranking of "2" may not be twice as severe as a ranking of "1", or an "8" may not be twice as severe as a "4", but multiplication treats them as though they are. See Level of measurement for further discussion. Various solutions to this problem have been proposed, e.g., the use of fuzzy logic as an alternative to the classic RPN model. In the new AIAG / VDA FMEA handbook (2019) the RPN approach was replaced by the AP (action priority). The FMEA worksheet is hard to produce, hard to understand and read, as well as hard to maintain. The use of neural network techniques to cluster and visualise failure modes was suggested starting from 2010. An alternative approach is to combine the traditional FMEA table with a set of bow-tie diagrams. The diagrams provide a visualisation of the chains of cause and effect, while the FMEA table provides the detailed information about specific events. Types Functional: before design solutions are provided (or only at a high level), functions can be evaluated for potential functional failure effects. General mitigations ("design to" requirements) can be proposed to limit the consequences of functional failures or limit the probability of occurrence in this early development. It is based on a functional breakdown of a system. This type may also be used for software evaluation. Concept design / hardware: analysis of systems or subsystems in the early design concept stages to analyse the failure mechanisms and lower level functional failures, especially for comparing different concept solutions in more detail. It may be used in trade-off studies. Detailed design / hardware: analysis of products prior to production. These are the most detailed (in MIL 1629 called Piece-Part or Hardware FMEA) FMEAs and are used to identify any possible hardware (or other) failure mode up to the lowest part level. It should be based on the hardware breakdown (e.g. the BoM = bill of materials). Any failure effect severity, failure prevention (mitigation), failure detection and diagnostics may be fully analyzed in this FMEA. Process: analysis of manufacturing and assembly processes. Both quality and reliability may be affected by process faults. The input for this FMEA is, amongst others, a work process / task breakdown. See also References Japanese business terms Lean manufacturing Reliability engineering Systems analysis Reliability analysis Quality control tools
Failure mode and effects analysis
[ "Engineering" ]
5,256
[ "Systems engineering", "Reliability analysis", "Lean manufacturing", "Reliability engineering" ]
981,643
https://en.wikipedia.org/wiki/Pulse-repetition%20frequency
The pulse-repetition frequency (PRF) is the number of pulses of a repeating signal in a specific time unit. The term is used within a number of technical disciplines, notably radar. In radar, a radio signal of a particular carrier frequency is turned on and off; the term "frequency" refers to the carrier, while the PRF refers to the number of switches. Both are measured in terms of cycle per second, or hertz. The PRF is normally much lower than the frequency. For instance, a typical World War II radar like the Type 7 GCI radar had a basic carrier frequency of 209 MHz (209 million cycles per second) and a PRF of 300 or 500 pulses per second. A related measure is the pulse width, the amount of time the transmitter is turned on during each pulse. After producing a brief pulse of radio signal, the transmitter is turned off in order for the receiver units to detect the reflections of that signal off distant targets. Since the radio signal has to travel out to the target and back again, the required inter-pulse quiet period is a function of the radar's desired range. Longer periods are required for longer range signals, requiring lower PRFs. Conversely, higher PRFs produce shorter maximum ranges, but broadcast more pulses, and thus radio energy, in a given time. This creates stronger reflections that make detection easier. Radar systems must balance these two competing requirements. Using older electronics, PRFs were generally fixed to a specific value, or might be switched among a limited set of possible values. This gives each radar system a characteristic PRF, which can be used in electronic warfare to identify the type or class of a particular platform such as a ship or aircraft, or in some cases, a particular unit. Radar warning receivers in aircraft include a library of common PRFs which can identify not only the type of radar, but in some cases the mode of operation. This allowed pilots to be warned when an SA-2 SAM battery had "locked on", for instance. Modern radar systems are generally able to smoothly change their PRF, pulse width and carrier frequency, making identification much more difficult. Sonar and lidar systems also have PRFs, as does any pulsed system. In the case of sonar, the term pulse-repetition rate (PRR) is more common, although it refers to the same concept. Introduction Electromagnetic (e.g. radio or light) waves are conceptually pure single frequency phenomena while pulses may be mathematically thought of as composed of a number of pure frequencies that sum and nullify in interactions that create a pulse train of the specific amplitudes, PRRs, base frequencies, phase characteristics, et cetera (See Fourier Analysis). The first term (PRF) is more common in device technical literature (Electrical Engineering and some sciences), and the latter (PRR) more commonly used in military-aerospace terminology (especially United States armed forces terminologies) and equipment specifications such as training and technical manuals for radar and sonar systems. The reciprocal of PRF (or PRR) is called the pulse-repetition time (PRT), pulse-repetition interval (PRI), or inter-pulse period (IPP), which is the elapsed time from the beginning of one pulse to the beginning of the next pulse. The IPP term is normally used when referring to the quantity of PRT periods to be processed digitally. Each PRT having a fixed number of range gates, but not all of them being used. For example, the APY-1 radar used 128 IPP's with a fixed 50 range gates, producing 128 Doppler filters using an FFT. 
The number of range gates differed for each of the five PRFs, all being less than 50. Within radar technology PRF is important since it determines the maximum target range (Rmax) and maximum Doppler velocity (Vmax) that can be accurately determined by the radar. Conversely, a high PRR/PRF can enhance target discrimination of nearer objects, such as a periscope or fast moving missile. This leads to use of low PRRs for search radar, and very high PRFs for fire control radars. Many dual-purpose and navigation radars—especially naval designs with variable PRRs—allow a skilled operator to adjust PRR to enhance and clarify the radar picture—for example in bad sea states where wave action generates false returns, and in general for less clutter, or perhaps a better return signal off a prominent landscape feature (e.g., a cliff). Definition Pulse-repetition frequency (PRF) is the number of times a pulsed activity occurs every second. This is similar to the cycles per second used to describe other types of waveforms. PRF is inversely proportional to the time period (the pulse-repetition time) of a pulsed wave. PRF is usually associated with pulse spacing, which is the distance that the pulse travels before the next pulse occurs. Physics PRF is crucial to perform measurements for certain physics phenomena. For example, a tachometer may use a strobe light with an adjustable PRF to measure rotational velocity. The PRF for the strobe light is adjusted upward from a low value until the rotating object appears to stand still. The PRF of the tachometer would then match the speed of the rotating object. Other types of measurements involve distance using the delay time for reflected echo pulses from light, microwaves, and sound transmissions. Measurement PRF is crucial for systems and devices that measure distance, such as radar, laser range finders, and sonar. Different PRFs allow systems to perform very different functions. A radar system uses a radio frequency electromagnetic signal reflected from a target to determine information about that target. PRF is required for radar operation. This is the rate at which transmitter pulses are sent into air or space. Range ambiguity A radar system determines range through the time delay between pulse transmission and reception by the relation: range = c · Δt / 2, where c is the speed of light and Δt is the delay between transmission and reception of the echo. For accurate range determination a pulse must be transmitted and reflected before the next pulse is transmitted. This gives rise to the maximum unambiguous range limit: Rmax = c · PRT / 2 = c / (2 · PRF), where PRT is the pulse-repetition time. The maximum range also defines a range ambiguity for all detected targets. Because of the periodic nature of pulsed radar systems, it is impossible for such radar systems to determine the difference between targets separated by integer multiples of the maximum range using a single PRF. More sophisticated radar systems avoid this problem through the use of multiple PRFs either simultaneously on different frequencies or on a single frequency with a changing PRT. The range ambiguity resolution process is used to identify true range when PRF is above this limit. Low PRF Systems using PRF below 3 kHz are considered low PRF because direct range can be measured to a distance of at least 50 km. Radar systems using low PRF typically produce unambiguous range. Unambiguous Doppler processing becomes an increasing challenge due to coherency limitations as PRF falls below 3 kHz. For example, an L-band radar with a 500 Hz pulse rate produces ambiguous velocity above 75 m/s (170 mile/hour), while detecting true range up to 300 km. 
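As a rough worked illustration of the two relations just given (Rmax = c / (2 · PRF) and unambiguous velocity = c · PRF / (2 · f), where f is the carrier frequency), the following C sketch reproduces the 500 Hz L-band example; the function name and the assumed 1 GHz carrier are choices of this illustration rather than parameters of any particular radar:

#include <stdio.h>

// Illustrative only: print the maximum unambiguous range and velocity for a
// given PRF and carrier frequency, using R_max = c / (2 * PRF) and
// v_max = wavelength * PRF / 2 = c * PRF / (2 * f).
static void print_unambiguous_limits(double prf_hz, double carrier_hz)
{
    const double c = 3.0e8;                          // speed of light, m/s
    double r_max = c / (2.0 * prf_hz);               // maximum unambiguous range, m
    double v_max = c * prf_hz / (2.0 * carrier_hz);  // maximum unambiguous velocity, m/s
    printf("PRF %.0f Hz at %.2f GHz: R_max = %.0f km, v_max = %.0f m/s\n",
           prf_hz, carrier_hz / 1e9, r_max / 1000.0, v_max);
}

int main(void)
{
    // L-band example from the text: 500 Hz PRF, ~1 GHz carrier
    // -> roughly 300 km unambiguous range and 75 m/s unambiguous velocity.
    print_unambiguous_limits(500.0, 1.0e9);
    return 0;
}

Doubling the PRF halves the unambiguous range while doubling the unambiguous velocity, which is the trade-off behind the low, medium, and high PRF regimes described next.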
Such a low-PRF combination is appropriate for civilian aircraft radar and weather radar. Low-PRF radars have reduced sensitivity in the presence of low-velocity clutter that interferes with aircraft detection near terrain. Moving target indication is generally required for acceptable performance near terrain, but this introduces radar scalloping issues that complicate the receiver. Low-PRF radars intended for aircraft and spacecraft detection are heavily degraded by weather phenomena, which cannot be compensated for using moving target indication. Medium PRF Range and velocity can both be identified using medium PRF, but neither one can be identified directly. Medium PRF is from 3 kHz to 30 kHz, which corresponds with radar range from 5 km to 50 km. This is the ambiguous range, which is much smaller than the maximum range. Range ambiguity resolution is used to determine true range in medium PRF radar. Medium PRF is used with Pulse-Doppler radar, which is required for look-down/shoot-down capability in military systems. Doppler radar return is generally not ambiguous until velocity exceeds the speed of sound. A technique called ambiguity resolution is required to identify true range and speed. Doppler signals fall between 1.5 kHz and 15 kHz, which is audible, so audio signals from medium-PRF radar systems can be used for passive target classification. For example, an L band radar system using a PRF of 10 kHz with a duty cycle of 3.3% can identify true range to a distance of 450 km (30 * C / 10,000 km/s). This is the instrumented range. The unambiguous velocity of an L-band radar using a PRF of 10 kHz would be 1,500 m/s (3,300 mile/hour) (10,000 x C / (2 x 10^9)). True velocity can be found for objects moving under 45,000 m/s if the band pass filter admits the signal (1,500/0.033). Medium PRF has unique radar scalloping issues that require redundant detection schemes. High PRF Systems using PRF above 30 kHz are better known as interrupted continuous-wave (ICW) radar because direct velocity can be measured up to 4.5 km/s at L band, but range resolution becomes more difficult. High PRF is limited to systems that require close-in performance, like proximity fuses and law enforcement radar. For example, if 30 samples are taken during the quiescent phase between transmit pulses using a 30 kHz PRF, then true range can be determined to a maximum of 150 km using 1 microsecond samples (30 x C / 30,000 km/s). Reflectors beyond this range might be detectable, but the true range cannot be identified. It becomes increasingly difficult to take multiple samples between transmit pulses at these pulse frequencies, so range measurements are limited to short distances. Sonar Sonar systems operate much like radar, except that the medium is liquid or air, and the frequency of the signal is either audio or ultra-sonic. As with radar, lower frequencies propagate relatively higher energies over longer distances with less resolving ability. Higher frequencies, which damp out faster, provide increased resolution of nearby objects. Signals propagate at the speed of sound in the medium (almost always water), and maximum PRF depends upon the size of the object being examined. For example, the speed of sound in water is 1,497 m/s, and the human body is about 0.5 m thick, so the PRF for ultrasound images of the human body should be less than about 2 kHz (1,497/0.5). As another example, ocean depth is approximately 2 km, so sound takes over a second to return from the sea floor. 
Sonar is a very slow technology with very low PRF for this reason. Laser Light waves can be used as radar frequencies, in which case the system is known as lidar. This is short for "LIght Detection And Ranging," similar to the original meaning of the initialism "RADAR," which was RAdio Detection And Ranging. Both have since become commonly used English words, and are therefore acronyms rather than initialisms. Laser range finders and other light-signal range finders operate just like radar, at much higher frequencies. Non-laser light detection is utilized extensively in automated machine control systems (e.g. electric eyes controlling a garage door, conveyor sorting gates, etc.), and those that use pulse-rate detection and ranging are, at heart, the same type of system as a radar, without the bells and whistles of the human interface. Unlike lower radio signal frequencies, light does not bend around the curve of the earth or reflect off the ionosphere like C-band search radar signals, and so lidar is useful only in line of sight applications like higher frequency radar systems. See also Radar Pulse-Doppler radar Weather radar References Radar theory Temporal rates
Pulse-repetition frequency
[ "Physics" ]
2,470
[ "Temporal quantities", "Temporal rates", "Physical quantities" ]
981,655
https://en.wikipedia.org/wiki/Integer%20square%20root
In number theory, the integer square root (isqrt) of a non-negative integer is the non-negative integer which is the greatest integer less than or equal to the square root of , For example, Introductory remark Let and be non-negative integers. Algorithms that compute (the decimal representation of) run forever on each input which is not a perfect square. Algorithms that compute do not run forever. They are nevertheless capable of computing up to any desired accuracy . Choose any and compute . For example (setting ): Compare the results with It appears that the multiplication of the input by gives an accuracy of decimal digits. To compute the (entire) decimal representation of , one can execute an infinite number of times, increasing by a factor at each pass. Assume that in the next program () the procedure is already defined and — for the sake of the argument — that all variables can hold integers of unlimited magnitude. Then will print the entire decimal representation of . // Print sqrt(y), without halting void sqrtForever(unsigned int y) { unsigned int result = isqrt(y); printf("%d.", result); // print result, followed by a decimal point while (true) // repeat forever ... { y = y * 100; // theoretical example: overflow is ignored result = isqrt(y); printf("%d", result % 10); // print last digit of result } } The conclusion is that algorithms which compute are computationally equivalent to algorithms which compute . Basic algorithms The integer square root of a non-negative integer can be defined as For example, because . Algorithm using linear search The following C programs are straightforward implementations. Linear search using addition In the program above (linear search, ascending) one can replace multiplication by addition, using the equivalence // Integer square root // (linear search, ascending) using addition unsigned int isqrt(unsigned int y) { unsigned int L = 0; unsigned int a = 1; unsigned int d = 3; while (a <= y) { a = a + d; // (a + 1) ^ 2 d = d + 2; L = L + 1; } return L; } Algorithm using binary search Linear search sequentially checks every value until it hits the smallest where . A speed-up is achieved by using binary search instead. The following C-program is an implementation. // Integer square root (using binary search) unsigned int isqrt(unsigned int y) { unsigned int L = 0; unsigned int M; unsigned int R = y + 1; while (L != R - 1) { M = (L + R) / 2; if (M * M <= y) L = M; else R = M; } return L; } Numerical example For example, if one computes using binary search, one obtains the sequence This computation takes 21 iteration steps, whereas linear search (ascending, starting from ) needs steps. Algorithm using Newton's method One way of calculating and is to use Heron's method, which is a special case of Newton's method, to find a solution for the equation , giving the iterative formula The sequence converges quadratically to as . Stopping criterion One can prove that is the largest possible number for which the stopping criterion ensures in the algorithm above. In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors. Domain of computation Although is irrational for many , the sequence contains only rational terms when is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate , a fact which has some theoretical advantages. 
Using only integer division For computing for very large integers n, one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating point representations of large numbers unnecessary. It is equivalent to using the iterative formula By using the fact that one can show that this will reach within a finite number of iterations. In the original version, one has for , and for . So in the integer version, one has and until the final solution is reached. For the final solution , one has and , so the stopping criterion is . However, is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that is a fixed point if and only if is not a perfect square. If is a perfect square, the sequence ends up in a period-two cycle between and instead of converging. Example implementation in C // Square root of integer unsigned int int_sqrt(unsigned int s) { // Zero yields zero // One yields one if (s <= 1) return s; // Initial estimate (must be too high) unsigned int x0 = s / 2; // Update unsigned int x1 = (x0 + s / x0) / 2; while (x1 < x0) // Bound check { x0 = x1; x1 = (x0 + s / x0) / 2; } return x0; } Numerical example For example, if one computes the integer square root of using the algorithm above, one obtains the sequence In total 13 iteration steps are needed. Although Heron's method converges quadratically close to the solution, less than one bit precision per iteration is gained at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm. When a fast computation for the integer part of the binary logarithm or for the bit-length is available (like e.g. std::bit_width in C++20), one should better start at which is the least power of two bigger than . In the example of the integer square root of , , , and the resulting sequence is In this case only four iteration steps are needed. Digit-by-digit algorithm The traditional pen-and-paper algorithm for computing the square root is based on working from higher digit places to lower, and as each new digit pick the largest that will still yield a square . If stopping after the one's place, the result computed will be the integer square root. Using bitwise operations If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With * being multiplication, << being left shift, and >> being logical right shift, a recursive algorithm to find the integer square root of any natural number is: def integer_sqrt(n: int) -> int: assert n >= 0, "sqrt works for only non-negative inputs" if n < 2: return n # Recursive call: small_cand = integer_sqrt(n >> 2) << 1 large_cand = small_cand + 1 if large_cand * large_cand > n: return small_cand else: return large_cand # equivalently: def integer_sqrt_iter(n: int) -> int: assert n >= 0, "sqrt works for only non-negative inputs" if n < 2: return n # Find the shift amount. See also [[find first set]], # shift = ceil(log2(n) * 0.5) * 2 = ceil(ffs(n) * 0.5) * 2 shift = 2 while (n >> shift) != 0: shift += 2 # Unroll the bit-setting loop. result = 0 while shift >= 0: result = result << 1 large_cand = ( result + 1 ) # Same as result ^ 1 (xor), because the last bit is always 0. 
if large_cand * large_cand <= n >> shift: result = large_cand shift -= 2 return result Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits which makes a general multiplication step unnecessary. See for an example. Karatsuba square root algorithm The Karatsuba square root algorithm is a combination of two functions: a public function, which returns the integer square root of the input, and a recursive private function, which does the majority of the work. The public function normalizes the actual input, passes the normalized input to the private function, denormalizes the result of the private function, and returns that. The private function takes a normalized input, divides the input bits in half, passes the most-significant half of the input recursively to the private function, and performs some integer operations on the output of that recursive call and the least-significant half of the input to get the normalized output, which it returns. For big-integers of "50 to 1,000,000 digits", Burnikel-Ziegler Karatsuba division and Karatsuba multiplication are recommended by the algorithm's creator. An example algorithm for 64-bit unsigned integers is below. The algorithm: Normalizes the input inside . Calls , which requires a normalized input. Calls with the most-significant half of the normalized input's bits, which will already be normalized as the most-significant bits remain the same. Continues on recursively until there's an algorithm that's faster when the number of bits is small enough. then takes the returned integer square root and remainder to produce the correct results for the given normalized . then denormalizes the result. /// Performs a Karatsuba square root on a `u64`. pub fn u64_isqrt(mut n: u64) -> u64 { if n <= u32::MAX as u64 { // If `n` fits in a `u32`, let the `u32` function handle it. return u32_isqrt(n as u32) as u64; } else { // The normalization shift satisfies the Karatsuba square root // algorithm precondition "a₃ ≥ b/4" where a₃ is the most // significant quarter of `n`'s bits and b is the number of // values that can be represented by that quarter of the bits. // // b/4 would then be all 0s except the second most significant // bit (010...0) in binary. Since a₃ must be at least b/4, a₃'s // most significant bit or its neighbor must be a 1. Since a₃'s // most significant bits are `n`'s most significant bits, the // same applies to `n`. // // The reason to shift by an even number of bits is because an // even number of bits produces the square root shifted to the // left by half of the normalization shift: // // sqrt(n << (2 * p)) // sqrt(2.pow(2 * p) * n) // sqrt(2.pow(2 * p)) * sqrt(n) // 2.pow(p) * sqrt(n) // sqrt(n) << p // // Shifting by an odd number of bits leaves an ugly sqrt(2) // multiplied in. const EVEN_MAKING_BITMASK: u32 = !1; let normalization_shift = n.leading_zeros() & EVEN_MAKING_BITMASK; n <<= normalization_shift; let (s, _) = u64_normalized_isqrt_rem(n); let denormalization_shift = normalization_shift / 2; return s >> denormalization_shift; } } /// Performs a Karatsuba square root on a normalized `u64`, returning the square /// root and remainder. 
fn u64_normalized_isqrt_rem(n: u64) -> (u64, u64) { const HALF_BITS: u32 = u64::BITS >> 1; const QUARTER_BITS: u32 = u64::BITS >> 2; const LOWER_HALF_1_BITS: u64 = (1 << HALF_BITS) - 1; debug_assert!( n.leading_zeros() <= 1, "Input is not normalized: {n} has {} leading zero bits, instead of 0 or 1.", n.leading_zeros() ); let hi = (n >> HALF_BITS) as u32; let lo = n & LOWER_HALF_1_BITS; let (s_prime, r_prime) = u32_normalized_isqrt_rem(hi); let numerator = ((r_prime as u64) << QUARTER_BITS) | (lo >> QUARTER_BITS); let denominator = (s_prime as u64) << 1; let q = numerator / denominator; let u = numerator % denominator; let mut s = (s_prime << QUARTER_BITS) as u64 + q; let mut r = (u << QUARTER_BITS) | (lo & ((1 << QUARTER_BITS) - 1)); let q_squared = q * q; if r < q_squared { r += 2 * s - 1; s -= 1; } r -= q_squared; return (s, r); } In programming languages Some programming languages dedicate an explicit operation to the integer square root calculation in addition to the general case or can be extended by libraries to this end. See also Methods of computing square roots Notes References External links Number theoretic algorithms Number theory Root-finding algorithms
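As a quick cross-check of the integer-division Newton (Heron) iteration described above, the following sketch compares a straightforward transcription of it against Python's built-in math.isqrt. The function name and the power-of-two starting estimate are choices made here for illustration, following the bit-length suggestion in the text rather than any one of the article's listed implementations.

```python
import math

def isqrt_heron(n: int) -> int:
    """Integer square root via Heron's method with integer division only:
    start from a power of two exceeding sqrt(n), then iterate
    x <- (x + n // x) // 2 until the estimate stops decreasing."""
    if n < 2:
        return n
    x = 1 << ((n.bit_length() + 1) // 2)  # power of two greater than sqrt(n)
    while True:
        y = (x + n // x) // 2
        if y >= x:
            return x
        x = y

# Cross-check against the standard library over a modest range.
assert all(isqrt_heron(n) == math.isqrt(n) for n in range(100_000))
print("isqrt_heron agrees with math.isqrt on 0..99_999")
```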
Integer square root
[ "Mathematics" ]
3,002
[ "Discrete mathematics", "Number theory" ]
981,728
https://en.wikipedia.org/wiki/James%20Burnett%2C%20Lord%20Monboddo
James Burnett, Lord Monboddo (baptised 25 October 1714 – 26 May 1799) was a Scottish judge, scholar of linguistic evolution, philosopher and deist. He is most famous today as a founder of modern comparative historical linguistics. In 1767 he became a judge in the Court of Session. As such, Burnett adopted an honorary title based on the name of his father's estate and family seat, Monboddo House. Monboddo was one of a number of scholars involved at the time in development of early concepts of biological evolution. Some credit him with anticipating in principle the idea of natural selection that was read by (and acknowledged in the writings of) Erasmus Darwin. Charles Darwin read the works of his grandfather Erasmus and later developed the ideas into a scientific theory. Early years James Burnett was born in 1714 at Monboddo House in Kincardineshire, Scotland. After his primary education at the parish school of Laurencekirk, he studied at Marischal College, Aberdeen, from where he was graduated in 1729. He then studied Civil Law at the University of Groningen for three years. He returned to Scotland to stay in Edinburgh in 1736 on the day of the Porteous Riots and got caught in the crowds, witnessing the lynching of Captain John Porteous on his first night in the city. He took examination in Civil Law at Edinburgh University in 1737 and was admitted to the Faculty of Advocates. Burnett married Elizabethe Farquharson and they had two daughters and a son. The younger daughter Elizabeth Burnett was an Edinburgh celebrity, known for her beauty and amiability, but who died of consumption (tuberculosis) at the age of 24. Burnett's friend the Scottish poet Robert Burns, had a romantic interest in Elizabeth and wrote a poem, "Elegy on The Late Miss Burnet of Monboddo", praising her beauty, which became her elegy. Monboddo's early work in practising law found him in a landmark piece of litigation of his time, known as the Douglas "cause," or case. The matter involved the inheritance standing of a young heir, Archibald James Edward Douglas, 1st Baron Douglas, and took on the form of a mystery novel of the era, with a complex web of events spanning Scotland, France and England. Burnett, as the solicitor for the young Douglas heir, was victorious after years of legal battle and appeals. Later years From 1754 until 1767 Monboddo was one of a number of distinguished proprietors of the Canongate Theatre. He clearly enjoyed this endeavour even when some of his fellow judges pointed out that the activity might cast a shadow over his sombre image as jurist. Here he had occasion to further associate with David Hume who was a principal actor in one of the plays. He had actually met Hume earlier when Monboddo was a curator of the Advocates Library and David Hume served as keeper of that library for several years while he wrote his history. From 1769 until 1775 John Hunter acted as his personal secretary. In the era after Monboddo was appointed to Justice of the high court, he organised "learned suppers" at his house on 13 St John Street, off the Canongate in Edinburgh's Old Town, where he discussed and lectured about his theories. Local intellectuals were invited to attend attic repasts. Henry Home, Lord Kames was conspicuously absent from such socialising; while Kames and Monboddo served on the high court at the same time and had numerous interactions, they were staunch intellectual rivals. 
Monboddo rode to London on horseback each year and visited Hampton Court as well as other intellectuals of the era; the King himself was fond of Monboddo's colourful discussions. Monboddo died at home 13 St John Street in the Canongate district of Edinburgh on 26 May 1799 and is buried in Greyfriars Kirkyard in Edinburgh along with his daughter Elizabeth where they have unmarked graves in the burial enclosure of Patrick Grant, Lord Elchies (within the non-public section known as the Covenanters Prison). Historical linguistics In The Origin and Progress of Language, originally published in six volumes from 1774 to 1792, Burnett analysed the structure of languages and argued that humans had evolved language skills in response to changing environments and social structures. Burnett was the first to note that some languages create lengthy words for rather simple concepts. He reasoned that in early languages there was an imperative for clarity so redundancy was built in and seemingly unnecessary syllables added. He concluded that this form of language evolved when clear communication might be the determinant of avoiding danger. Monboddo studied languages of peoples colonised by Europeans, including those of the Carib, Eskimo, Huron, Algonquian, Peruvian (Quechua?) and Tahitian peoples. He saw the preponderance of polysyllabic words, whereas some of his predecessors had dismissed these languages as a series of monosyllabic grunts. He also observed that in Huron (or Wyandot) the words for very similar objects are astoundingly different. This fact led Monboddo to perceive that these people needed to communicate reliably regarding a more limited number of subjects than in modern civilisations, which led to the polysyllabic and redundant nature of many words. He also came up with the idea that these languages are generally vowel-rich and that correspondingly, languages such as German and English are vowel-starved. According to Burnett, this disparity partially arises from the greater vocabulary of Northern European languages and the decreased need for polysyllabic content. Monboddo also popularized Marcus Zuerius van Boxhorn's 17th-century theory of a "Skythian" proto-language, traced the evolution of modern European languages and gave particularly great effort to understanding the ancient Greek language, in which he was proficient. He argued that Greek is the most perfect language ever established because of its complex structure and tonality, rendering it capable of expressing a wide gamut of nuances. Monboddo was the first to formulate what is now known as the single-origin hypothesis, the theory that all human origin was from a single region of the earth; he reached this conclusion by reasoning from linguistic evolution (Jones, 1789). This theory is evidence of his thinking on the topic of the evolution of Man. Joshua Steele's disagreement, and subsequent correspondence, with Monboddo over details of the "melody and measure of speech" resulted in Steele's Prosodia Rationalis, a foundational work both in phonetics and in the analysis of verse rhythm. Evolutionary theorist Monboddo is considered by some scholars as a precursive thinker in the theory of evolution. However, some modern evolutionary historians do not give Monboddo an equally high standing in the influence of history of evolutionary thought. "Monboddo: Scottish jurist and pioneer anthropologist who explored the origins of language and society and anticipated principles of Darwinian evolution." 
"With some wavering, he extended Rousseau's doctrine of the identity of species of man and the chimp into the hypothesis of common descent of all the anthropoids, and suggested by implication a general law of evolution." Lovejoy. Charles Neaves, one of Monboddo's successors on the high court of Scotland, believed that proper credit was not given to Monboddo in evolutionary theory development. Neaves wrote in verse: Though Darwin now proclaims the law And spreads it far abroad, O! The man that first the secret saw Was honest old Monboddo. The architect precedence takes Of him that bears the hod, O! So up and at them, Land of Cakes, We'll vindicate Monboddo. Erasmus Darwin notes Monboddo's work in his publications (Darwin 1803). Later writers consider Monboddo's analysis as precursive to the theory of Evolution. Whether Charles Darwin read Monboddo is not certain. Monboddo debated with Buffon regarding man's relationship to other primates. Charles Darwin did not mention Monboddo, but commented on Buffon: "the first author who in modern times has treated [evolution] in a scientific spirit was Buffon". Buffon thought that man was a species unrelated to lower primates, but Monboddo rejected Buffon's analysis and argued that the anthropoidal ape must be related to the species of man: he sometimes referred to the anthropoidal ape as the "brother of man". Monboddo suffered a setback, in his standing on evolutionary thought, because he stated at one time that men had caudal appendages (tails); some historians failed to take him very seriously after that remark, even though Monboddo was known to bait his critics with preposterous sayings. Bailey's The Holly and the Horn states that "Charles Darwin was to some degree influenced by the theories of Monboddo, who deserves the title of Evolutionist more than that of Eccentric." Henderson says: "He [Monboddo] was a minor celebrity in Edinburgh because he was considered to be very eccentric. But he actually came up with the idea that men may have evolved instead of being created by God. His views were dismissed because people thought he was mad, and in those days it was a very controversial view to hold. But he felt it was a logical possibility and it caused him a great deal of consternation. He actually did not want to believe the theory because he was a very religious person." Monboddo may be the first person to associate language skills evolving from primates and continuing to evolve in early humans (Monboddo, 1773). He wrote about how the language capability has altered over time in the form not only of skills but physical form of the sound producing organs (mouth, vocal cords, tongue, throat), suggesting he had formed the concept of evolutionary adaptive change. He also elaborates on the advantages created by the adaptive change of primates to their environment and even to the evolving complexity of primate social structures. In 1772 in a letter to James Harris, Monboddo articulated that his theory of language evolution (Harris 1772) was simply a part of the manner that man had advanced from the lower animals, a clear precedent of evolutionary thought. Furthermore, he established a detailed theory of how man adaptively acquired language to cope better with his environment and social needs. He argued that the development of language was linked to a procession of events: first developing use of tools, then social structures and finally language. 
This concept was quite striking for his era, because it departed from the classical religious thinking that man was created instantaneously and language revealed by God. In fact, Monboddo was deeply religious and pointed out that the creation events were probably simply allegories and did not dispute that the universe was created by God. Monboddo was a vigorous opponent of other scientific thinking that philosophically questioned the role of God (see Monboddo's prolific diatribes on Newton's theories). As an agriculturist and horse-breeder, Monboddo was quite aware of the significance of selective breeding and even transferred this breeding theory to communications he had with James Boswell in Boswell's selection of a mate. Monboddo has stated in his own works that degenerative qualities can be inherited by successive generations and that by selective choice of mates, creatures can improve the next generation in a biological sense. This suggests that Monboddo understood the role of natural processes in evolution; artificial selection was the starting-point for many of the proto-evolutionary thinkers, and for Darwin himself. Monboddo struggled with how to "get man from an animal" without divine intervention. This is typical of the kind of thinking which is called deist. He developed an entire theory of language evolution around the Egyptian civilisation to assist in his understanding of how man descended from animals, since he explained the flowering of language upon the spinoff of the Egyptians imparting language skills to other cultures. Monboddo cast early humans as wild, solitary, herbivorous quadrupeds. He believed that contemporary people suffered many diseases because they were removed from the environmental state of being unclothed and exposed to extreme swings in climate. Burnett wrote of numerous cultures (mostly based upon accounts of explorers); for example, he described "insensibles" and "wood eaters" in Of the Origin and Progress of Language. He was fascinated by the nature of these peoples' language development and also how they fit into the evolutionary scheme. Against all this, Monboddo's contribution to evolution is today regarded by historians of evolution as being notable. Bowler acknowledges his argument that apes might represent the earliest form of humanity (Monboddo 1774), but continues: "He [Monboddo] regarded humans (including savages and apes) as quite distinct from the rest of the animal kingdom. The first suggestion that the human species was descended via the apes from the lower animals did not come until Lamarck's Philosophie Zoologique of 1809." Charles Dickens knew of Monboddo and wrote in his novel, Life and Adventures of Martin Chuzzlewit about "(...) the Manboddo doctrine touching the probability of the human race having once been monkeys". This is significant because Martin Chuzzlewit was published (1843) years before Darwin's On the Origin of Species (1859). The history of the theory of evolution is a relatively modern field of scholarship. Metaphysics In Antient Metaphysics, Burnett claimed that man is gradually elevating himself from the animal condition to a state in which mind acts independently of the body. He was a strong supporter of Aristotle in his concepts of Prime Mover. Much effort was devoted to crediting Isaac Newton with brilliant discoveries in the Laws of Motion, while defending the power of the mind as outlined by Aristotle. 
His analysis was further complicated by his recurring need to assure that Newton did not obviate the presence of God. Nudism Monboddo was a pioneer in regard to many modern ideas and had already in the eighteenth century realized the value of "air-baths" (the familiar term which he invented) to mental and physical health. In his writings Monboddo argued against clothes as unnatural and undesirable from every point of view for both mind and body. Monboddo "awaked every morning at four, and then for his health got up and walked in his room naked, with the window open, which he called taking an air bath." When nudism was first brought into fashion with much enthusiasm in Germany as Freikörperkultur early in the twentieth century Monboddo was regarded as a pioneer, and in 1913 a Monboddo Bund was established in Berlin, for the harmonious culture of body and mind. Eccentricity Burnett was widely known to be an eccentric. Habitually he rode on horseback between Edinburgh and London instead of journeying by carriage. Another time after a decision went against him regarding the value of a horse, he refused to sit with the other judges and assumed a seat below the bench with the court clerks. When Burnett was visiting the Court of King's Bench in London in 1787, part of the floor of the courtroom started to collapse. People rushed out of the building but Burnett who, at the age of 71, was partially deaf and shortsighted, was the only one not to move. When he was later asked for a reason, he stated that he thought it was "an annual ceremony, with which, as an alien, he had nothing to do". Burnett in his earlier years suggested that the orangutan was a form of man, although some analysts think that some of his presentation was designed to entice his critics into debate. The orangutan was at this time a generic term for all types of apes. The Swedish explorer whose evidence Burnett accepted was a naval officer who had viewed a group of monkeys and thought they were human. Burnett may simply have taken the view that it was reasonable for people to assume the things they do and the word of a naval officer trained to give accurate reports was a credible source. Burnett was indeed responsible for changing the classical definition of man as a creature of reason to a creature capable of achieving reason, although he viewed this process as one slow and difficult to achieve. At one time he said that humans must have all been born with tails, which were removed by midwives at birth. His contemporaries ridiculed his views, and by 1773 he had retracted this opinion (Pringle 1773). Some later commentators have seen him as anticipating evolutionary theory. He appeared to argue that animal species adapted and changed to survive, and his observations on the progression of primates to man amounted to some kind of concept of evolution. Burnett also examined feral children and was the only thinker of his day to accept them as human rather than monsters. He viewed in these children the ability to achieve reason. He identified the orangutan as human, as his sources indicated it was capable of experiencing shame. In popular culture In Thomas Love Peacock's 1817 novel Melincourt, an orangutan punningly named "Sir Oran Haut-Ton" becomes a candidate for British Parliament based on Monboddo's theories. Charles Dickens, in his novel Martin Chuzzlewit, refers to "the Manboddo doctrine touching the probability of the human race having once been monkeys". 
In his 1981 dystopian novel Lanark, Alasdair Gray names the head of the mysterious Institute Lord Monboddo. He makes the connection explicit in a marginal note, adding that it is not a literal depiction. Lord Monboddo's descendant, Jamie Burnett of Leys, has sponsored a stage work Monboddo – The Musical which is a biographical re-enactment of the life of his ancestor. It received a first run at Aberdeen Arts Centre in September 2010. In her short story "The Monboddo Ape Boy", Lillian de la Torre depicted a slightly fictionalised Monboddo meeting Samuel Johnson, and being presented with a supposed "wild boy". Writings Publications Preface to "Advertisement" to John Brown, Letters upon the Poetry and Music of the Italian Opera, Addressed to a Friend (Edinburgh and London, Bell & Bradshute and C. Elliot and T. Kay, 1789) "Reports of Decisions of the Court of Session, 1738–68" in A Supplement to The Dictionary of Decisions of the Court of Session, ed. M. P. Brown (5 volumes, Edinburgh, J. Bell & W. Creech, 1826), volume 5, pp. 651–941 Correspondence James Burnett to James Harris, 31 December 1772 James Burnett to Sir John Pringle, 16 June 1773 James Burnett to James Boswell, 11 April and 28 May 1777, Yale University Boswell Papers, (C.2041 and C.2042) James Burnett to William Jones, 20 June 1789 James Burnett to T. Cadell and J. Davies, 15 May 1796, British Museum, A letter bound into Dugald Stewart, Account of the Life and Writings of William Robertson, D.D., F.R.S.E, 2nd ed., London (1802). Shelf no.1203.f.3 References Further reading 1714 births 1799 deaths 18th-century Scottish writers Age of Enlightenment Alumni of the University of Aberdeen Alumni of the University of Edinburgh British naturists Burials at Greyfriars Kirkyard Enlightenment philosophers People from Kincardine and Mearns People of the Scottish Enlightenment Proto-evolutionary biologists Scottish anthropologists Linguists from Scotland Scottish non-fiction writers Scottish philosophers Monboddo
James Burnett, Lord Monboddo
[ "Biology" ]
4,089
[ "Non-Darwinian evolution", "Biology theories", "Proto-evolutionary biologists" ]
981,746
https://en.wikipedia.org/wiki/Caret%20%28proofreading%29
The caret is a V-shaped grapheme, usually inverted and sometimes extended, used in proofreading and typography to indicate that additional material needs to be inserted at the point indicated in the text. The same symbol is also used as a diacritical mark modifying another character, for which purpose it is known as a circumflex. Usage The caret was originally and continues to be used in handwritten form as a proofreading mark to indicate where a punctuation mark, word, or phrase should be inserted into a document. The term comes from the Latin word caret, "it lacks", from carēre, "to lack; to be separated from; to be free from". The caret symbol can be written just below the line of text for a punctuation mark at low line position, such as a comma, or just above the line of text as an inverted caret for a character at a higher line position, such as an apostrophe, or in either position to indicate insertion of a letter, word or phrase; the material to be inserted may be placed inside the caret, in the margin, or above the line. References Typographical symbols Copy editing
Caret (proofreading)
[ "Mathematics" ]
258
[ "Symbols", "Typographical symbols" ]
981,793
https://en.wikipedia.org/wiki/Ehresmann%27s%20lemma
In mathematics, specifically in differential topology, Ehresmann's lemma or Ehresmann's fibration theorem states that if a smooth mapping f : M → N, where M and N are smooth manifolds, is a surjective submersion and a proper map (in particular, this condition is always satisfied if M is compact), then it is a locally trivial fibration. This is a foundational result in differential topology due to Charles Ehresmann, and has many variants. See also Thom's first isotopy lemma References Theorems in differential topology
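For readers who want the conclusion spelled out, the following LaTeX sketch states the local triviality explicitly; the notation f, M, N, U, F is assumed here for illustration and is not taken verbatim from the article.

```latex
% Local triviality in Ehresmann's theorem (notation f, M, N, F assumed; amsmath loaded).
% If f : M -> N is a proper surjective submersion of smooth manifolds, then every
% y in N has an open neighbourhood U and a diffeomorphism phi with
\[
  \varphi \colon f^{-1}(U) \;\xrightarrow{\ \cong\ }\; U \times F,
  \qquad \mathrm{pr}_U \circ \varphi = f|_{f^{-1}(U)},
  \qquad F = f^{-1}(y).
\]
```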
Ehresmann's lemma
[ "Mathematics" ]
119
[ "Theorems in differential topology", "Theorems in topology" ]
981,981
https://en.wikipedia.org/wiki/Requirements%20engineering
Requirements engineering (RE) is the process of defining, documenting, and maintaining requirements in the engineering design process. It is a common role in systems engineering and software engineering. The first use of the term requirements engineering was probably in 1964 in the conference paper "Maintenance, Maintainability, and System Requirements Engineering", but it did not come into general use until the late 1990s with the publication of an IEEE Computer Society tutorial in March 1997 and the establishment of a conference series on requirements engineering that has evolved into the International Requirements Engineering Conference. In the waterfall model, requirements engineering is presented as the first phase of the development process. Later development methods, including the Rational Unified Process (RUP) for software, assume that requirements engineering continues through a system's lifetime. Requirements management, which is a sub-function of Systems Engineering practices, is also indexed in the International Council on Systems Engineering (INCOSE) manuals. Activities The activities involved in requirements engineering vary widely, depending on the type of system being developed and the organization's specific practice(s) involved. These may include: Requirements inception or requirements elicitation – Developers and stakeholders meet; the latter are inquired concerning their needs and wants regarding the software product. Requirements analysis and negotiation – Requirements are identified (including new ones if the development is iterative), and conflicts with stakeholders are solved. Both written and graphical tools (the latter commonly used in the design phase, but some find them helpful at this stage, too) are successfully used as aids. Examples of written analysis tools: use cases and user stories. Examples of graphical tools: Unified Modeling Language (UML) and Lifecycle Modeling Language (LML). System modeling – Some engineering fields (or specific situations) require the product to be completely designed and modeled before its construction or fabrication starts. Therefore, the design phase must be performed in advance. For instance, blueprints for a building must be elaborated before any contract can be approved and signed. Many fields might derive models of the system with the LML, whereas others, might use UML. Note: In many fields, such as software engineering, most modeling activities are classified as design activities and not as requirement engineering activities. Requirements specification – Requirements are documented in a formal artifact called a Requirements Specification (RS), which will become official only after validation. A RS can contain both written and graphical (models) information if necessary. Example: Software requirements specification (SRS). Requirements validation – Checking that the documented requirements and models are consistent and meet the stakeholder's needs. Only if the final draft passes the validation process, the RS becomes official. Requirements management – Managing all the activities related to the requirements since inception, supervising as the system is developed, and even until after it is put into use (e. g., changes, extensions, etc.) These are sometimes presented as chronological stages although, in practice, there is considerable interleaving of these activities. Requirements engineering has been shown to clearly contribute to software project successes. 
Problems One limited study in Germany presented possible problems in implementing requirements engineering and asked respondents whether they agreed that they were actual problems. The results were not presented as being generalizable but suggested that the principal perceived problems were incomplete requirements, moving targets, and time boxing, with lesser problems being communications flaws, lack of traceability, terminological problems, and unclear responsibilities. Criticism Problem structuring, a key aspect of requirements engineering, has been speculated to reduce design performance. Some research suggests that, if there are deficiencies in the requirements engineering process and requirements do not actually exist, software requirements may nevertheless be created as an illusion that misrepresents design decisions as requirements. See also List of requirements engineering tools Requirements analysis, requirements engineering focused on software engineering. Requirements Engineering Specialist Group (RESG) International Requirements Engineering Board (IREB) International Council on Systems Engineering (INCOSE) IEEE 12207 "Systems and software engineering – Software life cycle processes" TOGAF (Chapter 17) Concept of operations (ConOps) Operations management Software requirements Software requirements specification Software Engineering Body of Knowledge (SWEBOK) Design specification Specification (technical standard) Formal specification Software Quality Quality Management Scope Management References External links ("This standard replaces IEEE 830–1998, IEEE 1233–1998, IEEE 1362-1998 - https://standards.ieee.org/ieee/29148/5289/") Systems Engineering Body of Knowledge Requirements Engineering Management Handbook by FAA International Requirements Engineering Board (IREB) IBM Rational Resource Library by IEEE Spectrum Systems engineering Software requirements IEEE standards ISO/IEC standards
Requirements engineering
[ "Technology", "Engineering" ]
936
[ "Systems engineering", "Software requirements", "Computer standards", "Software engineering", "IEEE standards" ]
982,000
https://en.wikipedia.org/wiki/Star%20refinement
In mathematics, specifically in the study of topology and open covers of a topological space X, a star refinement is a particular kind of refinement of an open cover of X. A related concept is the notion of barycentric refinement. Star refinements are used in the definition of fully normal space and in one definition of uniform space. It is also useful for stating a characterization of paracompactness. Definitions The general definition makes sense for arbitrary coverings and does not require a topology. Let X be a set and let 𝒰 = {U_α : α ∈ A} be a covering of X, that is, X = ⋃_{α ∈ A} U_α. Given a subset S of X, the star of S with respect to 𝒰 is the union of all the sets U_α that intersect S, that is, st(S, 𝒰) = ⋃ {U_α : U_α ∩ S ≠ ∅}. Given a point x ∈ X, we write st(x, 𝒰) instead of st({x}, 𝒰). A covering 𝒰 of X is a refinement of a covering 𝒱 of X if every U_α is contained in some V_β. The following are two special kinds of refinement. The covering 𝒰 is called a barycentric refinement of 𝒱 if for every point x ∈ X the star st(x, 𝒰) is contained in some V_β. The covering 𝒰 is called a star refinement of 𝒱 if for every U_α the star st(U_α, 𝒰) is contained in some V_β. Properties and Examples Every star refinement of a cover is a barycentric refinement of that cover. The converse is not true, but a barycentric refinement of a barycentric refinement is a star refinement. Given a metric space X, let 𝒱 = {B(x, r) : x ∈ X} be the collection of all open balls of a fixed radius r. The collection 𝒰 = {B(x, r/2) : x ∈ X} is a barycentric refinement of 𝒱, and the collection 𝒲 = {B(x, r/4) : x ∈ X} is a star refinement of 𝒱. See also Notes References General topology
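The metric-space example can be checked with the triangle inequality; the following LaTeX sketch records that one-line argument. Radius r/4 is used here to match the example above, though the same argument would also go through with any radius up to r/3.

```latex
% Check that W = { B(x, r/4) : x in X } star-refines V = { B(x, r) : x in X }.
% If B(y, r/4) meets B(x, r/4) at some point p, the triangle inequality gives
%   d(x, y) <= d(x, p) + d(p, y) < r/4 + r/4 = r/2,
% and then for any z in B(y, r/4),
%   d(x, z) <= d(x, y) + d(y, z) < r/2 + r/4 = 3r/4 < r.
% Hence every member of W that meets B(x, r/4) lies inside B(x, r), i.e.
\[
  \operatorname{st}\bigl(B(x, r/4), \mathcal{W}\bigr) \subseteq B(x, r) \in \mathcal{V}.
\]
```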
Star refinement
[ "Mathematics" ]
318
[ "General topology", "Topology" ]
982,095
https://en.wikipedia.org/wiki/Resettable%20fuse
A resettable fuse or polymeric positive temperature coefficient device (PPTC) is a passive electronic component used to protect against overcurrent faults in electronic circuits. The device is also known as a multifuse or polyfuse or polyswitch. They are similar in function to PTC thermistors in certain situations but operate on mechanical changes instead of charge carrier effects in semiconductors. These devices were first discovered and described by Gerald Pearson at Bell Labs in 1939 and described in US patent #2,258,958. Operation A polymeric PTC device is made up of a non-conductive crystalline organic polymer matrix that is loaded with carbon black particles to make it conductive. While cold, the polymer is in a crystalline state, with the carbon forced into the regions between crystals, forming many conductive chains. Since it is conductive (the "initial resistance"), it will pass a current. If too much current is passed through the device, the device will begin to heat. As the device heats, the polymer will expand, changing from a crystalline into an amorphous state. The expansion separates the carbon particles and breaks the conductive pathways, causing the device to heat faster and expand more, further raising the resistance. This increase in resistance substantially reduces the current in the circuit. A small (leakage) current still flows through the device and is sufficient to maintain the temperature at a level which will keep it in the high resistance state. Leakage current can range from less than a hundred mA at rated voltage up to a few hundred mA at lower voltages. The device can be said to have latching functionality. The hold current is the maximum current at which the device is guaranteed not to trip. The trip current is the current at which the device is guaranteed to trip. When power is removed, the heating due to the leakage current will stop and the PPTC device will cool. As the device cools, it regains its original crystalline structure and returns to a low resistance state where it can hold the current as specified for the device. This cooling usually takes a few seconds, though a tripped device will retain a slightly higher resistance for hours, unless the power in it is weaker, or has been often used, slowly approaching the initial resistance value. The resetting will often not take place even if the fault alone has been removed with the power still flowing as the operating current may be above the holding current of the PPTC. The device may not return to its original resistance value; it will most likely stabilize at a significantly higher resistance (up to 4 times initial value). It could take hours, days, weeks or even years for the device to return to a resistance value similar to its original value, if at all. A PPTC device has a current rating and a voltage rating. Applications These devices are often used in computer power supplies, largely due to the PC 97 standard (which recommends a sealed PC that the user never has to open), and in aerospace/nuclear applications where replacement is difficult. Another application for such devices is protecting audio loudspeakers, particularly tweeters, from damage when over driven: by putting a resistor or light bulb in parallel with the PPTC device it is possible to design a circuit that limits total current through the tweeter to a safe value instead of cutting it off, allowing the speaker to continue operating without damage when the amplifier is delivering more power than the tweeter could tolerate. 
While a fuse could also offer similar protection, if the fuse is blown, the tweeter cannot operate until the fuse is replaced. See also Positive temperature coefficient References Over-current protection devices Resistive components
Resettable fuse
[ "Physics" ]
747
[ "Resistive components", "Physical quantities", "Electrical resistance and conductance" ]
982,231
https://en.wikipedia.org/wiki/Rubble%20trench%20foundation
The rubble trench foundation, an ancient construction approach popularized by architect Frank Lloyd Wright, is a type of foundation that uses loose stone or rubble to minimize the use of concrete and improve drainage. It is considered more environmentally friendly than other types of foundation because cement manufacturing requires the use of enormous amounts of energy. However, some soil environments are not suitable for this kind of foundation, particularly expansive or poor load-bearing (< 1 ton/sf) soils. A rubble trench foundation with a concrete grade beam is not recommended for earthquake prone areas. A foundation must bear the structural loads imposed upon it and allow proper drainage of ground water to prevent expansion or weakening of soils and frost heaving. While the far more common concrete foundation requires separate measures to ensure good soil drainage, the rubble trench foundation serves both foundation functions at once. To construct a rubble trench foundation a narrow trench is dug down below the frost line. The bottom of the trench would ideally be gently sloped to an outlet. Drainage tile, graded 1":8' to daylight, is then placed at the bottom of the trench in a bed of washed stone protected by filter fabric. The trench is then filled with either screened stone (typically 1-1/2") or recycled rubble. A steel-reinforced concrete grade beam may be poured at the surface to provide ground clearance for the structure. If an insulated slab is to be poured inside the grade beam, then the outer surface of the grade beam and the rubble trench should be insulated with rigid XPS foam board, which must be protected above grade from mechanical and UV degradation. The rubble-trench foundation is a relatively simple, inexpensive, and environment-friendly alternative to a conventional foundation, but may require an engineer's approval if building officials are not familiar with it. Frank Lloyd Wright used them successfully for more than 50 years in the first half of the 20th century, and there is a revival of this style of foundation with the increased interest in green building. References Shallow foundations Sustainable building Civil engineering
Rubble trench foundation
[ "Engineering" ]
404
[ "Construction", "Sustainable building", "Civil engineering", "Building engineering" ]
982,249
https://en.wikipedia.org/wiki/Environmental%20archaeology
Environmental archaeology is a sub-field of archaeology which emerged in 1970s and is the science of reconstructing the relationships between past societies and the environments they lived in. The field represents an archaeological-palaeoecological approach to studying the palaeoenvironment through the methods of human palaeoecology and other geosciences. Reconstructing past environments and past peoples' relationships and interactions with the landscapes they inhabited provide archaeologists with insights into the origins and evolution of anthropogenic environments and human systems. This includes subjects such as including prehistoric lifestyle adaptations to change and economic practices. Environmental archaeology is commonly divided into three sub-fields: archaeobotany (the study of plant remains) zooarchaeology (the study of faunal remains) geoarchaeology (the study of geological processes and their relationship to the archaeological record) Environmental archaeology often involves studying plant and animal remains in order to investigate which plant and animal species were present at the time of prehistoric habitations, and how past societies managed them. It may also involve studying the physical environment and how similar or different it was in the past compared to the present day. An important component of such analyses represents the study of site formation processes. This field is particularly useful when artifacts may be absent from an excavated or surveyed site, or in cases of earth movement, such as erosion, which may have buried artifacts and archaeological features. While specialist sub-fields, for example bioarchaeology or geomorphology, are defined by the materials they study, the term "environmental" is used as a general template in order to denote a general field of scientific inquiry that is applicable across time periods and geographical regions studied by archaeology as a whole. Subfields Archaeobotany Archaeobotany is the study and interpretation of plant remains. By determining the uses of plants in historical contexts, researchers can reconstruct the diets of past humans, as well as determine their Subsistence economy strategies and plant economy. This provides greater insight into a people's social and cultural behaviors. Analysis of specimen like wood charcoal, for example, can reveal the source of fuel or construction for a society. Archaeobotanists also often study seed and fruit remains, along with pollen and starch. Plants can be preserved in a variety of ways, but the most common are carbonization, water logging, mineralization, and desiccation. A field within archaeobotany is ethnobotany, which looks more specifically at the relationship between plants and humans, and the cultural impacts plants have had and continue to have on human societies. Plant usage as food and as crops or as medicine is of interest, as well the plants' economic influences. Zooarchaeology Zooarchaeology is the study of animal remains and what these remains can tell us about the human societies the animals existed among. Animal remains can provide evidence of predation by humans (or vice versa) or domestication. Despite revealing the specific relationships between animals and humans, discovery of animal bones, hides, or DNA in a certain area can describe the location's past landscape or climate. Geoarchaeology Geoarchaeology is the study of landscape and of geological processes. 
It looks at environments within the human timeline to determine how past societies may have influenced or been influenced by the environment. Sediment and soil are often studied because this is where the majority of artifacts are found, but also because natural processes and human behavior can alter the soil and reveal its history. Apart from visual observation, computer programming and satellite imaging are often employed to reconstruct past landscapes or architecture. Other related fields Due to the multidisciplinary nature of archaeology in general, the range of research in the earth and environmental sciences, and possibility of methodologies, environmental archaeology has branched and connected with a number of other fields to include numerous cross and subfields such as: archaeoentomology bioarchaeology and human ecology Quaternary research as well as fields related to various chronological dating techniques; including radiocarbon dating, luminescence dating, and archaeomagnetic dating. History Environmental archaeology has emerged as a distinct discipline since the second half of the 20th century. In recent years it has grown rapidly in significance and is now an established component of most excavation projects. The field is multidisciplinary, and environmental archaeologists, as well as palaeoecologists, work side by side with archaeologists and anthropologists specialising in material culture studies in order to achieve a more holistic understanding of past human livelihood and people-environment interactions, especially how climatic stress affected humans and forced them to adapt. In archaeology in the 1960s, the environment was seen as having a "passive" interaction with humans. With the inclusion of Darwinism and ecological principles, however, this paradigm began to shift. Prominent theories and principles of the time (oasis theory, catastrophism, and longue duree) emphasized this philosophy. Catastrophism, for instance, discussed how catastrophes like natural disaster could be the determining factor in a society's survival. The environment could have social, political, and economic impacts on human communities. It became more important for researchers to look at the direct influence the environment could have on a society. This gave rise to middle range theory and the major questions asked by environmental archaeology in the 20th and 21st centuries. Research has since led environmental archaeology to two major conclusions: humanity originated in Africa and agriculture originated in south-west Asia. Another important shift in thinking within the field centered around the notion of cost-effectivity. Before, archaeologists thought that humans usually acted to maximize their use of resources, but have since come to believe that this is not the case. Subsequent theories/principles include sociality and agency, and the focus on relationships between archaeological sites. Government research audits and the 'commercialisation' of environmental archaeology have also shaped the sub-discipline in more recent times. Methods Environmental archaeologists approach a site through evaluation and/or excavation. Evaluation seeks to analyze the resources and artifacts given in an area and their potential significance. Excavation takes samples from different layers in the ground and uses a similar strategy to evaluation. The samples typically sought after are human and faunal remains, pollen and spores, wood and charcoal, insects, and even isotopes. 
Biomolecules like lipids, proteins, and DNA can be revelatory samples. With respect to geoarchaeology, computer systems for topography and satellites imaging are often used to reconstruct landscapes. The Geographic Information System (GIS) is a computer system that can process spatial data and construct virtual landscapes. Climate records are able to be reconstructed through paleoclimatology proxies, which can provide information on temperatures, precipitation, vegetation, and other climate-dependent conditions. These proxies can be used to provide context for present climate and compare past climate against the present. Significance Each focus within environmental archaeology collects information about a different aspect of humans' relation with their surrounding environment. Together these components (along with methods from other fields) are combined to fully understand a past society's lifestyle and interactions with their environment. Past aspects of land use, food production, tool use, and occupation patterns can all be established and the knowledge applied to current and future human-environment interactions. Through predation, agriculture, and introduction of foreign biota into new environments, humans have altered past environments. Understanding these past processes can help us pursue conservation and restorative processes in the present. Environmental archaeology provides insight on sustainability and why some cultures collapsed while others survived. Societal collapse has occurred many times throughout history, one of the most prominent examples being the Maya civilization. Using lake sediment core and climate reconstruction technology discussed earlier, archaeologists were able to reconstruct the climate present at the time of the Mayans. Although the Yucatán Peninsula was found to have extreme drought at the time Mayan society collapsed, many other factors contributed to their demise. Deforestation, overpopulation, and manipulating wetlands are only a few theories as to why the Maya civilization collapsed, but all of these worked in tandem to negatively impact the environment. From a sustainability perspective, studying how the Mayans impacted the environment allows researchers to see how these changes have permanently affected the landscape and subsequent populations living in the area. Archaeologists are increasingly under pressure to demonstrate that their work has impact beyond the discipline. This has prompted environmental archaeologists to argue that an understanding of past environmental changes is essential to model future outcomes in areas such as climate change, land cover change, soil health and food security. Notable contributors John Birks, a botanist and emeritus professor at the University of Bergen and University College London, is renowned for his novel quantitative techniques in Quaternary palaeoecology. His extensive research focuses on the vegetational and environmental history of the past 10-20,000 years across various regions, including Fennoscandia, the United Kingdom, Minnesota, the Yukon, Siberia, and Tibet. Don Brothwell (1933–2016), was a distinguished British archaeologist and anthropologist, who specialised in human palaeoecology and environmental archaeology. His career spanned several prominent institutions including the University of Cambridge, the British Museum, and the University of London, the University of York. 
Brothwell was celebrated as a pioneer in archaeological science, contributing extensively to the study of human and animal remains, and founding the Journal of Archaeological Science. Karl Butzer is a notable pioneer of environmental archaeology who won numerous awards and conducted research in the fields of archaeology, geography, and geology. Eric Higgs researched the development of agriculture in Asia and the method of "site catchment analysis", which looks at the exploitation of land based on the land's potential. Douglas Kennett is a controversial environmental archaeologist and human behavioral ecologist known for his work investigating how climate change affected Maya civilization in its development and disintegration, and for his contributions as a member of the Comet Research Group to the controversial and disputed Younger Dryas impact hypothesis, which asserts that the Clovis culture was destroyed by a shower of comets. His most widely disseminated paper was a collaboration with biblical archaeologists who believe they have discovered the ancient city of Sodom at Tell el-Hammam, Jordan, and that it was destroyed by a comet. On February 15, 2023, the following editor's note was posted on this paper: "Readers are alerted that concerns raised about the data presented and the conclusions of this article are being considered by the Editors. A further editorial response will follow the resolution of these issues." Louis Leakey contributed a vast amount of research to this field; Leakey and his wife Mary Leakey are best known for their work on human origins in Africa. Lewis Binford developed the middle range theory, under which researchers study the relationship between humans and the environment, which can be depicted in models. Cathy Whitlock is an Earth scientist and professor at Montana State University specializing in Quaternary environmental change and palaeoclimatology. She is best known for her research on "palaeofire", which uses sediment cores to reconstruct historical vegetation, fire, and climate patterns (particularly post-Yellowstone fires). Whitlock has contributed significantly to the academic understanding of climate dynamics, has encouraged informed conservation efforts, and was elected to the National Academy of Sciences in 2018. Janet Wilmshurst is a New Zealand-based palaeoecologist who has used fossil records to examine relationships between human settlements and natural disturbances, including novel methods such as carbon-dating rat-gnawed seeds to trace Polynesian settlement over time. See also Archaeology of Hatfield and Thorne Archaeogeography Area of archaeological potential Bioanthropology Diatoms Digital Archaeology Disturbance (archaeology) Earth system science Environmental science Harris matrix Geoscience Geography (see also: Human Geography and Physical Geography) GIS in archaeology Macrofossil Microfossil Stratigraphy (archaeology) Subfields of archaeology Systems theory in archaeology References External links Association for Environmental Archaeology Historic England - Environmental Archaeology Journal of Human Palaeoecology Archaeology Data Service - Environmental Archaeology Bibliography Environmental Archaeology - Theory and Practice: Looking Back, Moving Forwards (Open Access) Archaeological science Environmental science
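The proxy-based climate reconstruction described under Methods can be illustrated with a small computational sketch. The following Python example is not taken from the article: the core depths, δ18O values, sedimentation rate, and linear calibration coefficients are hypothetical placeholders, and a real reconstruction would use an age model and a transfer function calibrated for the specific proxy and site.

```python
import numpy as np

# Hypothetical sediment-core proxy data: sample depths in a lake core (cm)
# and oxygen-isotope ratios (delta-18O, per mil) measured on carbonates.
depth_cm = np.array([0, 20, 40, 60, 80, 100])
d18o = np.array([-2.1, -2.4, -1.8, -1.2, -1.5, -2.0])

# Hypothetical age model: a constant sedimentation rate of 0.05 cm/yr,
# so age (years before present) = depth / rate.
age_bp = depth_cm / 0.05

# Hypothetical linear transfer function turning the proxy into a
# temperature anomaly (deg C); real coefficients come from calibrating
# the proxy against instrumental records or modern samples.
slope, intercept = 2.0, 3.5
temp_anomaly = slope * d18o + intercept

for age, temp in zip(age_bp, temp_anomaly):
    print(f"{age:6.0f} yr BP: {temp:+.1f} deg C relative to present")
```

Plotted against age, such a series is the kind of record that is compared with the archaeological evidence for drought, deforestation, or settlement change discussed above.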
Environmental archaeology
[ "Environmental_science" ]
2,484
[ "nan" ]
982,386
https://en.wikipedia.org/wiki/Killing%20form
In mathematics, the Killing form, named after Wilhelm Killing, is a symmetric bilinear form that plays a basic role in the theories of Lie groups and Lie algebras. Cartan's criteria (criterion of solvability and criterion of semisimplicity) show that the Killing form has a close relationship to the semisimplicity of Lie algebras. History and name The Killing form was essentially introduced into Lie algebra theory by Élie Cartan in his thesis. In a historical survey of Lie theory, Armand Borel has described how the term "Killing form" first occurred in 1951 during one of his own reports for the Séminaire Bourbaki; it arose as a misnomer, since the form had previously been used by Lie theorists, without a name attached. Some other authors now employ the term "Cartan-Killing form". At the end of the 19th century, Killing had noted that the coefficients of the characteristic equation of a regular semisimple element of a Lie algebra are invariant under the adjoint group, from which it follows that the Killing form (i.e. the degree 2 coefficient) is invariant, but he did not make much use of the fact. A basic result that Cartan made use of was Cartan's criterion, which states that the Killing form is non-degenerate if and only if the Lie algebra is a direct sum of simple Lie algebras. Definition Consider a Lie algebra $\mathfrak{g}$ over a field $K$. Every element $x$ of $\mathfrak{g}$ defines the adjoint endomorphism $\operatorname{ad}(x)$ (also written as $\operatorname{ad}_x$) of $\mathfrak{g}$ with the help of the Lie bracket, as $\operatorname{ad}(x)(y) = [x, y]$. Now, supposing $\mathfrak{g}$ is of finite dimension, the trace of the composition of two such endomorphisms defines a symmetric bilinear form $B(x, y) = \operatorname{trace}(\operatorname{ad}(x) \circ \operatorname{ad}(y))$, with values in $K$, the Killing form on $\mathfrak{g}$. Properties The following properties follow as theorems from the above definition. The Killing form $B$ is bilinear and symmetric. The Killing form is an invariant form, as are all other forms obtained from Casimir operators. The derivation of Casimir operators vanishes; for the Killing form, this vanishing can be written as $B([x, y], z) = B(x, [y, z])$, where [ , ] is the Lie bracket. If $\mathfrak{g}$ is a simple Lie algebra then any invariant symmetric bilinear form on $\mathfrak{g}$ is a scalar multiple of the Killing form. The Killing form is also invariant under automorphisms of the algebra $\mathfrak{g}$, that is, $B(s(x), s(y)) = B(x, y)$ for $s$ in $\operatorname{Aut}(\mathfrak{g})$. The Cartan criterion states that a Lie algebra is semisimple if and only if the Killing form is non-degenerate. The Killing form of a nilpotent Lie algebra is identically zero. If $I$ and $J$ are two ideals in a Lie algebra $\mathfrak{g}$ with zero intersection, then $I$ and $J$ are orthogonal subspaces with respect to the Killing form. The orthogonal complement with respect to $B$ of an ideal is again an ideal. If a given Lie algebra $\mathfrak{g}$ is a direct sum of its ideals $I_1, \ldots, I_n$, then the Killing form of $\mathfrak{g}$ is the direct sum of the Killing forms of the individual summands. Matrix elements Given a basis $e_i$ of the Lie algebra $\mathfrak{g}$, the matrix elements of the Killing form are given by $B_{ij} = \operatorname{trace}(\operatorname{ad}(e_i) \circ \operatorname{ad}(e_j))$. Here $(\operatorname{ad}(e_i) \circ \operatorname{ad}(e_j))(e_k) = [e_i, [e_j, e_k]] = c_{im}{}^{n}\, c_{jk}{}^{m}\, e_n$ in Einstein summation notation, where the $c_{ij}{}^{k}$ are the structure coefficients of the Lie algebra. The index $k$ functions as column index and the index $n$ as row index in the matrix $\operatorname{ad}(e_i)\operatorname{ad}(e_j)$. Taking the trace amounts to putting $k = n$ and summing, and so we can write $B_{ij} = c_{im}{}^{n}\, c_{jn}{}^{m}$. The Killing form is the simplest 2-tensor that can be formed from the structure constants. The form itself is then $B = c_{im}{}^{n}\, c_{jn}{}^{m}\, e^i \otimes e^j$, where $e^i$ denotes the dual basis. In the above indexed definition, we are careful to distinguish upper and lower indices (co- and contra-variant indices). This is because, in many cases, the Killing form can be used as a metric tensor on a manifold, in which case the distinction becomes an important one for the transformation properties of tensors.
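As an illustration of the definition and the structure-constant formula above, the following Python sketch computes the Killing form of $\mathfrak{sl}(2)$ in the basis $(e, h, f)$ with $[h, e] = 2e$, $[h, f] = -2f$ and $[e, f] = h$. It is not part of the original article; it is a minimal numerical check using NumPy, with the usual basis and sign conventions assumed.

```python
import numpy as np

# Structure constants of sl(2) in the basis (e, h, f):
# [h, e] = 2e, [h, f] = -2f, [e, f] = h.
# c[i, j] holds the coordinates of [x_i, x_j] in the basis (e, h, f).
dim = 3
E, H, F = 0, 1, 2
c = np.zeros((dim, dim, dim))
c[H, E, E] = 2.0    # [h, e] = 2e
c[E, H, E] = -2.0   # [e, h] = -2e
c[H, F, F] = -2.0   # [h, f] = -2f
c[F, H, F] = 2.0    # [f, h] = 2f
c[E, F, H] = 1.0    # [e, f] = h
c[F, E, H] = -1.0   # [f, e] = -h

# Adjoint matrices: the (k, j) entry of ad(x_i) is c[i, j, k],
# i.e. column j holds the coordinates of [x_i, x_j].
ad = [c[i].T for i in range(dim)]

# Killing form: B_ij = trace(ad(x_i) ad(x_j)).
B = np.array([[np.trace(ad[i] @ ad[j]) for j in range(dim)]
              for i in range(dim)])
print(B)
# Expected output (rows/columns ordered e, h, f):
# [[0. 0. 4.]
#  [0. 8. 0.]
#  [4. 0. 0.]]
```

The resulting matrix is nondegenerate, consistent with $\mathfrak{sl}(2)$ being semisimple; the entries $B(h, h) = 8$ and $B(e, f) = 4$ agree with a hand computation from the definition.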
When the Lie algebra $\mathfrak{g}$ is semisimple over a zero-characteristic field, its Killing form is nondegenerate, and hence can be used as a metric tensor to raise and lower indexes. In this case, it is always possible to choose a basis for $\mathfrak{g}$ such that the structure constants with all upper indices are completely antisymmetric. For the simple classical Lie algebras, with $X$ and $Y$ viewed in their fundamental matrix representation, the Killing form is a multiple of the trace form $\operatorname{tr}(XY)$; for example, $B(X, Y) = 2n \operatorname{tr}(XY)$ for $\mathfrak{sl}(n)$. The table of these coefficients shows that the Dynkin index for the adjoint representation is equal to twice the dual Coxeter number. Connection with real forms Suppose that $\mathfrak{g}$ is a semisimple Lie algebra over the field of real numbers $\mathbb{R}$. By Cartan's criterion, the Killing form is nondegenerate, and can be diagonalized in a suitable basis with the diagonal entries $\pm 1$. By Sylvester's law of inertia, the number of positive entries is an invariant of the bilinear form, i.e. it does not depend on the choice of the diagonalizing basis, and is called the index of the Lie algebra $\mathfrak{g}$. This is a number between $0$ and the dimension of $\mathfrak{g}$ which is an important invariant of the real Lie algebra. In particular, a real Lie algebra $\mathfrak{g}$ is called compact if the Killing form is negative definite (or negative semidefinite if the Lie algebra is not semisimple). Note that this is one of two inequivalent definitions commonly used for compactness of a Lie algebra; the other states that a Lie algebra is compact if it corresponds to a compact Lie group. The definition of compactness in terms of negative definiteness of the Killing form is more restrictive, since using this definition it can be shown that under the Lie correspondence, compact Lie algebras correspond to compact semisimple Lie groups. If $\mathfrak{g}_{\mathbb{C}}$ is a semisimple Lie algebra over the complex numbers, then there are several non-isomorphic real Lie algebras whose complexification is $\mathfrak{g}_{\mathbb{C}}$, which are called its real forms. It turns out that every complex semisimple Lie algebra admits a unique (up to isomorphism) compact real form. The real forms of a given complex semisimple Lie algebra are frequently labeled by the positive index of inertia of their Killing form. For example, the complex special linear algebra $\mathfrak{sl}(2, \mathbb{C})$ has two real forms, the real special linear algebra, denoted $\mathfrak{sl}(2, \mathbb{R})$, and the special unitary algebra, denoted $\mathfrak{su}(2)$. The first one is noncompact, the so-called split real form, and its Killing form has signature $(2, 1)$. The second one is the compact real form and its Killing form is negative definite, i.e. has signature $(0, 3)$. The corresponding Lie groups are the noncompact group $\mathrm{SL}(2, \mathbb{R})$ of $2 \times 2$ real matrices with unit determinant and the special unitary group $\mathrm{SU}(2)$, which is compact. Trace forms Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over the field $K$, and $\rho : \mathfrak{g} \to \operatorname{End}(V)$ be a Lie algebra representation. Let $\operatorname{Tr}_V$ be the trace functional on $V$. Then we can define the trace form for the representation $\rho$ as $B_{\rho}(x, y) = \operatorname{Tr}_V(\rho(x)\rho(y))$. Then the Killing form is the special case that the representation is the adjoint representation, $B = B_{\operatorname{ad}}$. It is easy to show that this is symmetric, bilinear and invariant for any representation $\rho$. If furthermore $\mathfrak{g}$ is simple and $\rho$ is irreducible, then it can be shown that $B_{\rho} = I(\rho)\, B$, where the constant of proportionality $I(\rho)$ is called the index of the representation. See also Casimir invariant Killing vector field Citations References Lie groups Lie algebras
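To make the discussion of real forms concrete, here is a short worked check, added for illustration and not part of the original article; it uses the standard bases and follows directly from the definition $B(x, y) = \operatorname{tr}(\operatorname{ad} x \circ \operatorname{ad} y)$.

```latex
% Killing form of sl(2,R) in the basis (e, h, f) with [h,e]=2e, [h,f]=-2f, [e,f]=h:
B = \begin{pmatrix} 0 & 0 & 4 \\ 0 & 8 & 0 \\ 4 & 0 & 0 \end{pmatrix},
\qquad \text{eigenvalues } 8,\; 4,\; -4
\;\Rightarrow\; \text{signature } (2,1) \text{ (split, noncompact form).}

% Killing form of su(2) in a basis (e_1, e_2, e_3) with [e_i, e_j] = \varepsilon_{ijk} e_k:
B(e_i, e_j) = \sum_{m,n} \varepsilon_{imn}\,\varepsilon_{jnm} = -2\,\delta_{ij}
\;\Rightarrow\; B = -2 I_3 \text{, negative definite (compact form).}
```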
Killing form
[ "Mathematics" ]
1,415
[ "Lie groups", "Mathematical structures", "Algebraic structures" ]
982,540
https://en.wikipedia.org/wiki/Taqi%20ad-Din%20Muhammad%20ibn%20Ma%27ruf
Taqi ad-Din Muhammad ibn Ma'ruf ash-Shami al-Asadi (; ; ‎ 1526–1585) was an Ottoman polymath active in Cairo and Istanbul. He was the author of more than ninety books on a wide variety of subjects, including astronomy, clocks, engineering, mathematics, mechanics, optics, and natural philosophy. In 1574 the Ottoman Sultan Murad III invited Taqi ad-Din to build an observatory in the Ottoman capital, Istanbul. Taqi ad-Din constructed instruments such as an armillary sphere and mechanical clocks that he used to observe the Great Comet of 1577. He also used European celestial and terrestrial globes that were delivered to Istanbul in gift exchanges. His major work from the use of his observatory is titled "The tree of ultimate knowledge [in the end of time or the world] in the Kingdom of the Revolving Spheres: The astronomical tables of the King of Kings [Murad III]" (Sidrat al-muntah al-afkar fi malkūt al-falak al-dawār– al-zij al-Shāhinshāhi). The work was prepared according to the results of the observations carried out in Egypt and Istanbul in order to correct and complete Ulugh Beg's 15th century work, the Zij-i Sultani. The first 40 pages of the work dealt with calculations, followed by discussions of astronomical clocks, heavenly circles, and information on three eclipses which he observed in Cairo and Istanbul. As a polymath, Taqi al-Din wrote numerous books on astronomy, mathematics, mechanics, and theology. His method of finding coordinates of stars were reportedly so precise that he got better measurements than his contemporaries, Tycho Brahe and Nicolas Copernicus. Brahe is also thought to have been aware of Taqi al-Din's work. Taqi ad-Din also described a steam turbine with the practical application of rotating a spit in 1551. He worked on and created astronomical clocks for his observatory. Taqi ad-Din also wrote a book on optics, in which he determined the light emitted from objects, proved the Law of Reflection observationally, and worked on refraction. Biography Taqī al-Dīn was born in Damascus in 1526 according to most sources. His ethnicity has been described as Arab, Kurdish and Turkish. In his treatise, titled "Rayḥānat al-rūḥ", Taqī al-Dīn himself claimed descent from the Ayyubids tracing his lineage back to the Ayyubid prince Nasir al-Din Mankarus ibn Nasih al-Din Khumartekin who ruled Abu Qubays in Syria during the 12th century. The Encyclopaedia of Islam makes no mention of his ethnicity, simply calling him, "...the most important astronomer of Ottoman Turkey". Taqi ad-Din's education started in theology and as he went on he would gain an interest in the rational sciences. Following his interest, he would begin to study the rational sciences in Damascus and Cairo. During that time he studied alongside his father Maʿruf Efendi. Al-Dīn went on to teach at various madaris and served as a qadi, or judge, in Palestine, Damascus, and Cairo. He stayed in Egypt and Damascus for some time and while he was there he created work in astronomy and mathematics. His work in these categories would eventually become important. He became a chief astronomer to the Sultan in 1571 a year after he came to Istanbul, replacing Mustafa ibn Ali al-Muwaqqit. Taqī al-Dīn maintained a strong bond with the people from the Ulama and statesmen. He would pass on information to Sultan Murad III who had an interest in astronomy but also in astrology. The information stated that Ulugh Beg Zij had particular observational errors. 
Al-Dīn suggested that those errors could be fixed if new observations were made, and that an observatory should be created in Istanbul to make this easier. Murad III would become the patron of the first observatory in Istanbul; he preferred that construction of the new observatory begin immediately, and as patron he assisted with the finances for the project. Taqī al-Dīn continued his studies at the Galata Tower while this was going on. His studies continued until 1577 at the nearly complete observatory, which was called Dar al-Rasad al-Jadid. This new observatory contained a library that held books covering astronomy and mathematics. The observatory, built in the higher part of Tophane in Istanbul, was made of two separate buildings, one large and one small. Al-Dīn possessed some of the instruments used in the old Islamic observatories; he had those instruments reproduced and also created new instruments for observational purposes. The staff at the new observatory consisted of sixteen people: eight were observers or rasids, four were clerks, and the remaining four were assistants. Taqī al-Dīn approached his observations creatively, finding new answers to astronomical problems through the new strategies and new equipment he devised. He went on to create trigonometric tables based on decimal fractions. These tables placed the obliquity of the ecliptic at 23° 28' 40". The modern value is about 23° 27', showing that al-Dīn's instruments and methods were more precise than those of his contemporaries. Al-Dīn used a new method to calculate solar parameters and determined the magnitude of the annual movement of the sun's apogee as 63 seconds. The value accepted today is 61 seconds; Copernicus came up with 24 seconds and Tycho Brahe with 45 seconds, so al-Dīn was more accurate than both. The main purpose behind the observatory was to cater to the needs of the astronomers and to provide a library and workshop where they could design and produce instruments. The observatory became one of the largest in the Islamic world. It was completed in 1579 but operated only until January 22, 1580, when it was destroyed. Some say religious objections were the reason it was destroyed, but it really came down to political problems. A report by the grand vizier Sinan Pasha to Sultan Murad III describes how the Sultan and the vizier attempted to keep Taqī Ad-Dīn away from the ulama because it seemed they wanted to take him to trial for heresy. The vizier informed the sultan that Taqī Ad-Dīn wanted to go to Syria regardless of the sultan's orders, and warned that if Taqī Ad-Dīn went there, he might be noticed by the ulama, who would take him to trial. Despite Taqī al-Dīn's originality, his influence seems to have been limited. Only a small number of copies of his works survive, so they did not reach a wide audience, and his known commentaries are very few. However, one of his works and a piece of a library that he owned reached western Europe relatively quickly. This was due to the manuscript-collecting efforts of Jacob Golius, a Dutch professor of Arabic and mathematics at Leiden University, who traveled to Istanbul in the early seventeenth century. In 1629 he wrote a letter to Constantijn Huygens in which he mentioned seeing Taqī Ad-Dīn's work on optics in Istanbul.
He complained that he was not able to get hold of it from his friends even after all his efforts. He must have succeeded in acquiring it later, since Taqī al-Dīn's work on optics would eventually make it to the Bodleian Library as Marsh 119. It was originally in the Golius collection, so it is clear that Golius eventually succeeded in acquiring it. According to Salomon Schweigger, the chaplain of Habsburg ambassador Johann Joachim von Sinzendorf, Taqi al-Din was a charlatan who deceived Sultan Murad III and had him spend enormous resources. At the age of 59, after authoring more than ninety books, Taqī al-Dīn died in 1585. The Constantinople Observatory Taqī al-Dīn was both the founder and director of the Constantinople Observatory, which is also known as the Istanbul Observatory. This observatory is frequently said to be one of Taqī al-Dīn's most important contributions to sixteenth-century Islamic and Ottoman astronomy. In fact, it is known as one of the largest observatories in Islamic history. It is often compared to Tycho Brahe's Uraniborg Observatory, which was said to have housed the best instruments of its time in Europe, and Brahe and Taqī al-Dīn have frequently been compared for their work in sixteenth-century astronomy. The founding of the Constantinople Observatory began when Taqī al-Dīn returned to Istanbul in 1570, after spending 20 years in Egypt developing his astronomical and mathematical knowledge. Shortly after his return, Sultan Selīm II appointed Taqī al-Dīn as the head astronomer (Müneccimbaşı), following the death of the previous head astronomer Muṣṭafā ibn ҁAlī al-Muwaqqit in 1571. During the early years of his position as head astronomer, Taqī al-Dīn worked in both the Galata Tower and a building overlooking Tophane. While working in these buildings, he began to gain the support and trust of many important Turkish officials. These newfound relationships led to an imperial edict in 1569 from Sultan Murad III, which called for the construction of the Constantinople Observatory. This observatory became home to many important books and instruments; it had sixteen assistants who helped with the making of scientific instruments, as well as many renowned scholars of the time. While not much is known of the architectural characteristics of the building, there are many depictions of the scholars and astronomical instruments present in the observatory. It was from this observatory that Taqī al-Dīn observed the Great Comet of 1577; Murad III thought of the comet as a bad omen for the war with the Safavids (he also blamed Taqī al-Dīn for the plague that was occurring at the time). Due to political conflict, the observatory was short-lived. It was closed in 1579 and was demolished entirely by the state on 22 January 1580, only 11 years after the imperial edict which called for its construction. Politics The rise and fall of Taqī al-Dīn and his observatory depended on the political issues that surrounded him. Due to his father's occupation as a professor at the Damascene College of law, Taqī al-Dīn spent much of his life in Syria and Egypt. During his trips to Istanbul he was able to make connections with many scholar-jurists, and he was also able to use the private library of the Grand Vizier of the time, Semiz Ali Pasha. He then began working under Sokollu, Sultan Murad III's new Grand Vizier and private mentor. Continuing the observational research of the heavens he had begun in Egypt, Taqī al-Dīn used the Galata Tower and Sokollu's private residence.
Although Murad III was the one who commanded an observatory to be built, it was actually Sokollu who brought the idea to him, knowing about his interest in science. The Sultan ultimately provided Taqī al-Dīn with everything he needed, from financial assistance for the physical buildings to intellectual assistance, making sure he had easy access to the many types of books he would need. When the Sultan decided to create the observatory, he saw it as a way to show off the power of his monarchy beyond merely financing it. Murad III showed his power by bringing Taqī al-Dīn and some of the most accomplished men in the field of astronomy together to work towards one goal, and by having them not only work well together but also make progress in the field. Murad III made sure that there was proof of his accomplishments by having his court historiographer Seyyid Lokman keep very detailed records of the work going on at the observatory. Seyyid Lokman wrote that his sultan's monarchy was much more powerful than others in Iraq, Persia, and Anatolia. He also claimed that Murad III was above other monarchs because the results of the observatory were new to the world and replaced many others. Instruments used at the Observatory Taqī al-Dīn used a variety of instruments to aid in his work at the observatory. Some were instruments already in use by European astronomers, while others he invented himself. While working in this observatory, Taqī al-Dīn not only operated many previously created instruments and techniques, but also developed numerous new ones. Of these novel inventions, the automatic mechanical clock is regarded as one of the most important developed at the Constantinople Observatory. The following instruments were first described by Ptolemy: an armillary sphere, a model of celestial bodies with rings that represent longitude and latitude; a parallactic ruler, also known as a triquetrum, used to calculate the altitudes of celestial bodies; and an astrolabe, which measures the inclined position of celestial bodies. These instruments were created by Muslim astronomers: a mural quadrant, a type of mural instrument for measuring angles from 0 to 90 degrees, and an azimuthal quadrant. Each of the following instruments was created by Taqi al-Din for his own work: a parallel ruler; a ruler quadrant or wooden quadrant, an instrument with two holes for the measurement of apparent diameters and eclipses; a mechanical clock with a train of cogwheels which helped measure the true ascension of the stars; the Muşabbaha bi'l-menatık, an instrument with chords to determine the equinoxes, invented to replace the equinoctial armillary; and a Sunaydi ruler, apparently a special type of auxiliary instrument whose function was explained by Alaeddin el-Mansur. Contributions Clock mechanics Rise of clock use in the Ottoman Empire Before the sixteenth century, European mechanical clocks were not in high demand in the Ottoman Empire. This lack of demand was brought on by their extremely high prices and by the limited precision needed by a population that had only to calculate the times of prayer; the use of hourglasses, water clocks, and sundials was more than enough to meet their needs. It was not until around 1547 that the Ottomans started creating a high demand for them. Initially this was started by the gifts brought by the Austrians, but it would end up starting a market for the clocks. European clockmakers began to create clocks designed to the tastes and needs of the Ottoman people.
They did this by showing both the phases of the moon and by utilizing Ottoman numbers. Taqī al-Dīn's work Due to this high demand for mechanical clocks, Taqī al-Dīn was asked by the Grand Vizier to create a clock that would show exactly when the call to prayer was. This would lead him to write his first book on the construction of mechanical clocks called, "al-Kawakib al-Durriya fi Bengamat al-Dawriyya" (The Brightest Stars for the Construction of Mechanical Clocks) in 1563 A.D. which he used throughout his research at the short-lived observatory. He believed that it would be advantageous to bring a "true hermetic and distilled perception of the motion of the heavenly bodies." In order to get a better understanding of how clocks ran Taqī al-Dīn took the time to gain knowledge from many European clock makers as well as going into the treasury of Semiz Ali Pasha and learning anything he could from the many clocks he owned. Types of clocks examined Of the clocks in the Grand Vizier's treasury Taqī al-Dīn examined three different types. Those three were weight-driven, spring-driven, and driven by lever escapement. He wrote of these three types of watches but also made comments on pocket watches and astronomical ones. As Chief Astronomer, Taqī al-Dīn created a mechanical astronomical clock. This clock was made to permit more precise measurements at the Constantinople observatory. As stated above the creation of this clock was thought to be one of the most important astronomical discoveries of the sixteenth century. Taqī al-Dīn constructed a mechanical clock with three dials which show the hours, minutes, and seconds, with each minute consisting of five seconds. After this clock it is not known whether Taqī al-Dīn's work in mechanical clocks was ever continued, given that much of the clockmaking after that time in the Ottoman Empire was taken over by Europeans. Steam In 1551 Taqī al-Dīn described a self-rotating spit that is important in the history of the steam turbine. In Al-Turuq al-samiyya fi al-alat al-ruhaniyya (The Sublime Methods of Spiritual Machines) al-Dīn describes this machine as well as some practical applications for it. The spit is rotated by directing steam into the vanes which then turns the wheel at the end of the axle. Al-Dīn also described four water-raising machines. The first two are animal driven water pumps. The third and fourth are both driven by a paddle wheel. The third is a slot-rod pump while the fourth is a six-cylinder pump. The vertical pistons of the final machine are operated by cams and trip-hammers, run by the paddle wheel. The descriptions of these machines predates many of the more modern engines. The screw pump, for example, that al-Dīn describes predates Agricola, whose description of the rag and chain pump was published in 1556. The two pump engine, which was first described by al-Jazarī, was also the basis of the steam engine. Important works Astronomy Sidrat muntahā al-afkār fī malakūt al-falak al-dawwār (al-Zīj al-Shāhinshāhī): this is said to be one of Taqī al-Dīn's most important works in astronomy. He completed this book on the basis of his observations in both Egypt and Istanbul. The purpose of this work was to improve, correct, and ultimately complete Zīj-i Ulugh Beg, which was a project devised in Samarkand and furthered in the Constantinople Observatory. The first 40 pages of his writing focus on trigonometric calculations, with emphasis on trigonometric functions such sine, cosine, tangent, and cotangent. 
Jarīdat al-durar wa kharīdat al-fikar is a zīj that is said to be Taqī al-Dīn's second most important work in astronomy. This zīj contains the first recorded use of decimal fractions and trigonometric functions in astronomical tables. He also gives the parts of degree of curves and angles in decimal fractions with precise calculations. Dustūr al-tarjīḥ li-qawā ҁ id al-tasṭīḥ is another important work by Taqī al-Dīn, which focuses on the projection of a sphere into a plane, among other geometric topics. Taqī al-Din is also accredited as the author of Rayḥānat al-rūḥ fī rasm al-sā ҁ āt ҁ alā mustawī al-suṭūḥ, which discusses sundials and their characteristics drawn on a marble surface. Clocks and mechanics al-Kawākib al-durriyya fī waḍ ҁ al-bankāmāt al-dawriyya was written by Taqī al-Dīn in 1559 and addressed mechanical-automatic clocks. This work is considered the first written work on mechanical-automatic clocks in the Islamic and Ottoman world. In this book, he accredits Alī Pasha as a contributor for allowing him to use and study his private library and collection of European mechanical clocks.al-Ṭuruq al-saniyya fī al-ālāt al-rūḥāniyya is a second book on mechanics by Taqī al-Dīn that emphasizes the geometrical-mechanical structure of clocks, which was a topic previously observed and studied by Banū Mūsā and Ismail al-Jazari (Abū al-ҁIzz al-Jazarī). Physics and optics Nawr ḥadīqat al-abṣar wa-nūr ḥaqīqat al-Anẓar was a work of Taqī al-Dīn that discussed physics and optics. This book discussed the structure of light, the relationship between light and color, as well as diffusion and global refraction. See also Inventions in the Muslim world Islamic astronomy Islamic science Notes Further reading Ben-Zaken, Avner. "The Revolving Planets and the Revolving Clocks: Circulating Mechanical Objects in the Mediterranean", History of Science, xlix (2010), pp. 125-148. Ben-Zaken, Avner. Cross-Cultural Scientific Exchanges in the Eastern Mediterranean 1560-1660 (Johns Hopkins University Press, 2010), pp. 8-47. Tekeli, Sevim. (2002). 16’ıncı yüzyılda Osmanlılarda saat ve Takiyüddin’in “mekanik saat konstrüksüyonuna dair en parlak yıldızlar = The clocks in Ottoman Empire in 16th century and Taqi al Din’s the brightest stars for the construction of the mechanical clocks. Second edition, Ankara: T. C. Kültür Bakanlıgi. Unat, Yavuz, "Time in The Sky of Istanbul, Taqî al Dîn al-Râsid's Observatory", Art and Culture Magazine, Time in Art, Winter 2004/Issue 11, pp. 86–103. External links (PDF version) 1526 births 1585 deaths Syrian scientists Syrian astronomers Syrian mathematicians Mathematicians from the Ottoman Empire Engineers from the Ottoman Empire Inventors from the Ottoman Empire 16th-century physicians from the Ottoman Empire People from Damascus 16th-century Arabic-language writers 16th-century writers from the Ottoman Empire 16th-century astronomers from the Ottoman Empire Scientific instrument makers Clockmakers from the Ottoman Empire 16th-century mathematicians Inventors of the medieval Islamic world Astronomical instrument makers Globe makers
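As a small illustration of the decimal-fraction notation used in these tables, and of the observational values quoted earlier in the article, the following worked conversion is added here; it is not part of the original article, and all numerical values are those already cited above.

```latex
% Sexagesimal arc values expressed as decimal fractions of a degree:
23^\circ\,28'\,40'' \;=\; 23 + \tfrac{28}{60} + \tfrac{40}{3600} \;\approx\; 23.4778^\circ,
\qquad 23^\circ\,27' \;=\; 23.45^\circ .

% Annual motion of the solar apogee: error relative to the modern value of 61'':
|63'' - 61''| = 2'' \;(\text{Taqi al-Din}), \quad
|45'' - 61''| = 16'' \;(\text{Brahe}), \quad
|24'' - 61''| = 37'' \;(\text{Copernicus}).
```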
Taqi ad-Din Muhammad ibn Ma'ruf
[ "Astronomy" ]
4,531
[ "Astronomical instrument makers", "Astronomical instruments" ]
982,557
https://en.wikipedia.org/wiki/Harry%20Emerson%20Fosdick
Harry Emerson Fosdick (May 24, 1878 – October 5, 1969) was an American pastor. Fosdick became a central figure in the fundamentalist–modernist controversy within American Protestantism in the 1920s and 1930s and was one of the most prominent liberal ministers of the early 20th century. Although a Baptist, he was called to serve as pastor, in New York City, at First Presbyterian Church in Manhattan's West Village, and then at the historic, inter-denominational Riverside Church in Morningside Heights, Manhattan. Career Born in Buffalo, New York, Fosdick graduated from Colgate University in 1900 and from Union Theological Seminary in 1904. While attending Colgate University he joined the Delta Upsilon fraternity. He was ordained a Baptist minister in 1903 at Madison Avenue Baptist Church at 31st Street, Manhattan. He was called as minister to First Baptist Church, Montclair, New Jersey, in 1904, serving until 1915. He supported US participation in the First World War (later describing himself as a "gullible fool" in doing so), and in 1917 volunteered as an Army chaplain, serving in France. In 1918, he was called to First Presbyterian Church, and on May 21, 1922, he delivered his famous sermon Shall the Fundamentalists Win?, in which he defended the modernist position. In that sermon he presented the Bible as a record of the unfolding of God's will, not as the literal "Word of God". He saw the history of Christianity as one of development, progress, and gradual change. Fundamentalists regarded this as rank apostasy, and the battle-lines were drawn. Fosdick's sermon prompted a response from the Rev. Clarence Edward Macartney of Arch Street Presbyterian Church in Philadelphia on July 13, 1922, with a sermon entitled "Shall Unbelief Win?". Like Fosdick's sermon, Macartney's sermon was published and sent to church leaders across America. "There are not a few," said Macartney, "who do not think of themselves as either 'Fundamentalists' or 'Modernists', but as Christians, striving amid the dust and the confused clamor of this life to hold the Christian faith and follow the Lord Jesus Christ, who will read this sermon with sorrow and pain." The national convention of the General Assembly of the old Presbyterian Church in the USA in 1923 charged his local presbytery in New York to conduct an investigation into Fosdick's views. A commission began an investigation, as required. His defense was conducted by a lay elder, John Foster Dulles (1888–1959, future Secretary of State under President Dwight D. Eisenhower in the 1950s), whose father was a well-known liberal Presbyterian seminary professor. Fosdick escaped probable censure at a formal trial by the 1924 General Assembly by resigning from the First Presbyterian Church (historic "Old First") pulpit in 1924. He was immediately called as pastor of a new type of Baptist church ministry at Park Avenue Baptist Church, whose most famous member was the industrialist, financier and philanthropist John D. Rockefeller Jr. Rockefeller then funded the famed ecumenical Riverside Church (later a member of the American Baptist Churches and United Church of Christ denominations) in Manhattan's northwestern Morningside Heights area near Columbia University, where Fosdick became pastor as soon as the doors opened in October 1930. This prompted a Time cover story on October 6, 1930 (pictured), in which Time said that Fosdick: Fosdick outspokenly opposed racism and injustice. 
Ruby Bates credited him with persuading her to testify for the defense in the 1933 retrial of the infamous and racially charged legal case of the Scottsboro Boys, which tried nine black youths before all-white juries for allegedly raping white women (Bates and her companion, Victoria Price) in Alabama. Fosdick was a guest preacher at Central Congregational Church in Providence, Rhode Island. Sermons and publications Fosdick's sermons won him wide recognition. His 1933 anti-war sermon, "The Unknown Soldier", inspired the British priest Dick Sheppard to write a letter that ultimately led to the founding of the Peace Pledge Union. His Riverside Sermons was printed in 1958, and he published numerous other books. His radio addresses were nationally broadcast by the BBC; he also wrote the hymn "God of Grace and God of Glory". Fosdick's book A Guide to Understanding the Bible traces the beliefs of the people who wrote the Bible, from the ancient beliefs of the Hebrews (which he regarded as practically pagan) to the faith and hopes of the New Testament writers. Fosdick was an advocate of theistic evolution. He defended the teaching of evolution in schools and rejected creationism. He was involved in a dispute with the creationist William Jennings Bryan. Fosdick reviewed the first edition of the book Alcoholics Anonymous: The Story of How More Than One Hundred Men Have Recovered from Alcoholism in 1939, giving it his approval. Members of Alcoholics Anonymous (AA) point to this review as significant in the development of the AA movement. Fosdick was an active member of the American Friends of the Middle East, a founder of the Committee for Justice and Peace in the Holy Land, and an active "anti-Zionist". He was a major influence on Martin Luther King Jr. who said that Fosdick was "the greatest preacher of this century." King drew on Fosdick's writings and sermons for some of his own sermons. Works The Second Mile (1908) The Assurance of Immortality (1913) The Manhood of the Master (1913) The Meaning of Prayer (1915) The Meaning of Faith (1917) The Challenge of the Present Crisis (1918) The Meaning of Service (1920) Shall the Fundamentalists Win? (1921) (Reprinted by CrossReach Publications, 2015) Christianity and Progress (1922) Evolution and Mr. Bryan (1922) Twelve Tests of Character (1923) Science and Religion. 
Evolution and the Bible (1924) The Modern Use of the Bible (1924) Adventurous Religion, and Other Essays (1926) A Pilgrimage to Palestine (1927) What Religion Means to Me (1929) As I See Religion (1932) The Hope of the World; Twenty-Five Sermons on Christianity Today (1933) The Secret of Victorious Living (1934) The Power to See it Through (1935) Successful Christian Living (1937) A Guide to Understanding the Bible: The Development of Ideas Within the Old and New Testaments (1938) Living Under Tension; Sermons on Christianity Today (1941) On Being a Real Person (1943) A Great Time to be Alive; Sermons on Christianity in Wartime (1944) On Being Fit to Live With; Sermons on Post-War Christianity (1946) The Man from Nazareth, as His Contemporaries Saw Him (1949) The Meaning of Prayer (1950) Rufus Jones Speaks to Our Time; An Anthology (1951) Great Voices of the Reformation (1952) A Faith for Tough Times (1952) Sunday Evening Sermons; Fifteen Selected Addresses Delivered before the noted Chicago Sunday Evening Club with Alton Meyers Meyers (1952) What is Vital in Religion; Sermons on Contemporary Christian Problems (1955) Martin Luther (1956) The Living of These Days; An Autobiography (1956) A Book of Public Prayers (1959) Jesus of Nazareth (1959) Dear Mr. Brown (1961) The Life of Saint Paul (1962) The Meaning of Being a Christian (1964) The Secret of Victorious Living (1966) Harry Emerson Fosdick's Art of Preaching; An Anthology (1971) Works with a contribution by Fosdick Seeing the Invisible by Harold Cooke (Introduction by Harry Emerson Fosdick) (1932) You and Yourself by Albert George Butzer (Introduction by Harry Emerson Fosdick) (1933) The Complete Sayings of Jesus; The King James Version of Christ's Own Words. by Arthur Hinds (Introduction by Harry Emerson Fosdick) (1942) A Rauschenbusch reader, the Kingdom of God and the social Gospel Fosdick contributed a chapter (1957) Riverside Sermons (1958) Extended family Fosdick's brother, Raymond Fosdick, was essentially in charge of philanthropy for John D. Rockefeller Jr., running the Rockefeller Foundation for three decades, from 1921. Rockefeller funded the nationwide distribution of Shall the Fundamentalists Win?, although with a more cautious title, The New Knowledge and the Christian Faith. This direct-mail project was designed by Ivy Lee, who had worked since 1914 as an independent contractor in public relations for the Rockefellers. Fosdick's daughter, Dorothy Fosdick, was foreign policy adviser to Henry M. ("Scoop") Jackson, a United States Senator from Washington state. She also authored a number of books. He was the nephew of Charles Austin Fosdick, a popular author of adventure books for boys, who wrote under the pen name Harry Castlemon. See also List of covers of Time magazine (1920s) References External links A Guide to Understanding the Bible text online Encyclopædia Britannica article Colgate University alumni 1878 births 1969 deaths 20th-century Baptist ministers from the United States American Christian theologians American anti-Zionists Baptist writers American critics of creationism Theistic evolutionists Baptists from New York (state) Religious leaders from Buffalo, New York Delta Upsilon members
Harry Emerson Fosdick
[ "Biology" ]
1,912
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
982,571
https://en.wikipedia.org/wiki/History%20of%20video%20game%20consoles
The history of video game consoles, both home and handheld, began in the 1970s. The first console that played games on a television set was the 1972 Magnavox Odyssey, first conceived by Ralph H. Baer in 1966. Handheld consoles originated from electro-mechanical games that used mechanical controls and light-emitting diodes (LED) as visual indicators. Handheld electronic games had replaced the mechanical controls with electronic and digital components, and with the introduction of Liquid-crystal display (LCD) to create video-like screens with programmable pixels, systems like the Microvision and the Game & Watch became the first handheld video game consoles. Since then, home game consoles have progressed through technology cycles typically referred to as generations. Each generation has lasted approximately five years, during which the major console manufacturers have released console with broadly similar specifications. Handheld consoles have seen similar advances, and are usually grouped into the same generations as home consoles. While early generations were led by manufacturers like Atari and Sega, the modern home console industry is dominated by three companies: Nintendo, Sony, and Microsoft. The handheld market has waned since the introduction of mobile gaming in the late 2000s, and today, the only major manufacturer in handheld gaming is Nintendo. Origins Home consoles The first video games were created on mainframe computers in the 1950s, typically with text-only displays or computer printouts, and limited to simple games like Tic Tac Toe or Nim. Eventually displays with rudimentary vector displays for graphics were available, leading to titles like Spacewar! in 1962. Spacewar! directly influenced Nolan Bushnell and Ted Dabney to create Computer Space in 1971, the first recognized arcade game. Separately, while at Sanders Associates in 1966, Ralph H. Baer conceived of the idea of an electronic device that could be connected to a standard television to play games. With Sanders' permission, he created the prototype "Brown Box" which was able to play a limited number of games, including a version of table tennis and a simple light gun game. Sanders patented the unit and licensed the patents to Magnavox, where it was manufactured as the first home video game console, the Magnavox Odyssey, in 1972. Bushnell, after seeing the Odyssey and its table tennis game, believed he could make something better. He and Dabney formed Atari, Inc., and with Allan Alcorn, created their second arcade game, Pong. Pong first released in 1972 and was more successful than Computer Space. Atari released a Pong home console through Sears in 1975. Handheld consoles The origins of handheld game consoles are found in handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices can only play built-in games, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video display technologies such as LED, VFD, or LCD. These games derived from the emerging optoelectronic-display-driven calculator market of the early 1970s. The first such handheld electronic game was released by Mattel in 1977, where Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED technology." This effort led to the 1977 games Auto Race and Football. 
The two games were so successful that according to Katz, "these simple electronic handheld games turned into a '$400 million category.'" Another Ralph Baer invention, Simon, published by Milton Bradley in 1978, followed, which further popularized such electronic games and remained an enduring property by Milton Bradley (later Hasbro) that brought a number of copycats to the market. Soon, other manufacturers including Coleco, Parker Brothers, Entex, and Bandai began following up with their own tabletop and handheld electronic games. The transition from handheld "electronic" games to handheld "video" games came with the introduction of LCD screens. These screens gave handheld games the flexibility to play a wide range of games. Milton Bradley's Microvision, released in 1979, used a 16x16 pixel LCD screen and was the first handheld to use interchangeable game cartridges. Nintendo's line of Game & Watch titles, first introduced in 1980, was designed by Gunpei Yokoi, who was inspired when he saw a man passing time on a train by playing with an LCD calculator. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen, so that they could double as a watch. While the Game & Watch series were considered handheld electronic games rather than handheld video game consoles, their success led Nintendo, through Yokoi's design lead, to produce the Game Boy in 1989. Console generations Like most consumer electronics, home video game consoles are developed based on improving the features offered by an earlier product with advances made by newer technology. For video game consoles, these improvements typically occur every five years, following a Moore's law progression where a rough aggregate measure of processing power doubles every 18 months or increases ten-fold after five years. This cyclic market has resulted in an industry-wide adoption of the razorblade model in selling consoles at minimal profit margin while making revenue from the sale of games produced for that console, and then transitioning users to the next console model at the fifth year as the successor console enters the market. This approach incorporates planned obsolescence into the products to continue to bring consumers towards purchasing the newer models. Because of the industry dynamics, many console manufacturers release their new consoles in roughly the same time period, with their consoles typically offering similar processing power and capabilities as their competitors. This systematic market has created the nature of console generations, categorizing the primary consoles into these segmented time periods that represent consoles with similar capabilities and which shared the same competitive space. Like consoles, these generations typically start five years after its prior one, though may have long tails as popular consoles remain viable well beyond five years. The use of the generation label came after the start of the 21st century as console technology started to mature, with the terminology applied retroactively to earlier consoles. However, no exact definition and delineation of console generations was consistently developed in the industry or academic literature since that point. Some schemes have been based on direct market data (including a seminal work published in an IEEE journal in 2002), while others are based on technology shifts. 
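The "doubles every 18 months" and "increases ten-fold after five years" figures quoted above are consistent with each other, as this back-of-the-envelope check (added for illustration) shows:

```latex
% Five years = 60 months, i.e. 60/18 doubling periods:
2^{60/18} \;=\; 2^{10/3} \;\approx\; 10.1
```

so a doubling every 18 months compounds to roughly a ten-fold increase in processing power over one five-year console generation.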
Wikipedia itself has been noted for creating its own version of console generation definitions that differ from other academic sources; the definitions from Wikipedia has been adopted by other sources but without having any true rationale behind it. The discrepancies between how consoles are grouped into generations and how these generations are named have caused confusion when trying to compare shifts in the video game marketplace compared to other consumer markets. Kemerer et al. (2017) provide a comparative analysis of these different generations through systems released up to 2010 as shown below. Timeline For purposes of organization, the generations described here and subsequent pages maintain the Wikipedia breakdown of generation, generally breaking consoles apart by technology features whenever possible and with other consoles released in that same period incorporated within that same generation, and starting with the Odyssey and Pong-style home consoles as the first generation, an approach that has generally been adopted and extended by video game journalism. In this approach the generation "starts" with the release of the first console considered to have those features, and considered to end with the known last discontinuation of a console in that generation. For example, the third generation is considered to end in 2003 with the formal discontinuation of the Nintendo Entertainment System that year. This can create years with overlaps between multiple generations, as shown. This approach uses the concepts of "bits", or the size of individual word length handled by the processors on the console, for the earlier console generations. Longer word lengths generally led to improved gameplay concepts, graphics, and audio capabilities than shorter ones. The use of bits to market consoles to consumers started with the TurboGrafx 16, a console that used an 8-bit central processing unit similar to the Nintendo Entertainment System (NES), but included a 16-bit graphical processing unit. NEC, the console's manufacturer, took to market the console as a "16-bit" system over the NES' "8-bit" to establish it as a superior system. Other advertisers followed suit, creating a period known as the "bit wars" that lasted through the fifth generation, where console manufactures tried to outsell each other simply on the bit-count of their system. Aside from some "128 Bit" advertising slogans at the beginning of the sixth generation, marketing with bits largely stopped after the fifth generation. Though the bit terminology was no longer used in newer generations, the use of bit-count helped to establish the idea of console generations, and the earlier generations gained alternate names based on the dominant bit-count of the major systems of that era, such as the third generation being the 8-bit era or generation. Later console generations are based on groupings of release dates rather than common hardware as base hardware configurations between consoles have greatly diverged, generally following trends in generation definition given by video game and mainstream journalism. Handheld consoles and other gaming systems and innovations are frequently grouped within the release years associated with the home console generations; for example the growth of digital distribution is associated with the seventh generation. Console history timeline by generation The development of video game consoles primarily follows the history of video gaming in the North American and Japanese markets. 
Few other markets saw any significant console development on their own, such as in Europe where personal computers tended to be favored alongside imports of video game consoles. The video game clone in less-developed markets like China and Russia were not considered here. The following table provides an overview of the major hardware technical specifications of the consoles of each major generations by central processor unit (CPU), graphics processor unit (GPU), memory, game media, and other features. While there is no similar distinction of generations for handheld consoles, they are included in the sections below based on which home console generation they were released. First generation (1972–1983) The first generation of home consoles were generally limited to dedicated consoles with just one or two games pre-built into the console hardware, with a limited means to alter gameplay factors. In the case of the Odyssey, while it did ship with "game cards", these did not have any programmed games on them but instead acted as jumpers to alter the existing circuitry pathway, and did not extend the capabilities of the console. Unlike most other future console generations, the first generation of consoles were typically built in limited runs rather than as an ongoing product line. The first home console was the Magnavox Odyssey in September 1972 based on Baer's "Brown Box" design. Originally built from discrete transistors, Magnavox transitioned to integrated circuit chips that were inexpensive, and developed a new line of consoles in the Odyssey series from 1975 to 1977. At the same time, Atari had successfully launched Pong as an arcade game in 1972, and began work to make a home console version in late 1974, which they eventually partnered with Sears to the new home Pong console by the 1975 Christmas season. Pong had several technology advantages over the Odyssey, including an internal sound chip and the ability to track score. Coleco developed the first Telstar console in 1976. With Magnavox, Atari and Coleco all vying in the console space by 1976 and further cost reductions in key processing chips from General Instruments, numerous third-party manufacturers entered the console market by 1977 with ball-and-paddle games. This led to market saturation by 1977, and the industry's first market crash. Atari and Coleco attempted to make dedicated consoles with wholly new games to remain competitive, including Atari's Video Pinball series and Coleco's Telstar Arcade, but by this point, the first steps of the market's transition to the second generation of consoles had begun, making these units obsolete near release. The Japanese market for gaming consoles followed a similar path at this point. Nintendo had already been a business partner with Magnovox by 1971 and helped to design the early light guns for the console. Dedicated home game consoles in Japan appeared in 1975 with Epoch Co.'s TV Tennis Electrotennis. As in the United States, numerous rival products of these dedicated consoles began to appear, most made by the large television manufacturers like Toshiba and Sharp, and these games would be called TV geemu or terebi geemu (TV game) as the designation for "video games" in Japan. Nintendo became a major player when Mitsubishi, having lost their manufacturer Systek due to bankruptcy, turned to the company to help continue to build their Color TV-Game line, which went on to sell about 3 million units across four different units between 1977 and 1983. 
Second generation (1976–1992) The second generation of home consoles was distinguished by the introduction of the game cartridge, where the game's code is stored in read-only memory (ROM) within the cartridge. When the cartridge is slotted into the console, the electrical connections allow the main console's processors to read the game's code from the ROM. While ROM cartridges had been used in other computer applications prior, the ROM game cartridge was first implemented in the Fairchild Video Entertainment System (VES) in November 1976. Additional consoles during this generation, all which used cartridge-based systems, included the Atari 2600 (known as the Atari Video Computer System (VCS) at launch), the Magnavox Odyssey 2, Mattel Electronics' Intellivision, and the ColecoVision. In addition to consoles, newer processor technology allowed games to support up to 8 colors and up to 3-channel audio effects. With the introduction of cartridge-based consoles came the need to develop a wide array of games for them. Atari was one of the forefronts in development for its Atari 2600. Atari marketed the console across multiple regions including into Japan, and retained control of all development aspects of the games. Game developments coincided with the Golden age of arcade video games that started in 1978–1979 with the releases of Space Invaders and Asteroids, and home versions of these arcade games were ideal targets. The Atari 2600 version of Space Invaders, released in 1980, was considered the killer app for home video game consoles, helping to quadruple the console's sales that year. Similarly, Coleco had beaten Atari to a key licensing deal with Nintendo to bring Donkey Kong as a pack-in game for the Colecovision, helping to drive its sales. At the same time, Atari has been acquired by Warner Communications, and internal policies led to the departure of four key programmers David Crane, Larry Kaplan, Alan Miller, and Bob Whitehead, who went and formed Activision. Activision proceeded to develop their own Atari 2600 games as well as games for other systems. Atari attempted legal action to stop this practice but ended up settling out of court, with Activision agreeing to pay royalties but otherwise able to continue game development, making Activision the first third-party game developer. Activision quickly found success and were able to generate in revenue from about in startup funds within 18 months. Numerous other companies saw Activision's success and jumped into game development to try to make fast money on the rapidly expanding North America video game market. This led to a loss of publishing control and dilution of the game market by the early 1980s. Additionally, in following on the success of Space Invaders, Atari and other companies had remained eager for licensed video game possibilities. Atari had banked heavily on commercial sales of E.T. the Extra-Terrestrial in 1982, but it was rushed to market and poorly-received, and failed to make Atari's sales estimates. Along with competition from inexpensive home computers, the North American home console market crashed in 1983. For the most part, the 1983 crash signaled the end of this generation as Nintendo's introduction of the Famicom the same year brought the start of the third generation. 
When Nintendo brought the Famicom to North America under the name "Nintendo Entertainment System", it helped to revitalize the industry, and Atari, now owned by Jack Tramiel, pushed on sales of the previously successful Atari 2600 under new branding to keep the company afloat for many more years while he transitioned the company more towards the personal computer market. The Atari 2600 stayed in production until 1992, marking the end of the second generation. Handhelds of the second generation Handheld electronic games had already been introduced on the market, such as Mattel Auto Race in 1977 and Simon in 1978. While not considered video games, as they lacked the typical video screen element and instead used LED lights as game indicators, they still established a market for portable video games. The first handheld game consoles emerged during the second home console generation, using simple liquid crystal displays (LCDs). Early attempts at cartridge-based handheld systems included the Microvision by Milton Bradley and the Epoch Game Pocket Computer, but neither gained significant traction. Nintendo, on the other hand, introduced its line of Game & Watch portable games, each with a single dedicated game, as its first venture into the handheld video game market. First introduced in 1980, the Game & Watch series ran for over a decade and sold more than 40 million units. Third generation (1983–2003) Frequently called the "8-bit generation", the third generation's consoles used 8-bit processors, five audio channels, and more advanced graphics capabilities, including sprites and tiles instead of the block-based graphics of the second generation. Further, the third generation saw market dominance shift from the United States to Japan as a result of the 1983 crash. Both the Sega SG-1000 and the Nintendo Famicom launched nearly simultaneously in Japan in 1983. The Famicom, after some initial technical recalls, soon gained traction and became the best-selling console in Japan by the end of 1984. By that point Nintendo wanted to bring the console to North America but recognized the faults that the video game crash had caused. Nintendo took several steps to redesign the console to make it look less like a game console and rebranded it as the "Nintendo Entertainment System" (NES) for North America to avoid the stigma of the "video game" label. The company also wanted to avoid the loss of publishing control that had occurred both in North America and in Asia after the Famicom's release, and created a lockout system that required all game cartridges to include a special chip manufactured by Nintendo. If this chip was not present, the console would fail to play the game. This further gave Nintendo direct control over the titles published for the system, rejecting those it felt were too mature. The NES launched in North America in 1985 and helped to revitalize the video game market there. Sega attempted to compete with the NES with its own Master System, released later in 1986 in both the US and Japan, but it did not gain enough traction to compete. Similarly, Atari's attempts to compete with the NES via the Atari 7800 in 1987 failed to knock the NES from its dominant position. The NES remained in production until 2003, when it was discontinued along with its successor, the Super Nintendo Entertainment System. Fourth generation (1987–2004) The fourth generation of consoles, also known as the "16-bit generation", further advanced core console technology with 16-bit processors, improving the available graphics and audio capabilities of games.
NEC's TurboGrafx-16 (or PC Engine as released in Japan), first released in 1987, is considered the first fourth generation console even though it still had an 8-bit CPU. The console's 16-bit graphics processor gave it capabilities comparable to the other fourth generation systems, and NEC's marketing pushed the console as a "16-bit" advancement over the NES. Both Sega and Nintendo entered the fourth generation with true 16-bit systems in the 1988 Sega Genesis (Mega Drive in Japan) and the 1990 Super Nintendo Entertainment System (SNES, Super Famicom in Japan). SNK also entered the competition with the Neo Geo, released in 1990, a modified version of its Neo Geo MVS arcade system that attempted to bridge the gap between arcade and home console systems through the shared use of common game cartridges and memory cards. This generation was notable for the so-called "console wars" between Nintendo and Sega, primarily in North America. Sega, to try to challenge Nintendo's dominant position, created the mascot character Sonic the Hedgehog, who exhibited a cool personality to appeal to Western youth in contrast to Nintendo's Mario, and bundled the Genesis with the game of the same name. The strategy succeeded, with Sega becoming the dominant player in North America until the mid-1990s. During this generation, the cost of optical disc technology in the form of CD-ROMs had dropped sufficiently to make it desirable for shipping computer software, including video games for personal computers. CD-ROMs offered more storage space than game cartridges and could allow for full-motion video and other detailed audio-video works to be used in games. Console manufacturers adapted by creating hardware add-ons for their consoles that could read and play CD-ROMs, including NEC's TurboGrafx-CD add-on (as well as the integrated TurboDuo system) in 1988, the Sega CD add-on for the Genesis in 1991, and the Neo Geo CD in 1994. Costs of these add-ons were generally high, nearing the same price as the console itself, and with the introduction of disc-based consoles in the fifth generation starting in 1993, these fell by the wayside. Nintendo had initially worked with Sony to develop a similar add-on for the SNES, the Super NES CD-ROM, but just before its introduction, business relationships between Nintendo and Sony broke down, and Sony would take the idea forward to develop the fifth generation PlayStation. Additionally, Philips attempted to enter the market with a dedicated CD-ROM format, the CD-i, also released in 1990, which included other uses for the CD-ROM media beyond video games, but the console never gained traction. The fourth generation had a long tail that overlapped with the fifth generation, with the SNES's discontinuation in 2003 marking the end of the generation. To keep their console competitive with the new fifth generation ones, Nintendo turned to the use of coprocessors built into the game cartridges to enhance the capabilities of the SNES. This included the Super FX chip, which was first used in the game Star Fox in 1993, generally considered one of the first games to use real-time polygon-based 3D rendering on consoles. Handhelds of the fourth generation Nintendo brought its experience from the Game & Watch series to develop the Game Boy system in 1989, with subsequent iterations through the years.
The unit included an LCD screen that supported a 4-shade monochrome pixel display, the use of a cartridge-based system, and the means to link up two units to play head-to-head games. One of the early packages included Tetris bundled with the unit, which became the Game Boy's best-selling game and led the unit to dominate handheld sales at the time. The Game Boy also introduced the Kirby franchise worldwide, which became a staple of Nintendo's handheld consoles. The Atari Lynx was also introduced in 1989 and included a color LCD screen, but its small game library and short battery life left it unable to compete with the Game Boy. Both Sega and NEC also attempted to compete with the Game Boy with the Game Gear and the TurboExpress, respectively, both released in 1990. Each was an attempt to bring the respective company's home console games to a handheld system, but both struggled against the staying power of the Game Boy. Fifth generation (1993–2006) During this time home computers gained greater prominence as a way of playing video games. The video game console industry nonetheless continued to thrive alongside home computers, due to the advantages of much lower prices, easier portability, circuitry specifically dedicated towards video games, the ability to be played on a television set (which PCs of the time could not do in most cases), and intensive first-party software support from manufacturers who were essentially banking their entire future on their consoles. Besides the shift to 32-bit processors, the fifth generation of consoles also saw most companies other than Nintendo shift to dedicated optical media formats instead of game cartridges, given their lower cost of production and higher storage capacity. Initial consoles of the fifth generation attempted to capitalize on the potential power of CD-ROMs; these included the 3DO and the Atari Jaguar in 1993. However, early in the cycle, these systems were far more expensive than existing fourth-generation models and had much smaller game libraries. Further, Nintendo's use of coprocessors in late SNES games kept the SNES one of the best-selling systems even against the new fifth generation ones. Two of the key consoles of the fifth generation were introduced in 1995: the Sega Saturn and the Sony PlayStation, both of which challenged the SNES's ongoing dominance. While the Saturn sold well, it had a number of technical flaws, but it established several key Sega game series going forward. The PlayStation, in addition to using optical media, also introduced the use of memory cards to save the state of a game. Though memory cards had been used by the Neo Geo to allow players to transfer game information between home and arcade systems, the PlayStation's approach allowed games to have much longer gameplay and narrative elements, leading to highly successful role-playing games like Final Fantasy VII. By 1996, the PlayStation had become the generation's best-selling console. Nintendo released their next console, the Nintendo 64, in late 1996. Unlike other fifth generation units, it still used game cartridges, as Nintendo believed the load-time advantages of cartridges over CD-ROMs were still essential, as well as their ability to continue to use lockout mechanisms to protect copyrights. The system included support for memory cards as well, and Nintendo developed a strong library of first-party titles for the system, including Wave Race 64 and The Legend of Zelda: Ocarina of Time, that helped to drive its sales.
While the Nintendo 64 did not match the PlayStation's sales, it kept Nintendo a key competitor in the home console market alongside Sony and Sega. As with the transition from the fourth to the fifth generation, the fifth generation had a long overlap with the sixth console generation, with the PlayStation remaining in production until 2006. Handhelds of the fifth generation Nintendo released the Virtual Boy, an early attempt at virtual reality, in 1995. The unit required the player to play a game through a stereoscopic viewfinder, which was awkward and difficult to use and did not lend itself well to portable gaming. Nintendo instead returned to focus on incremental improvements to the Game Boy, including the Game Boy Pocket and the Game Boy Color. Sega also released the Genesis Nomad, a handheld unit that played Sega Genesis games, in 1995 in North America only. The unit had been developed through Sega of America with little oversight from Sega's main headquarters, and as Sega moved forward, the company as a whole decided to put more focus on the Sega Saturn to stay competitive and drop support for all other ongoing systems, including the Nomad. Despite Nintendo's domination of the handheld console market, some competing consoles, such as the Neo Geo Pocket, WonderSwan, Neo Geo Pocket Color, and WonderSwan Color, appeared in the late 1990s and were discontinued several years after their appearance. Sixth generation (1998–2013) By the sixth generation, console technology had begun to catch up to the performance of personal computers of the time, and the use of bits as a selling point fell by the wayside. Console manufacturers instead focused their marketing on the individual strengths of their game libraries. The consoles of the sixth generation saw further adoption of optical media, expanding into the DVD format for even greater data storage capacity, additional internal storage solutions to function as memory cards, as well as support, either directly or through add-ons, for connecting to the Internet for online gameplay. Consoles began to move towards a convergence of features with other electronic living room devices and away from single-feature systems. By this point, there were only three major players in the market: Sega, Sony, and Nintendo. Sega got an early lead with the Dreamcast, first released in Japan in 1998. It was the first home console to include a built-in modem, allowing players to connect to the Sega network and play online games. However, Sega found several technical issues that had to be resolved before its Western launch in 1999. Though its Western release was more successful than in Japan, the console was soon outperformed by Sony's PlayStation 2, released in 2000. The PlayStation 2 was the first console to add support for DVD playback in addition to CD-ROM, as well as maintaining backward compatibility with games from the PlayStation library, which helped to draw consumers who remained on the long tail of the PlayStation. While other consoles of the sixth generation had not anticipated this step, the PlayStation 2's introduction of backward compatibility became a major design consideration of future generations. Along with a strong game library, the PlayStation 2 went on to sell 155 million units before it was discontinued in 2013, and it remains the best-selling home console of all time. Unable to compete with Sony, Sega discontinued the Dreamcast in 2001 and left the hardware market, instead focusing on its software properties.
Nintendo's entry in the sixth generation was the GameCube in 2001, its first system to use optical discs based on the miniDVD format. A special Game Boy Player attachment allowed the GameCube to play Game Boy cartridges as well, and adapters were available to allow the console to connect to the Internet via broadband or modem. At this point Microsoft also entered the console market with its first Xbox system, released in 2001. Microsoft considered the PlayStation 2's success a threat to the personal computer in the living room space, and had developed the Xbox to compete. As such, the Xbox was designed based more on Microsoft's experience with personal computers, using an operating system built from its Microsoft Windows and DirectX technologies, utilizing a hard disk for saved-game storage and built-in Ethernet functionality, and introducing the Xbox Live online service to support multiplayer games. While the original Xbox had modest sales compared to the PlayStation 2 and was not profitable for the company, Microsoft considered the Xbox to have successfully demonstrated its ability to participate in the console market. Handhelds of the sixth generation Nintendo continued to refine its Game Boy design with the Game Boy Advance in 2001, including the Game Boy Advance SP in 2003 and the Game Boy Micro in 2005, all with the ability to link to the GameCube to extend the functionality of certain games. Also introduced were the Neo Geo Pocket Color in 1998 and Bandai's WonderSwan Color, launched in Japan in 1999. South Korean company Game Park introduced its GP32 handheld in 2001, and with it came the dawn of open source handheld consoles. During the sixth generation, a new type of market for gaming came from the growing mobile phone arena, where advanced smartphones and other portable devices could be loaded with games. Nokia's N-Gage was one of the first devices marketed as both a mobile phone and a game system, first released in 2003 and later redesigned as the N-Gage QD. Seventh generation (2005–2017) Video game consoles had become an important part of the global IT infrastructure by the mid-2000s. It was estimated that video game consoles represented 25% of the world's general-purpose computational power in the year 2007. By the seventh generation, Sony, Microsoft, and Nintendo had all developed consoles designed to interface with the Internet, adding networking support for wired or wireless connections, online services to support multiplayer games, digital storefronts for digital purchases of games, and both internal storage and support for external storage on the console for these games. With the transition to the HD era, these consoles also added support for high-definition television resolutions through HDMI interfaces, but as the generation occurred in the midst of the high-definition optical disc format war between Blu-ray and HD-DVD, a standard for high-definition playback had yet to be settled. A further innovation came in the use of motion controllers, either built into the console or offered as an add-on afterwards. Consoles in this generation started using custom CPUs based on the PowerPC instruction set and increasingly shared similarities with the personal computer in game development, although porting between the differing architectures remained a challenge. Microsoft entered the seventh generation first with the Xbox 360 in 2005.
The Xbox 360 saw several hardware revisions over its lifetime, which became a standard practice for Microsoft going forward; these revisions offered different features, such as a larger internal hard drive or a faster processor, at a higher price point. As shipped, the Xbox 360 supported DVD discs, and Microsoft had opted to support the HD-DVD format with an add-on for playback of HD-DVD films. However, this format ultimately lost out to Blu-ray. The Xbox 360 was backward compatible with about half of the original Xbox library. Through its lifetime, the Xbox 360 was troubled by a consistent hardware fault known as "the Red Ring of Death" (RROD), and Microsoft spent over $1 billion correcting the problem. Sony's PlayStation 3 was released in 2006. The PlayStation 3 represented a shift of the internal hardware from Sony's custom Emotion Engine to a PowerPC-based system. Initial PlayStation 3 units shipped with a special Emotion Engine daughterboard that allowed for backward compatibility with PlayStation 2 games, but later revisions of the unit removed this, leaving only software-based emulation of original PlayStation games available. Sony banked on the Blu-ray format, which was included from the start, and partially helped spur the adoption of Blu-ray as the favored format for high-definition optical media. With the PlayStation 3, Sony introduced the PlayStation Network for its online services and storefront. While the system initially had a slow start in the market, due in part to its high price, complex game development environment, and initial lack of quality games, the PlayStation 3 eventually became more well received over time following gradual price cuts, improved marketing campaigns, new hardware revisions (particularly the Slim models), and key critically acclaimed exclusives. Nintendo introduced the Wii in 2006, around the same time as the PlayStation 3. Nintendo lacked the same manufacturing capabilities and relationships with major hardware suppliers as Sony and Microsoft, and to compete, it diverged from a feature-for-feature approach and instead developed the Wii around the novel use of motion controls in the Wii Remote. This "blue ocean strategy", releasing a product where there was no competition, was considered part of the unit's success, and it drove Microsoft and Sony to develop their own motion control accessories to compete. Nintendo provided various online services that the Wii could connect to, including the Virtual Console, where players could purchase emulated games from Nintendo's past consoles as well as games for the Wii. The Wii used regular-sized DVDs for its game medium but also directly supported GameCube discs. The Wii was generally considered a surprising success that many developers had initially overlooked. The seventh generation concluded with the discontinuation of the PlayStation 3 in 2017. Handhelds of the seventh generation Nintendo introduced the new Nintendo DS system in 2004, a cartridge-based unit that supported two screens, one of them touch-sensitive. The DS also included built-in wireless connectivity to the Internet for online play, as well as the ability for units to connect to each other or to a Wii system in an ad hoc manner for certain multiplayer titles. Sony entered the handheld market in 2004 with the PlayStation Portable (PSP), with a reduced design based on its PlayStation home consoles.
Like the DS, the PSP also supported wireless connectivity to the Internet to download new games, and ad hoc connectivity to other PSP units or to a PlayStation 3. The PSP used a new format called Universal Media Disc (UMD) for games and other media. Nokia revived its N-Gage platform in the form of a service for selected S60 devices. This new service launched on April 3, 2008. Other less-popular handheld systems released during this generation include the Gizmondo (launched on March 19, 2005, and discontinued in February 2006) and the GP2X (launched on November 10, 2005, and discontinued in August 2008). The GP2X Wiz, Pandora, and Gizmondo 2 were scheduled for release in 2009. Another aspect of the seventh generation was the beginning of direct competition between dedicated handheld video game devices and increasingly powerful PDA/cell phone devices such as the iPhone and iPod Touch, with the latter being aggressively marketed for gaming purposes. Simple games such as Tetris and Solitaire had existed for PDA devices since their introduction, but by 2009 PDAs and phones had grown sufficiently powerful that complex graphical games could be implemented, with the advantage of distribution over wireless broadband. Apple launched its App Store in 2008, which allowed developers to publish and sell games for iPhones and similar devices, beginning the rise of mobile gaming. Other seventh generation hardware Based on the success of the Wii Remote controller, both Microsoft and Sony released similar motion detection controllers for their consoles. Microsoft introduced the Kinect motion controller device for the Xbox 360, which served as a camera, microphone, and motion sensor for numerous games. Sony released the PlayStation Move, a system consisting of a camera and lit handheld controllers, which worked with its PlayStation 3. Eighth generation (2012–2020) Aside from the usual hardware enhancements, consoles of the eighth generation focused on further integration with other media and increased connectivity. Consoles at this point had also standardized on CPUs using the x86 instruction set, the same as in personal computers, and there was a convergence of the individual hardware components between consoles and personal computers, making the porting of games between these systems much easier. Later hardware improvements pushed for higher frame rates at up to 4K resolutions. Digital distribution increased in popularity, while the addition of and improvements to remote play capabilities became standard, and second screen experiences via companion apps added more interactivity to games. The Wii U, introduced in 2012, was considered by Nintendo to be a successor to the Wii but geared toward more serious players. The console supported backward compatibility with the Wii, including its motion controls, and introduced the Wii U GamePad, a tablet/controller hybrid that acted as a second screen. Nintendo further refined its network offerings to develop the Nintendo Network service, combining storefront and online connectivity services. The Wii U did not sell as well as Nintendo had planned, as the company found that people mistook the GamePad for a standalone tablet they could take with them away from the console, and, like the Wii before it, the system struggled to draw third-party developers. Both the PlayStation 4 and Xbox One came out in 2013. Both were similar improvements over the previous generation's respective consoles, providing more computational power to support up to 60 frames per second at 1080p resolutions for some games.
Each unit also saw a similar set of revisions and repackaging to develop high- and low-end cost versions. In the case of the Xbox One, the console's initial launch had included the Kinect device, but this became highly controversial in terms of potential privacy violations and lack of developer support, and by its mid-generation refresh, the Kinect had been dropped and discontinued as a game device. Both consoles eventually released upgraded hardware during their mid-cycle refresh, with Sony releasing the PlayStation 4 Pro and Microsoft releasing the Xbox One X, which allowed for higher frame rates and up to 4K resolution, in addition to Slim models; this marked a departure from previous generations and added considerable longevity to this generation's cycle. Later in the eighth generation, Nintendo released the Nintendo Switch in 2017. The Switch is considered the first hybrid game console. It uses a special CPU/GPU combination that can run at different clock frequencies depending on how it is used. It can be placed into a special docking unit that is hooked to a television and a permanent power supply, allowing faster clock frequencies to be used so that games can be played at higher resolutions and frame rates, making it more comparable to a home console. Alternatively, it can be removed and used either with the attached Joy-Con controllers as a handheld unit, or even played as a tablet-like system via its touchscreen. In these modes, the CPU/GPU run at lower clock speeds to conserve battery power, and the graphics are not as robust as in the docked version. A larger suite of online services was offered through the Nintendo Switch Online subscription, including a library of NES and SNES titles, replacing the past Virtual Console system. The Switch was designed to address many of the hardware and marketing faults around the Wii U's launch, and has become one of the company's fastest-selling consoles after the Wii. Game systems in the eighth generation also faced increasing competition from mobile device platforms such as Apple's iOS and Google's Android operating systems. Smartphone ownership was estimated to reach roughly a quarter of the world's population by the end of 2014. The proliferation of low-cost games for these devices, such as Angry Birds with over 2 billion downloads worldwide, presented a new challenge to classic video game systems. Microconsoles, cheaper stand-alone devices designed to play games from previously established platforms, also increased options for consumers. Many of these projects were spurred on by the use of new crowdfunding techniques through sites such as Kickstarter. Notable competitors included the Android-based GamePop, OUYA, and GameStick systems, the PlayStation TV, the Nvidia Shield, the Apple TV, and Steam Machines. Handhelds of the eighth generation The Nintendo 3DS, released in 2011, expanded on the Nintendo DS design and added an autostereoscopic screen that could project stereoscopic 3D effects without the use of 3D glasses. The console otherwise remained backward compatible with all of the DS titles. Sony introduced its PlayStation Vita in 2011, a revised version of the PSP that eliminated its optical media, focused on digital acquisition of games, and incorporated a touchscreen. It was released in Europe and North America on February 22, 2012. As noted above, the Nintendo Switch is a hybrid console, capable of being used both as a home console in its docked mode and as a handheld.
The Nintendo Switch Lite revision, released in 2019, reduced the system's size and some of its features, including eliminating the ability to dock the unit; this made the Switch Lite primarily a handheld system, but otherwise compatible with most of the Switch's library of games. Other eighth generation hardware Virtual reality systems appeared during the eighth generation, with three main systems: the PlayStation VR headset, which worked with PlayStation 4 hardware, and the Oculus Rift and the HTC Vive, which ran off a personal computer. Ninth generation (2020–present) Both Microsoft and Sony released successors to their home consoles in November 2020. Consoles in this generation also launched with lower-cost models lacking optical disc drives, targeting those who would prefer to purchase games exclusively through digital downloads. Both console families target 4K and 8K resolution televisions at high frame rates and add support for real-time ray tracing rendering, 3D spatial audio, variable refresh rates, and the use of high-performance solid-state drives (SSDs) as internal high-speed storage to make delivering game content much faster than reading from optical discs or standard hard drives, which can eliminate loading times and support in-game streaming. With features that were commonly standard in PCs, and the move to higher performance APUs, consoles in the ninth generation now have capabilities comparable to high-end personal computers, often making cross-platform development easier and more widely available than previously, further converging and blurring the line between video game consoles and personal computers. Microsoft released the fourth generation of Xbox with the Xbox Series X and Series S on November 10, 2020. The Series X has a base performance target of 60 frames per second at 4K resolution and is intended to be four times as powerful as the Xbox One X. One of Microsoft's goals with both units was to assure backward compatibility with all games supported by the Xbox One, including those original Xbox and Xbox 360 titles that are backward compatible with the Xbox One, allowing the Xbox Series X and Series S to support four generations of games. Sony's PlayStation 5 was released on November 12, 2020, and offers a similar performance boost over the PlayStation 4. The PlayStation 5 uses a custom SSD solution with much higher input/output rates that are almost comparable to RAM chip speeds, significantly improving rendering and data streaming speeds. The chip architecture is comparable to the PlayStation 4's, allowing backward compatibility with most of the PlayStation 4 library, while select games need chip timing tweaks to make them compatible. In terms of handhelds, Sony has announced no further plans after discontinuing the Vita, while Nintendo continues to offer the Nintendo Switch and Switch Lite. The handheld market still competes with the growing mobile gaming market, but developers have taken advantage of new opportunities in cross-platform play support, in part due to the popularity of Fortnite in 2018, to make games that are compatible across consoles, computers, and mobile devices. Cross-platform play is now widely used in various games. Cloud gaming is also seen as a potential replacement for handheld gaming.
While earlier cloud gaming platforms have gone by the wayside, newer approaches including PlayStation Now, Xbox Cloud Gaming, Google's Stadia (discontinued in 2023), and Amazon Luna can deliver computer- and console-quality gameplay to nearly any platform, including mobile devices, limited only by bandwidth quality. Console sales Below is a timeline of each generation with the top three home video game consoles of each generation based on worldwide sales. Notes References History of video games
History of video game consoles
[ "Technology" ]
9,415
[ "History of video games", "History of computing" ]
982,578
https://en.wikipedia.org/wiki/Tin%20soldier
Tin soldiers are miniature toy soldiers that are very popular in the world of collecting. They can be bought finished or in a raw state to be hand-painted. They are generally made of pewter, tin, lead, other metals or plastic. Often very elaborate scale models of battle scenes, known as dioramas, are created for their display. Tin soldiers were originally almost two-dimensional figures, often called "little Eilerts" or "flats". They were the first toy soldiers to be mass-produced. Though largely superseded in popularity from the late 19th century by fully rounded three-dimensional lead figures, these flat tin soldiers continue to be produced. History The first mass-produced tin soldiers were made in Germany as a tribute to Frederick the Great during the 18th century. Johann Gottfried Hilpert (1748–1832) and his brother Johann Georg Hilpert (1733–1811) established an early assembly line in 1775 for soldiers and other figures; female painters applied a single color to each figurine as it was passed around the workshop. Hilpert, living in Nuremberg, was probably the first to create them as a mass-produced toy. The world's largest Tin Soldier is located in New Westminster, British Columbia, Canada. Casting "Real" tin soldiers, i.e., ones cast from an alloy of tin and lead, can also be home-made. Moulds are available for sale in some hobby shops. Earlier, the moulds were made of metal; currently, they are often made of hard rubber, which can withstand the temperature of the molten metal. Popular culture The best-known tin soldier in literature is the unnamed title character in Hans Christian Andersen's 1838 fairy tale The Steadfast Tin Soldier. It concerns a tin soldier who had only one leg because "he had been left to the last, and then there was not enough of the melted tin to finish him." He falls in love with a dancer made of paper and, after much adventuring, including being swallowed by a fish, the two are consumed together by fire, leaving nothing but tin melted "in the shape of a little tin heart." Tin soldiers also play a role in "The Nutcracker Suite" as well as "Knight's Castle" by Edward Eager. See also Army men Militaria Model figure Toy soldier The Parade of the Tin Soldiers References Scale modeling Soldiers Militaria Toy figurines Metal toys German inventions
Tin soldier
[ "Physics" ]
499
[ "Scale modeling" ]
982,771
https://en.wikipedia.org/wiki/Double%20coset
In group theory, a field of mathematics, a double coset is a collection of group elements which are equivalent under the symmetries coming from two subgroups, generalizing the notion of a single coset. Definition Let be a group, and let and be subgroups. Let act on by left multiplication and let act on by right multiplication. For each in , the -double coset of is the set When , this is called the -double coset of . Equivalently, is the equivalence class of under the equivalence relation if and only if there exist in and in such that . The set of all -double cosets is denoted by Properties Suppose that is a group with subgroups and acting by left and right multiplication, respectively. The -double cosets of may be equivalently described as orbits for the product group acting on by . Many of the basic properties of double cosets follow immediately from the fact that they are orbits. However, because is a group and and are subgroups acting by multiplication, double cosets are more structured than orbits of arbitrary group actions, and they have additional properties that are false for more general actions. Two double cosets and are either disjoint or identical. is the disjoint union of its double cosets. There is a one-to-one correspondence between the two double coset spaces and given by identifying with . If , then . If , then . A double coset is a union of right cosets of and left cosets of ; specifically, The set of -double cosets is in bijection with the orbits , and also with the orbits under the mappings and respectively. If is normal, then is a group, and the right action of on this group factors through the right action of . It follows that . Similarly, if is normal, then . If is a normal subgroup of , then the -double cosets are in one-to-one correspondence with the left (and right) -cosets. Consider as the union of a -orbit of right -cosets. The stabilizer of the right -coset with respect to the right action of is . Similarly, the stabilizer of the left -coset with respect to the left action of is . It follows that the number of right cosets of contained in is the index and the number of left cosets of contained in is the index . Therefore If , , and are finite, then it also follows that Fix in , and let denote the double stabilizer }. Then the double stabilizer is a subgroup of . Because is a group, for each in there is precisely one in such that , namely ; however, may not be in . Similarly, for each in there is precisely one in such that , but may not be in . The double stabilizer therefore has the descriptions (Orbit–stabilizer theorem) There is a bijection between and under which corresponds to . It follows that if , , and are finite, then (Cauchy–Frobenius lemma) Let denote the elements fixed by the action of . Then In particular, if , , and are finite, then the number of double cosets equals the average number of points fixed per pair of group elements. There is an equivalent description of double cosets in terms of single cosets. Let and both act by right multiplication on . Then acts by left multiplication on the product of coset spaces . The orbits of this action are in one-to-one correspondence with . This correspondence identifies with the double coset . Briefly, this is because every -orbit admits representatives of the form , and the representative is determined only up to left multiplication by an element of . Similarly, acts by right multiplication on , and the orbits of this action are in one-to-one correspondence with the double cosets . 
Conceptually, this identifies the double coset space with the space of relative configurations of an -coset and a -coset. Additionally, this construction generalizes to the case of any number of subgroups. Given subgroups , the space of -multicosets is the set of -orbits of . The analog of Lagrange's theorem for double cosets is false. This means that the size of a double coset need not divide the order of . For example, let be the symmetric group on three letters, and let and be the cyclic subgroups generated by the transpositions and , respectively. If denotes the identity permutation, then This has four elements, and four does not divide six, the order of . It is also false that different double cosets have the same size. Continuing the same example, which has two elements, not four. However, suppose that is normal. As noted earlier, in this case the double coset space equals the left coset space . Similarly, if is normal, then is the right coset space . Standard results about left and right coset spaces then imply the following facts. for all in . That is, all double cosets have the same cardinality. If is finite, then . In particular, and divide . Examples Let be the symmetric group, considered as permutations of the set }. Consider the subgroup which stabilizes . Then consists of two double cosets. One of these is , and the other is for any permutation which does not fix . This is contrasted with , which has elements , where each . Let be the group , and let be the subgroup of upper triangular matrices. The double coset space is the Bruhat decomposition of . The double cosets are exactly , where ranges over all n-by-n permutation matrices. For instance, if , then Products in the free abelian group on the set of double cosets Suppose that is a group and that , , and are subgroups. Under certain finiteness conditions, there is a product on the free abelian group generated by the - and -double cosets with values in the free abelian group generated by the -double cosets. This means there is a bilinear function Assume for simplicity that is finite. To define the product, reinterpret these free abelian groups in terms of the group algebra of as follows. Every element of has the form where is a set of integers indexed by the elements of . This element may be interpreted as a -valued function on , specifically, . This function may be pulled back along the projection which sends to the double coset . This results in a function . By the way in which this function was constructed, it is left invariant under and right invariant under . The corresponding element of the group algebra is and this element is invariant under left multiplication by and right multiplication by . Conceptually, this element is obtained by replacing by the elements it contains, and the finiteness of ensures that the sum is still finite. Conversely, every element of which is left invariant under and right invariant under is the pullback of a function on . Parallel statements are true for and . When elements of , , and are interpreted as invariant elements of , then the product whose existence was asserted above is precisely the multiplication in . Indeed, it is trivial to check that the product of a left--invariant element and a right--invariant element continues to be left--invariant and right--invariant. The bilinearity of the product follows immediately from the bilinearity of multiplication in . It also follows that if is a fourth subgroup of , then the product of -, -, and -double cosets is associative. 
Because the product in corresponds to convolution of functions on , this product is sometimes called the convolution product. An important special case is when . In this case, the product is a bilinear function This product turns into an associative ring whose identity element is the class of the trivial double coset . In general, this ring is non-commutative. For example, if , then the ring is the group algebra , and a group algebra is a commutative ring if and only if the underlying group is abelian. If is normal, so that the -double cosets are the same as the elements of the quotient group , then the product on is the product in the group algebra . In particular, it is the usual convolution of functions on . In this case, the ring is commutative if and only if is abelian, or equivalently, if and only if contains the commutator subgroup of . If is not normal, then may be commutative even if is non-abelian. A classical example is the product of two Hecke operators. This is the product in the Hecke algebra, which is commutative even though the group is the modular group, which is non-abelian, and the subgroup is an arithmetic subgroup and in particular does not contain the commutator subgroup. Commutativity of the convolution product is closely tied to Gelfand pairs. When the group is a topological group, it is possible to weaken the assumption that the number of left and right cosets in each double coset is finite. The group algebra is replaced by an algebra of functions such as or , and the sums are replaced by integrals. The product still corresponds to convolution. For instance, this happens for the Hecke algebra of a locally compact group. Applications When a group has a transitive group action on a set , computing certain double coset decompositions of reveals extra information about structure of the action of on . Specifically, if is the stabilizer subgroup of some element , then decomposes as exactly two double cosets of if and only if acts transitively on the set of distinct pairs of . See 2-transitive groups for more information about this action. Double cosets are important in connection with representation theory, when a representation of is used to construct an induced representation of , which is then restricted to . The corresponding double coset structure carries information about how the resulting representation decomposes. In the case of finite groups, this is Mackey's decomposition theorem. They are also important in functional analysis, where in some important cases functions left-invariant and right-invariant by a subgroup can form a commutative ring under convolution: see Gelfand pair. In geometry, a Clifford–Klein form is a double coset space , where is a reductive Lie group, is a closed subgroup, and is a discrete subgroup (of ) that acts properly discontinuously on the homogeneous space . In number theory, the Hecke algebra corresponding to a congruence subgroup of the modular group is spanned by elements of the double coset space ; the algebra structure is that acquired from the multiplication of double cosets described above. Of particular importance are the Hecke operators corresponding to the double cosets or , where (these have different properties depending on whether and are coprime or not), and the diamond operators given by the double cosets where and we require (the choice of does not affect the answer). References Group theory
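The symmetric-group example given earlier in this article can be checked by brute force. The short sketch below is not from the source: it assumes the two subgroups are those generated by the transpositions exchanging the first and second letters and the first and third letters (written here as permutations of {0, 1, 2}), matching the example's description, and it confirms that one double coset has four elements while the other has two.

```python
from itertools import permutations

def compose(p, q):
    """Return the permutation p∘q (apply q first, then p)."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))   # the symmetric group S3 acting on {0, 1, 2}
e = (0, 1, 2)                      # identity permutation
H = [e, (1, 0, 2)]                 # subgroup generated by the transposition swapping 0 and 1
K = [e, (2, 1, 0)]                 # subgroup generated by the transposition swapping 0 and 2

def double_coset(x):
    """The (H, K)-double coset of x: all products h·x·k with h in H and k in K."""
    return frozenset(compose(h, compose(x, k)) for h in H for k in K)

cosets = {double_coset(x) for x in G}
print(len(cosets))                      # 2: the double cosets partition the 6 elements of S3
print(sorted(len(c) for c in cosets))   # [2, 4]: the sizes differ, and 4 does not divide |G| = 6
```

Representing permutations as tuples keeps the composition rule explicit, and the output illustrates the two failures of Lagrange-type behavior discussed above: double cosets can have different sizes, and their sizes need not divide the order of the group.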
Double coset
[ "Mathematics" ]
2,295
[ "Group theory", "Fields of abstract algebra" ]
982,822
https://en.wikipedia.org/wiki/Coagulase
Coagulase is a protein enzyme produced by several microorganisms that enables the conversion of fibrinogen to fibrin. In the laboratory, it is used to distinguish between different types of Staphylococcus isolates. Importantly, S. aureus is generally coagulase-positive, meaning that a positive coagulase test would indicate the presence of S. aureus or any of the other 11 coagulase-positive Staphylococci. A negative coagulase test would instead show the presence of coagulase-negative organisms such as S. epidermidis or S. saprophyticus. However, it is now known that not all S. aureus are coagulase-positive. Whereas coagulase-positive staphylococci are usually pathogenic, coagulase-negative staphylococci are more often associated with opportunistic infection. It is also produced by Yersinia pestis. Coagulase reacts with prothrombin in the blood. The resulting complex is called staphylothrombin, which enables the enzyme to act as a protease to convert fibrinogen, a plasma protein produced by the liver, to fibrin. This results in clotting of the blood. Coagulase is tightly bound to the surface of the bacterium S. aureus and can coat its surface with fibrin upon contact with blood. The fibrin clot may protect the bacterium from phagocytosis and isolate it from other defenses of the host. The fibrin coat can therefore make the bacteria more virulent. Bound coagulase is part of the larger family of MSCRAMM adhesin proteins. Coagulase test The coagulase test has traditionally been used to differentiate Staphylococcus aureus from coagulase-negative staphylococci. S. aureus produces two forms of coagulase (i.e., bound coagulase and free coagulase). Bound coagulase, otherwise known as "clumping factor", can be detected by carrying out a slide coagulase test, and free coagulase can be detected using a tube coagulase test. Slide test A slide coagulase test is run with a negative control to rule out autoagglutination. Two drops of saline are put onto a slide labeled with the sample number, test (T) and control (C). The two saline drops are emulsified with the test organism using a wire loop, straight wire, or wooden stick. A drop of plasma (rabbit plasma anticoagulated with EDTA is recommended) is placed on the inoculated saline drop corresponding to the test, and mixed well, then the slide is rocked gently for about 10 seconds. If 'positive', macroscopic clumping will be observed in the plasma within 10 seconds, with no clumping in the saline drop. If 'negative', no clumping will be observed. If the slide coagulase test is negative, a tube test should follow as confirmation. Clumping in both drops is an indication of autoagglutination, so a tube test should be carried out. The tube test is not performed at every institution; in most cases the result depends on blood cultures from the laboratory. Tube test The tube test uses rabbit plasma that has been inoculated with a staphylococcal colony (i.e., Gram-positive cocci which are catalase positive). The tube is then incubated at 37 °C for 1.5 hours. If negative, then incubation is continued up to 18 hours. If 'positive' (e.g., the suspect colony is S. aureus), the plasma will coagulate, resulting in a clot (sometimes the clot is so pronounced, the liquid will completely solidify). If 'negative', the plasma remains a liquid. The negative result may be S. epidermidis but only a more detailed identification test can confirm this, using biochemical tests as in analytical profile index test methods.
A false negative can be perceived if the sample is not allowed to cool for about 30 minutes at room temperature or 10 minutes in the freezer, given that the clot can melt at incubation temperature; if truly negative, the serum will remain liquid after cooling. List of coagulase-positive staphylococci: Staphylococcus aureus subsp. anaerobius, S. aureus subsp. aureus, S. delphini, S. hyicus, S. intermedius, S. lutrae, and Staphylococcus schleiferi subsp. coagulans. List of coagulase-negative staphylococci of clinical significance: S. saprophyticus, S. cohnii subsp. cohnii, S. cohnii subsp. urealyticum, S. capitis subsp. capitis, S. warneri, S. hominis, S. epidermidis, S. caprae, and S. lugdunensis References External links Tube coagulase test - rabbit plasma video Coagulase test procedure Microbiology EC 3.4.23
Coagulase
[ "Chemistry", "Biology" ]
1,086
[ "Microbiology", "Microscopy" ]
982,970
https://en.wikipedia.org/wiki/Mertens%27%20theorems
In analytic number theory, Mertens' theorems are three 1874 results related to the density of prime numbers proved by Franz Mertens. In the following, let mean all primes not exceeding n. First theorem Mertens' first theorem is that does not exceed 2 in absolute value for any . () Second theorem Mertens' second theorem is where M is the Meissel–Mertens constant (). More precisely, Mertens proves that the expression under the limit does not in absolute value exceed for any . Proof The main step in the proof of Mertens' second theorem is where the last equality needs which follows from . Thus, we have proved that . Since the sum over prime powers with converges, this implies . A partial summation yields . Changes in sign In a paper on the growth rate of the sum-of-divisors function published in 1983, Guy Robin proved that in Mertens' 2nd theorem the difference changes sign infinitely often, and that in Mertens' 3rd theorem the difference changes sign infinitely often. Robin's results are analogous to Littlewood's famous theorem that the difference π(x) − li(x) changes sign infinitely often. No analog of the Skewes number (an upper bound on the first natural number x for which π(x) > li(x)) is known in the case of Mertens' 2nd and 3rd theorems. Relation to the prime number theorem Regarding this asymptotic formula Mertens refers in his paper to "two curious formula of Legendre", the first one being Mertens' second theorem's prototype (and the second one being Mertens' third theorem's prototype: see the very first lines of the paper). He recalls that it is contained in Legendre's third edition of his "Théorie des nombres" (1830; it is in fact already mentioned in the second edition, 1808), and also that a more elaborate version was proved by Chebyshev in 1851. Note that, already in 1737, Euler knew the asymptotic behaviour of this sum. Mertens diplomatically describes his proof as more precise and rigorous. In reality none of the previous proofs are acceptable by modern standards: Euler's computations involve the infinity (and the hyperbolic logarithm of infinity, and the logarithm of the logarithm of infinity!); Legendre's argument is heuristic; and Chebyshev's proof, although perfectly sound, makes use of the Legendre-Gauss conjecture, which was not proved until 1896 and became better known as the prime number theorem. Mertens' proof does not appeal to any unproved hypothesis (in 1874), and only to elementary real analysis. It comes 22 years before the first proof of the prime number theorem which, by contrast, relies on a careful analysis of the behavior of the Riemann zeta function as a function of a complex variable. Mertens' proof is in that respect remarkable. Indeed, with modern notation it yields whereas the prime number theorem (in its simplest form, without error estimate), can be shown to imply In 1909 Edmund Landau, by using the best version of the prime number theorem then at his disposition, proved that holds; in particular the error term is smaller than for any fixed integer k. A simple summation by parts exploiting the strongest form known of the prime number theorem improves this to for some . Similarly a partial summation shows that is implied by the PNT. Third theorem Mertens' third theorem is where γ is the Euler–Mascheroni constant (). 
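The displayed formulas in this article appear to have been stripped during extraction. For reference, the statements below are the standard textbook formulations of the three theorems, with p ranging over the primes and M the Meissel–Mertens constant; they are a reconstruction of the usual forms, not necessarily the exact notation of the original article.

```latex
% Mertens' theorems in their standard formulations
\left|\, \sum_{p \le n} \frac{\ln p}{p} - \ln n \,\right| \le 2
  \quad \text{(first theorem)}
\qquad
\lim_{n \to \infty} \left( \sum_{p \le n} \frac{1}{p} - \ln \ln n - M \right) = 0
  \quad \text{(second theorem)}
\qquad
\lim_{n \to \infty} \, \ln n \prod_{p \le n} \left( 1 - \frac{1}{p} \right) = e^{-\gamma}
  \quad \text{(third theorem)}
```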
Relation to sieve theory An estimate of the probability of () having no factor is given by This is closely related to Mertens' third theorem which gives an asymptotic approximation of References Further reading Yaglom and Yaglom Challenging mathematical problems with elementary solutions Vol 2, problems 171, 173, 174 External links Mathematical series Summability theory Theorems about prime numbers
Mertens' theorems
[ "Mathematics" ]
819
[ "Sequences and series", "Mathematical structures", "Series (mathematics)", "Calculus", "Theorems about prime numbers", "Theorems in number theory" ]
983,002
https://en.wikipedia.org/wiki/Broadcast%20automation
Broadcast automation incorporates the use of broadcast programming technology to automate broadcasting operations. Used either at a broadcast network, radio station or a television station, it can run a facility in the absence of a human operator. They can also run in a live assist mode when there are on-air personnel present at the master control, television studio or control room. The radio transmitter end of the airchain is handled by a separate automatic transmission system (ATS). History Originally, in the US, many (if not most) broadcast licensing authorities required a licensed board operator to run every station at all times, meaning that every DJ had to pass an exam to obtain a license to be on-air, if their duties also required them to ensure proper operation of the transmitter. This was often the case on overnight and weekend shifts when there was no broadcast engineer present, and all of the time for small stations with only a contract engineer on call. In the U.S., it was also necessary to have an operator on duty at all times in case the Emergency Broadcast System (EBS) was used, as this had to be triggered manually. While there has not been a requirement to relay any other warnings, any mandatory messages from the U.S. president would have had to first be authenticated with a code word sealed in a pink envelope sent annually to stations by the Federal Communications Commission (FCC). Gradually, the quality and reliability of electronic equipment improved, regulations were relaxed, and no operator had to be present (or even available) while a station was operating. In the U.S., this came about when the EAS replaced the EBS, starting the movement toward automation to assist, and sometimes take the place of, the live disc jockeys (DJs) and radio personalities. in 1999, The Weather Channel launched Weatherscan Local, a cable television channel that broadcast uninterrupted live local weather information and forecasts. Weatherscan Local became Weatherscan in 2003 but was shut down in 2022. Early analog systems Early automation systems were electromechanical systems which used relays. Later systems were "computerized" only to the point of maintaining a schedule, and were limited to radio rather than TV. Music would be stored on reel-to-reel audio tape. Subaudible tones on the tape marked the end of each song. The computer would simply rotate among the tape players until the computer's internal clock matched that of a scheduled event. When a scheduled event would be encountered, the computer would finish the currently-playing song and then execute the scheduled block of events. These events were usually advertisements, but could also include the station's top-of-hour station identification, news, or a bumper promoting the station or its other shows. At the end of the block, the rotation among tapes resumed. Advertisements, jingles, and the top-of-hour station identification required by law were commonly stored on Fidelipac endless-loop tape cartridges, known colloquially as "carts". These were similar to the consumer four-track tapes sold under the Stereo-Pak brand, but had only two tracks and were usually recorded and played at 7.5 tape inches per second (in/s) compared to Stereo-Pak's slower 3.75 in/s. The carts had a slot for a pinch roller on a spindle which was activated by solenoid upon pressing the start button on the cart machine. Because the capstan was already spinning at full speed, tape playback commenced without delay or any audible "run-up". 
Mechanical carousels would rotate the carts in and out of multiple tape players as dictated by the computer. Time announcements were provided by a pair of dedicated cart players, with the even minutes stored on one and the odd minutes on the other, meaning an announcement would always be ready to play even if the minute was changing when the announcement was triggered. The system did require attention throughout the day to change reels as they ran out and reload carts, and thus became obsolete when a method was developed to automatically rewind and re-cue the reel tapes when they ran out, extending 'walk-away' time indefinitely. Radio station WIRX may have been one of the world's first completely automated radio stations, built and designed by Brian Jeffrey Brown in 1963 when Brown was only 10 years old. The station broadcast in a classical format, called "More Good Music (MGM)" and featured five-minute bottom-of-the-hour news feeds from the Mutual Broadcasting System. The heart of the automation was an 8 x 24 telephone stepping relay which controlled two reel-to-reel tape decks, one twelve inch Ampex machine providing the main program audio and a second RCA seven inch machine providing "fill" music. The tapes played by these machines were originally produced in the Midwest Family Broadcasting (MWF) Madison, Wisconsin production facility by WSJM Chief Engineer Richard E. McLemore (and later in-house at WSJM) with sub-audible tones used to signal the end of a song. The stepping relay was programmed by slide switches in the front of the two relay racks which housed the equipment. The news feeds were triggered by a microswitch which was attached to a Western Union clock and tripped by the minute hand of the clock, then reset the stepping relay. Originally, 30-minute station identification was accomplished by a simulcast switch in the control booth for sister station WSJM, whereupon the disc jockey in the booth would announce "This is WSJM-AM and... (then pressing the momentary contact button) ...WSJM-FM, St. Joseph, Michigan." This only lasted about six months, however, and a standard tape cartridge player was wired in to announce the station identification and triggered by the Western Union clock. A different technology appeared in 1980 with the analog recorders made by Solidyne, which used a computer-controlled tape positioning system. Four GMS 204 units were controlled from a 6809 microprocessor, with the program stored in a solid-state plug-in memory module. This system has a limited programming time of about eight hours. Satellite programming often used audible dual-tone multi-frequency (DTMF) signals to trigger events at affiliate stations. This allowed the automatic local insertion of ads and station IDs. Because there are 12 (or 16) tone pairs, and typically four tones were sent in rapid succession (less than one second), more events could be triggered than by sub-audible tones (usually 25 Hz and 35 Hz). Modern digital systems Modern systems run on hard disk, where all of the music, jingles, advertisements, voice tracks, and other announcements are stored. These audio files may be either compressed or uncompressed, or often with only minimal compression as a compromise between file size and quality. For radio software, these disks are usually in computers, sometimes running their own custom operating systems, but more often running as an application on a PC operating system. Scheduling was an important advance of these systems, allowing for exact timing. 
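The rotate-until-a-scheduled-event behavior described above — cycle through music sources, then break for an ad block, station ID, or news at a scheduled time — can be sketched in a few lines of code. This is a toy illustration only, with hypothetical source and cart names; it is not based on any actual automation product.

```python
import itertools
import time
from datetime import datetime

# Hypothetical content sources standing in for tape decks, cart machines, or audio files.
music_sources = itertools.cycle(["reel_A", "reel_B", "reel_C"])

# Scheduled event blocks keyed by (hour, minute): ads, station ID, news, and similar.
schedule = {
    (12, 0): ["legal_id_cart", "ad_cart_1", "jingle_cart"],
    (12, 30): ["news_feed", "ad_cart_2"],
}

def play(item):
    """Stand-in for starting a deck or audio file and waiting for it to finish."""
    print(f"{datetime.now():%H:%M:%S}  playing {item}")
    time.sleep(1)

while True:
    play(next(music_sources))                       # rotate among the music sources
    now = datetime.now()
    block = schedule.pop((now.hour, now.minute), None)
    if block:                                       # a scheduled block has come due:
        for item in block:                          # run it after the current song ends,
            play(item)                              # then resume the music rotation
```

Early systems implemented essentially this logic in relays and stepping switches; modern playout software adds accurate clock sources, audio file storage, and live-assist overrides, but the rotate-and-interrupt loop described in this article remains the same basic idea.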
Some systems use GPS satellite receivers to obtain exact atomic time, for perfect synchronization with satellite-delivered programming. Reasonably accurate timekeeping can also be obtained over the Internet using the Network Time Protocol (NTP). Automation systems are also more interactive than ever, integrating with digital audio workstations (DAWs) and console automation, and can even record from a telephone hybrid so that an edited conversation with a telephone caller can be played back later. This is part of a system's live-assist mode.

The use of automation software and voice tracks to replace live DJs is a current trend in radio broadcasting, adopted by many Internet radio and adult hits stations. Stations can even be voice-tracked from a distant city, with the sound files now often delivered over the Internet. In the U.S. this practice is common but controversial, criticized for making radio sound more generic and artificial. Local content is also touted as a way for traditional stations to compete with satellite radio, where there may be no radio personality on the air at all.

A commercial product named Audicom was introduced by Oscar Bonello in 1989. It was based on psychoacoustic lossy compression, the same principle used in most modern lossy audio encoders such as MP3 and Advanced Audio Coding (AAC), and it allowed both broadcast automation and recording to hard drives.

Television
In television, playout automation is also becoming more practical as the storage space of hard drives increases. Television shows and television commercials, as well as digital on-screen graphics (DOG or BUG), can all be stored on video servers remotely controlled by computers using the 9-Pin Protocol and the Video Disk Control Protocol (VDCP). These systems can be very extensive, tied in with components that allow the "ingest" (as it is called in the industry) of video from satellite networks and electronic news gathering (ENG) operations, and management of the video library, including archival of footage for later use. In ATSC, the Programming Metadata Communication Protocol (PMCP) is then used to pass information about the video through the airchain to the Program and System Information Protocol (PSIP), which transmits the current electronic program guide (EPG) information over digital television to the viewer.

See also
Audicom
Centralcasting
Community radio
Emergency Alert System
Fidelipac
Local insertion
Playout
Radio software
Station identification

References

Broadcast engineering
Broadcasting
Television terminology
Video storage
Broadcast automation
[ "Engineering" ]
1,934
[ "Broadcast engineering", "Electronic engineering" ]
983,022
https://en.wikipedia.org/wiki/Gliese%2065
Gliese 65, also known as Luyten 726-8, is a binary star system that is one of Earth's nearest stellar neighbors, lying in the constellation Cetus. The two component stars are both flare stars with the variable star designations BL Ceti and UV Ceti.

Star system
The star system was discovered in 1948 by Willem Jacob Luyten in the course of compiling a catalog of stars of high proper motion; he noted its exceptionally high proper motion of 3.37 arc seconds annually and cataloged it as Luyten 726-8. The two stars are of nearly equal brightness, with visual magnitudes of 12.7 and 13.2 as seen from Earth. They orbit one another every 26.5 years, and their separation varies considerably over the course of each orbit.

The Gliese 65 system lies in the constellation Cetus and is the seventh-closest star system to Earth. Its own nearest neighbor is Tau Ceti. Given its measured radial velocity, approximately 28,700 years ago Gliese 65 was at its minimal distance of 2.21 pc (7.2 ly) from the Sun.

Gliese 65 A was later found to be a variable star and given the variable star designation BL Ceti. It is a red dwarf of spectral type M5.5V. It is also a flare star, classified as a UV Ceti-type variable, but it is not nearly as remarkable or extreme in its behavior as its companion star UV Ceti.

Soon after the discovery of Gliese 65 A, the companion star Gliese 65 B was discovered. Like Gliese 65 A, this star was also found to be variable and given the variable star designation UV Ceti. Although UV Ceti was not the first flare star discovered, it is the most prominent example of such a star, so similar flare stars are now classified as UV Ceti-type variable stars. This star goes through fairly extreme changes of brightness: for instance, in 1952, its brightness increased by 75 times in only 20 seconds. UV Ceti is a red dwarf of spectral type M6V. Both stars are listed as spectral standard stars for their respective classes, being considered typical examples of those classes.

In approximately 31,500 years, Gliese 65 will have a close encounter with Epsilon Eridani, at a minimal distance of about 0.93 ly. At that distance Gliese 65 could penetrate a conjectured Oort cloud around Epsilon Eridani and gravitationally perturb some of its long-period comets. The period during which the two star systems remain within 1 ly of each other is about 4,600 years. Gliese 65 is a possible member of the Hyades Stream.

Candidate planet
In 2024, a candidate super-Neptune-mass planet was detected in the Gliese 65 system via astrometry with the Very Large Telescope's GRAVITY instrument. If it exists, it would orbit one of the two stars (it is unclear which) with a period of 156 days. The planet's properties change slightly depending on which star it orbits, but in general its mass is estimated to be in the super-Neptune range and its semi-major axis is about 30% of an astronomical unit. It is estimated to be about seven times the size of Earth based on mass-radius relationships.

Notes

References

Further reading

External links
http://www.aavso.org/vstar/vsots/fall03.shtml
http://www.solstation.com/stars/luy726-8.htm

M-type main-sequence stars
BY Draconis variables
Ceti, UV
Binary stars
Hyades Stream
Local Bubble
Cetus
0065
Ceti, BL
UV
Hypothetical planetary systems
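The size estimate quoted above comes from an empirical mass-radius relationship. The sketch below is a hypothetical illustration of how such an estimate is made with a simple power law; the exponent and the example masses are placeholder assumptions chosen for illustration, not values taken from the discovery paper.

```python
def radius_from_mass(mass_earth_masses, exponent=0.55):
    """Approximate radius in Earth radii for a volatile-rich (Neptune-like)
    planet, assuming a power law R/R_Earth ~ (M/M_Earth) ** exponent.
    The exponent is a placeholder in the range commonly quoted for such
    relations, not a value from any specific study of this system."""
    return mass_earth_masses ** exponent


if __name__ == "__main__":
    # A planet of a few tens of Earth masses comes out at several Earth
    # radii under a relation of this form.
    for mass in (20, 30, 40):
        print(f"{mass:>3} Earth masses -> ~{radius_from_mass(mass):.1f} Earth radii")
```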
Gliese 65
[ "Astronomy" ]
775
[ "Cetus", "Constellations" ]