Dataset schema: id (int64, ranging 580 to 79M), url (string, lengths 31 to 175), text (string, lengths 9 to 245k), source (string, lengths 1 to 109), categories (string, 160 classes), token_count (int64, ranging 3 to 51.8k).
7,528,305
https://en.wikipedia.org/wiki/Farrow%20%26%20Ball
Farrow & Ball is a British manufacturer of paints and wallpapers largely based upon historic colour palettes and archives. The company is particularly well known for the unusual names of its products. History The company was started by John Farrow and Richard Maurice Ball in 1946 in Wimborne Minster, Dorset. Both Farrow and Ball had previously been chemists. In 2010, the company became the first manufacturer to switch to the production of 100 per cent water-based paints. Products Paint Farrow & Ball maintains an updated colour card of 132 colours, plus 12 new paint colours and 3 new wallpaper patterns created with Christopher John Rogers. The company has worked with the National Trust in formulating near-exact matches of colours used in the restoration of the interiors and exteriors of historic buildings. Wallpaper Farrow & Ball produces wallpaper patterns made using traditional block, trough and roller methods with the company's own paint. Books Farrow & Ball has produced several books; the British National Bibliography contains the following records: Paint and Colour in Decoration (2003) Farrow & Ball The Art of Colour (2007) Farrow & Ball Living with Colour (2010) Farrow & Ball Decorating with Colour (2013) Farrow & Ball How to Decorate (2016) Farrow & Ball Recipes for Decorating (2019) Showrooms and stockists The company has 63 showrooms across the UK, US, Canada and Europe, as well as a global network of stockists carrying both paint and wallpaper. In popular culture Farrow & Ball has been lampooned on NBC's Saturday Night Live in the US for its expense and preparation requirements. In 2021, Channel 5 broadcast a one-off documentary about Farrow & Ball entitled Farrow & Ball: Inside the Posh Paint Factory. Corporate information Ownership In 2006, American Capital subsidiary European Capital Limited purchased Farrow & Ball for approximately £80 million by way of a management buyout. Until its sale to European Capital Limited, Farrow & Ball had remained a family business. In 2014, Ares Management bought Farrow & Ball from European Capital Limited for £275 million. In October 2020, Bloomberg reported that Ares Management was considering a potential sale of Farrow & Ball. In May 2021, the Financial Times reported that Danish coatings manufacturer Hempel had agreed to purchase Farrow & Ball from Ares Management for approximately £500 million. The sale was expected to complete in the second half of 2021. On 26 August 2021, the Competition and Markets Authority (CMA) completed the phase one investigation it launched on 9 July 2021 and cleared the merger. Following clearance by the CMA, the sale of Farrow & Ball by Ares Management to Hempel completed on 3 September 2021. Financial information References Bibliography Friedman, Joseph. Paint and Color in Decoration. New York: Rizzoli, 2003. External links Color space Paint and coatings companies of the United Kingdom Color Historic preservation Interior design English brands Chemical companies established in 1946 British companies established in 1946 1946 establishments in England Companies based in Dorset
Farrow & Ball
Mathematics
609
4,164,558
https://en.wikipedia.org/wiki/Power%20symbol
A power symbol is a symbol indicating that a control activates or deactivates a particular device. Such a control may be a rocker switch, a toggle switch, a push-button, a virtual switch on a display screen, or some other user interface. The internationally standardized symbols are intended to communicate their function in a language-independent manner. Description The well-known on/off power symbol was the result of evolution in user interface design. Originally, most early power controls consisted of switches that were toggled between two states demarcated by the words On and Off. As technology became more ubiquitous, these English words were replaced with the symbols line "|" for "on" and circle "◯" for "off" (typically without serifs) to bypass language barriers. This standard is still used on toggle power switches, sometimes in the format "I/O". The symbol for the standby button was created by superimposing the symbols "|" and "◯". It is commonly interpreted as the numerals "0" and "1" (binary code), but the International Electrotechnical Commission (IEC) defines these symbols as a graphical representation of a line and a circle. Standby symbol ambiguity Because the exact meaning of the standby symbol on a given device may be unclear until the control is tried, it has been proposed that a separate sleep symbol, a crescent moon, instead be used to indicate a low power state. Proponents include the California Energy Commission and the Institute of Electrical and Electronics Engineers. Under this proposal, the older standby symbol would be redefined as a generic "power" indication, in cases where the difference between it and the other power symbols would not present a safety concern. This alternative symbolism was published as IEEE standard 1621 on December 8, 2004. Standards Universal power symbols are described in the International Electrotechnical Commission (IEC) 60417 standard, Graphical symbols for use on equipment, appearing in the 1973 edition of the document (as IEC 417) and informally used earlier. Unicode Because of widespread use of the power symbol, a campaign was launched by Terence Eden to add the set of characters to Unicode. In February 2015, the proposal was accepted by Unicode and the characters were included in Unicode 9.0. The characters are in the "Miscellaneous Technical" block, with code points U+23FB-U+23FE, with the exception of the power-off circle (U+2B58), which belongs to the "Miscellaneous Symbols and Arrows" block. In popular culture The standby symbol, frequently seen on personal computers, is a popular icon among technology enthusiasts. It is often found emblazoned on fashion items including t-shirts and cuff-links. It has also been used in corporate logos, such as for Gateway, Inc. (circa 2002), Staples, Inc. easytech, Exelon, Toggl and others, as record sleeve art (Garbage's "Push It") and even as personal tattoos. In March 2010, the New York City health department announced they would be using it on condom wrappers. The 2012 television series Revolution, set in a dystopian future in which "the power went out", as the opening narration puts it, stylized the second letter 'o' of its title as the standby symbol. The power symbol was part of an exhibition at MoMA. In the anime Dimension W, Kyouma Mabuchi wears a happi with the power symbol on his back. In the television series Sense8, the hacktivist character Nomi has a tattoo of the power symbol behind her ear. 
The symbol, rotated clockwise by 90 degrees so it looks like a capital G, becomes part of the logo for Channel 5's programme The Gadget Show. On 15 October 2019, 786 employees of Volkswagen Group United Kingdom Limited formed the world's largest human power symbol at Millbrook Proving Ground. See also List of international common standards Reset button References External links IEC/ISO Database on Graphical Symbols for Use on Equipment IEC Graphical Symbols for Use on Equipment ISO/IEC/JTC1 Graphical Symbols for Office Equipment, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory IEC standards IEEE standards Pictograms
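The Unicode assignments described above can be checked directly; below is a minimal Python sketch that prints the five characters (the inclusion of U+2B58 as the "Miscellaneous Symbols and Arrows" member is an assumption based on the block exception mentioned in the text):

```python
import unicodedata

# Power symbols added in Unicode 9.0: U+23FB-U+23FE ("Miscellaneous Technical"),
# plus U+2B58 in "Miscellaneous Symbols and Arrows" (assumed here to be the
# power-off circle the text refers to).
for cp in (0x23FB, 0x23FC, 0x23FD, 0x23FE, 0x2B58):
    ch = chr(cp)
    print(f"U+{cp:04X}  {ch}  {unicodedata.name(ch, '<unnamed>')}")
```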
Power symbol
Mathematics,Technology
853
602,016
https://en.wikipedia.org/wiki/TreadMarks
TreadMarks is a distributed shared memory system created at Rice University in the 1990s. References External links TreadMarks official site Distributed computing architecture
TreadMarks
Technology,Engineering
28
12,903
https://en.wikipedia.org/wiki/Gegenschein
Gegenschein, or counterglow, is a faintly bright spot in the night sky centered at the antisolar point. This optical phenomenon is caused by the backscatter of sunlight by interplanetary dust and forms part of the zodiacal light band. Explanation Like zodiacal light, gegenschein is sunlight scattered by interplanetary dust. Most of this dust orbits the Sun near the ecliptic plane, with a possible concentration of particles centered at the L2 point of the Earth–Sun system. Gegenschein is distinguished from zodiacal light by its high angle of reflection of the incident sunlight on the dust particles. It forms a slightly brighter elliptical spot, 8–10° across, directly opposite the Sun within the dimmer band of the zodiacal light. The intensity of the gegenschein is relatively enhanced because each dust particle is seen at full phase; it has a difficult-to-measure apparent magnitude of +5 to +6 and a very low surface brightness in the +10 to +12 magnitude range. History It is commonly stated that the gegenschein was first described by the French Jesuit astronomer and professor Esprit Pézenas (1692–1776) in 1730. Further observations were supposedly made by the German explorer Alexander von Humboldt during his South American journey from 1799 to 1803. It was Humboldt who first used the German term Gegenschein. However, research conducted in 2021 by Texas State University astronomer and professor Donald Olson discovered that the Danish astronomer Theodor Brorsen was actually the first person to observe and describe the gegenschein, in 1854, although Brorsen had thought that Pézenas had observed it first. Olson believes what Pézenas actually observed was an auroral event, as he described the phenomenon as having a red glow; Olson found many other reports of auroral activity from around Europe and Asia on the same date Pézenas made his observation. Humboldt's report instead described glowing triangular patches on both the western and eastern horizons shortly after sunset, while true gegenschein is most visible near local midnight when it is highest in the sky. Brorsen published the first thorough investigations of the gegenschein in 1854. T. W. Backhouse discovered it independently in 1876, as did Edward Emerson Barnard in 1882. In modern times, the gegenschein is not visible in most inhabited regions of the world due to light pollution. See also Antisolar point Earth's shadow Heiligenschein Interplanetary dust cloud Kordylewski cloud Opposition surge, the apparent brightening of a coarse surface or an aggregate of many particles when illuminated from directly behind the observer Sylvanshine References External links Gegenschein page on EarthSky.org Photos of gegenschein on SwissEduc.ch taken from Stromboli volcano "Zodiacal Light and the Gegenschein", an essay by J. E. Littleton Observational astronomy Optical phenomena German words and phrases
Gegenschein
Physics,Astronomy
606
58,229,069
https://en.wikipedia.org/wiki/United%20Nations%20Framework%20Classification%20for%20Resources
United Nations Framework Classification for Resources (UNFC) is an international scheme for the classification, management and reporting of energy, mineral, and raw material resources. The United Nations Economic Commission for Europe's (UNECE) Expert Group on Resource Management (EGRM) is responsible for the promotion and further development of UNFC. Development Natural resources such as minerals and petroleum have historically been classified and managed using differing schemes. In 1997, UNECE published the United Nations Framework Classification for Reserves and Resources of Solid Fuels and Mineral Commodities (UNFC-1997) as a unifying international system for classifying solid minerals and fuels. In 2004, the Classification was revised to include petroleum (oil and natural gas) and uranium and renamed the UNFC for Fossil Energy and Mineral Resources 2004 (UNFC-2004). In 2009, a simplified United Nations Framework Classification for Fossil Energy and Mineral Reserves and Resources 2009 (UNFC-2009) was published. In response to the application of UNFC being extended to renewable energy, injection projects for geological storage and anthropogenic resources, the name was changed in 2017 to the United Nations Framework Classification for Resources (UNFC). An updated version of UNFC, with improved terminology, was released in 2019. Application The UNFC system is used for: Policy formulation in energy and raw material studies National resources management functions Corporate business processes Financial reporting UNFC currently applies to minerals, petroleum, renewable energy, nuclear fuel resources, injection projects for geological storage, and anthropogenic resources. Application of UNFC to groundwater resources is being evaluated. Implementation UNFC has been adopted as the basis of national resource classification in many countries including China, India, Mexico, Poland and Ukraine. The African Union Commission has developed a UNFC-based African Mineral and Energy Resources Classification and Management System (AMREC) as a unifying system for Africa. AMREC includes a Pan African Resource Reporting Code (PARC 2023). The European Commission uses UNFC to classify and report Europe's raw material resources and has mandated its use in the Critical Raw Materials Act. References See also United Nations Resource Management System External links UNFC on UNECE website Minerals Natural resource management Petroleum Renewable energy Resource economics Resource extraction
United Nations Framework Classification for Resources
Chemistry
446
39,014,574
https://en.wikipedia.org/wiki/SN%20UDS10Wil
SN UDS10Wil (SN Wilson) is a Type Ia supernova and, as of April 2013, the farthest one known. It has a redshift of 1.914, which implies that it exploded when the universe was about a third of its current size. It was discovered with the Hubble Space Telescope's Wide Field Camera 3. The nickname SN Wilson honours the American president Woodrow Wilson. See also List of most distant supernovae List of the most distant astronomical objects References Supernovae 20130402 Cetus
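The "third of its current size" figure follows directly from the relation between redshift and the cosmic scale factor:

$$\frac{a_{\text{emit}}}{a_{\text{now}}} = \frac{1}{1+z} = \frac{1}{1+1.914} \approx 0.34 \approx \frac{1}{3}.$$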
SN UDS10Wil
Chemistry,Astronomy
116
2,302,400
https://en.wikipedia.org/wiki/Asepsis
Asepsis is the state of being free from disease-causing micro-organisms (such as pathogenic bacteria, viruses, pathogenic fungi, and parasites). There are two categories of asepsis: medical and surgical. The modern-day notion of asepsis is derived from the older antiseptic techniques, a shift initiated by different individuals in the 19th century who introduced practices such as the sterilizing of surgical tools and the wearing of surgical gloves during operations. The goal of asepsis is to eliminate infection, not to achieve sterility. Ideally, a surgical field is sterile, meaning it is free of all biological contaminants (e.g. fungi, bacteria, viruses), not just those that can cause disease, putrefaction, or fermentation. Even in an aseptic state, a condition of sterile inflammation may develop. The term often refers to those practices used to promote or induce asepsis in an operative field of surgery or medicine to prevent infection. History The modern concept of asepsis evolved in the 19th century through multiple individuals. As early as 1847–1848, Ignaz Semmelweis showed that hand washing prior to delivery reduced puerperal fever. Despite this, many hospitals continued to practice surgery in unsanitary conditions, with some surgeons taking pride in their bloodstained operating gowns. The situation only started to change a decade later, when some French surgeons began to adopt carbolic acid as an antiseptic, reducing surgical infection rates, followed by their Italian colleagues in the 1860s. In 1867 Joseph Lister explained this reduction by Louis Pasteur's germ theory and popularized the disinfectant in the English-speaking world. The movement then shifted from antisepsis to asepsis in the 1870s, with the key findings published in 1879. Gustav Adolf Neuber introduced sterile gowns and capes in 1883, and in 1891, Ernst von Bergmann introduced the autoclave, a device used for the practice of the sterilization of surgical instruments. Rubber gloves were pioneered by William Halsted, who also implemented a no-street-clothes policy in his operating room, opting to wear a completely white, sterile uniform consisting of a duck suit, tennis shoes, and skullcap. This helped to prevent the introduction of infections into open wounds. Additionally, Halsted would sterilize the operation site with disinfectants and use drapes to cover all areas except for the site. In his department at Johns Hopkins Hospital, he enforced an extreme hand washing ritual consisting of soaking in harmfully strong chemicals like permanganate and mercury bichloride solution as well as scrubbing with stiff brushes. The damage to a surgical nurse's hands compelled him to create the earliest form of surgical gloves with the Goodyear Rubber Company. These gloves became a part of the aseptic surgery standard when Dr. Joseph Colt Bloodgood and several others began wearing them for that particular purpose. Antisepsis vs. asepsis The line between antisepsis and asepsis is interpreted differently, depending on context and time. In the past, antiseptic operations occurred in people's homes or in operating theaters before a large crowd. Procedures for implementing antisepsis varied among physicians and experienced constant changes. Until the late 19th century, many physicians rejected the connection between antiseptic techniques and Louis Pasteur's germ theory, which held that bacteria caused disease. 
At the end of the 19th century, Joseph Lister and his followers expanded the term "antisepsis" and coined "asepsis", with the justification that Lister had initially "suggested excluding septic agents from the wound from the start." Generally, however, asepsis is seen as a continuation of antisepsis since many of the values are the same, such as a "germ-free environment around the wound or patient", and techniques pioneered under both names are used in conjunction today. Method Asepsis refers to any procedure that is performed under sterile conditions. This includes medical and laboratory techniques (such as with bacterial cultures). There are two types of asepsis: medical and surgical. Medical or clean asepsis reduces the number of organisms and prevents their spread; surgical or sterile asepsis includes procedures to eliminate micro-organisms from an area and is practiced by surgical technologists and nurses. Ultimately, though, successful use of aseptic technique depends on a combination of preparatory actions. For example, sterile equipment and fluids are used during invasive medical and nursing procedures. The largest manifestation of such aseptic techniques is in hospital operating theaters, where the aim is to keep patients free from hospital micro-organisms. While all members of the surgical team should demonstrate good aseptic technique, it is the role of the scrub nurse or surgical technologist to set up and maintain the sterile field. To prevent cross-contamination of patients, instruments are sterilized through autoclaving or by using disposable equipment; suture material or xenografts also need to be sterilized beforehand. Basic aseptic procedures include hand washing, donning protective gloves, masks and gowns, and sterilizing equipment and linens. Medical aseptic techniques also include curbing the spread of infectious diseases through quarantine, specifically isolation procedures based on the mode of disease transmission. Within contact, droplet and airborne isolation methods, two different procedures emerge: strict isolation vs. reverse isolation. Strict isolation quarantines patients to prevent them from infecting others, while reverse isolation prevents vulnerable patients from becoming infected. Related infections In aseptic conditions, a "chronic low-level inflammation" known as sterile inflammation may develop as a result of trauma, stress, or environmental factors. As in infections caused by pathogens or microbes, the immune response is regulated by host receptors. Tissue damage from non-infectious causes is driven by DAMP molecules released after injury or cell death, which are able to stimulate an inflammatory response. Diseases associated with sterile inflammation include Alzheimer's disease, atherosclerosis, as well as cancer tumor growth due to "immune cell infiltration." Additionally, aseptic tissue damage may arise from corticosteroid injections, which are drugs used to treat musculoskeletal conditions such as carpal tunnel and osteoarthritis, though this tends to result from improper aseptic technique. Despite efforts to preserve asepsis during surgery, there still persists a 1–3% chance of a surgical site infection (SSI). Infections are categorized as superficial incisional, deep incisional, or organ; the first type is confined to the skin, the second to muscles and nearby tissues, and the third to organs not anatomically close to the operation site. 
The exact modes of infection depend on the type of surgery, but the most common bacteria responsible for SSIs are Staphylococcus aureus, coagulase-negative staphylococci, Escherichia coli, and Enterococcus spp. The CDC emphasizes the importance of both antiseptic and aseptic approaches in avoiding SSIs, especially since Staphylococcus aureus, among other bacteria, can evolve drug-resistant strains that are difficult to treat. In 2017, nearly 20,000 patients in the United States died from Staphylococcus aureus infections, compared with 16,350 deaths from diagnosed HIV. See also Antiseptic Barrier nursing Body substance isolation Cleanliness Contamination control Disinfectant (measurements of effectiveness) Ignaz Semmelweis Sterilization (microbiology) Transmission-based precautions References Surgery Antiseptics Medical hygiene Microbiology techniques Sterilization (microbiology)
Asepsis
Chemistry,Biology
1,577
1,376,474
https://en.wikipedia.org/wiki/Timing%20failure
Timing failure is a failure of a process, or part of a process, in a synchronous distributed system or real-time system to meet limits set on execution time, message delivery, clock drift rate, or clock skew. Asynchronous distributed systems cannot be said to have timing failures as guarantees are not provided for response times. References Distributed computing problems Real-time computing
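As a concrete illustration, a synchronous component can detect its own timing failure by checking a measured completion time against the agreed bound; the following minimal Python sketch assumes an illustrative 200 ms deadline, not taken from any particular system model:

```python
import time

DEADLINE_S = 0.200  # illustrative bound on execution/message-delivery time

def call_with_deadline(fn, *args):
    """Run fn and raise if it exceeds the deadline.

    In a synchronous system the bound is part of the system model, so
    exceeding it is a timing failure even if a correct result arrives.
    """
    start = time.monotonic()
    result = fn(*args)
    elapsed = time.monotonic() - start
    if elapsed > DEADLINE_S:
        raise TimeoutError(f"timing failure: {elapsed:.3f}s > {DEADLINE_S}s")
    return result
```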
Timing failure
Mathematics,Technology
81
200,550
https://en.wikipedia.org/wiki/Baire%20space
In mathematics, a topological space is said to be a Baire space if countable unions of closed sets with empty interior also have empty interior. According to the Baire category theorem, compact Hausdorff spaces and complete metric spaces are examples of Baire spaces. The Baire category theorem combined with the properties of Baire spaces has numerous applications in topology, geometry, and analysis, in particular functional analysis. For more motivation and applications, see the article Baire category theorem. The current article focuses more on characterizations and basic properties of Baire spaces per se. Bourbaki introduced the term "Baire space" in honor of René Baire, who investigated the Baire category theorem in the context of Euclidean space in his 1899 thesis. Definition The definition that follows is based on the notions of meagre (or first category) set (namely, a set that is a countable union of sets whose closure has empty interior) and nonmeagre (or second category) set (namely, a set that is not meagre). See the corresponding article for details. A topological space is called a Baire space if it satisfies any of the following equivalent conditions: Every countable intersection of dense open sets is dense. Every countable union of closed sets with empty interior has empty interior. Every meagre set has empty interior. Every nonempty open set is nonmeagre. Every comeagre set is dense. Whenever a countable union of closed sets has an interior point, at least one of the closed sets has an interior point. The equivalence between these definitions is based on the associated properties of complementary subsets of the space $X$ (that is, of a set $S$ and of its complement $X \setminus S$). Baire category theorem The Baire category theorem gives sufficient conditions for a topological space to be a Baire space. (BCT1) Every complete pseudometric space is a Baire space. In particular, every completely metrizable topological space is a Baire space. (BCT2) Every locally compact regular space is a Baire space. In particular, every locally compact Hausdorff space is a Baire space. BCT1 shows that the following are Baire spaces: The space of real numbers. The space of irrational numbers, which is homeomorphic to the Baire space of set theory. Every Polish space. BCT2 shows that the following are Baire spaces: Every compact Hausdorff space; for example, the Cantor set (or Cantor space). Every manifold, even if it is not paracompact (hence not metrizable), like the long line. Note, however, that there are plenty of spaces that are Baire spaces without satisfying the conditions of the Baire category theorem, as shown in the Examples section below. Properties Every nonempty Baire space is nonmeagre. In terms of countable intersections of dense open sets, being a Baire space is equivalent to such intersections being dense, while being a nonmeagre space is equivalent to the weaker condition that such intersections are nonempty. Every open subspace of a Baire space is a Baire space. Every dense Gδ set in a Baire space is a Baire space. The result need not hold if the Gδ set is not dense. See the Examples section. Every comeagre set in a Baire space is a Baire space. A subset of a Baire space is comeagre if and only if it contains a dense Gδ set. A closed subspace of a Baire space need not be Baire. See the Examples section. If a space contains a dense subspace that is Baire, it is also a Baire space. 
A space that is locally Baire, in the sense that each point has a neighborhood that is a Baire space, is a Baire space. Every topological sum of Baire spaces is Baire. The product of two Baire spaces is not necessarily Baire. An arbitrary product of complete metric spaces is Baire. Every locally compact sober space is a Baire space. Every finite topological space is a Baire space (because a finite space has only finitely many open sets and the intersection of two open dense sets is an open dense set). A topological vector space is a Baire space if and only if it is nonmeagre, which happens if and only if every closed balanced absorbing subset has non-empty interior. Let $(f_n)$ be a sequence of continuous functions on a topological space $X$ with pointwise limit $f$. If $X$ is a Baire space, then the set of points where $f$ is not continuous is meagre in $X$, and the set of points where $f$ is continuous is dense in $X$. A special case of this is the uniform boundedness principle. Examples The empty space is a Baire space. It is the only space that is both Baire and meagre. The space of real numbers with the usual topology is a Baire space. The space of rational numbers (with the topology induced from $\mathbb{R}$) is not a Baire space, since it is meagre. The space of irrational numbers (with the topology induced from $\mathbb{R}$) is a Baire space, since it is comeagre in $\mathbb{R}$. The space (with the topology induced from $\mathbb{R}$) is nonmeagre, but not Baire. There are several ways to see it is not Baire: for example because the subset is comeagre but not dense; or because the nonempty subset is open and meagre. Similarly, the space is not Baire. It is nonmeagre since it has an isolated point. The following are examples of Baire spaces for which the Baire category theorem does not apply, because these spaces are not locally compact and not completely metrizable: The Sorgenfrey line. The Sorgenfrey plane. The Niemytzki plane. The subspace of $\mathbb{R}^2$ consisting of the open upper half plane together with the rationals on the $x$-axis, namely $X = (\mathbb{R} \times (0, \infty)) \cup (\mathbb{Q} \times \{0\})$, is a Baire space, because the open upper half plane is dense in $X$ and completely metrizable, hence Baire. The space $X$ is not locally compact and not completely metrizable. The set $\mathbb{Q} \times \{0\}$ is closed in $X$, but is not a Baire space. Since in a metric space closed sets are Gδ sets, this also shows that in general Gδ sets in a Baire space need not be Baire. Algebraic varieties with the Zariski topology are Baire spaces. An example is the affine space $\mathbb{A}^n$ consisting of the set of $n$-tuples of complex numbers, together with the topology whose closed sets are the vanishing sets of polynomials. See also Notes References External links Encyclopaedia of Mathematics article on Baire space Encyclopaedia of Mathematics article on Baire theorem General topology Functional analysis Properties of topological spaces
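In symbols, the first equivalent condition in the definition above reads:

$$X \text{ is a Baire space} \iff \overline{\bigcap_{n \in \mathbb{N}} U_n} = X \quad \text{for every sequence } (U_n)_{n \in \mathbb{N}} \text{ of dense open subsets of } X.$$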
Baire space
Mathematics
1,380
27,636,179
https://en.wikipedia.org/wiki/Encyclop%C3%A6dia%20Britannica%20Ultimate%20Reference%20Suite
Encyclopædia Britannica Ultimate Reference Suite is an encyclopaedia based on the Encyclopædia Britannica and published by Encyclopædia Britannica, Inc. It was published between 2003 and 2015. Product description The DVD contains over 100,000 articles, an atlas, around 35,000 media files (images, video and audio), and a dictionary and thesaurus based on Merriam-Webster. Awards Encyclopædia Britannica Ultimate Reference Suite received the 2004 Distinguished Achievement Award from the Association of Educational Publishers. Its predecessor, Britannica DVD, received Codie awards in 2000, 2001 and 2002. Linux support There is no official release of Britannica for the Linux operating system; however, a script is provided that can help experienced users run the Encyclopædia Britannica 2004 Ultimate Reference Suite DVD (and other 2004 editions of Britannica) on Linux, with some limitations (for example, the dictionary, Flash/QuickTime presentations, and content update functions do not work, and preferences must be edited manually). This script specifically requires version 1.3.1 of the JRE, but can usually be made to work with newer versions if the version check is commented out. Minimum system requirements The 2012 edition states the minimum system requirements for the software. See also Encyclopædia Britannica Encyclopædia Britannica Online References External links English-language encyclopedias British encyclopedias American encyclopedias Ultimate Reference Suite Educational software for macOS Educational software for Windows 21st-century encyclopedias Multimedia
Encyclopædia Britannica Ultimate Reference Suite
Technology
326
23,720,698
https://en.wikipedia.org/wiki/General%20Electric%20LM500
The General Electric LM500 is an industrial and marine gas turbine produced by GE Aviation. The LM500 is a derivative of the General Electric TF34 aircraft engine. Current versions of the LM500 deliver 6,000 shaft horsepower (4.47 MW) with a thermal efficiency of 31 percent at ISO conditions. It has been used in various applications such as in the Royal Danish Navy's Flyvefisken class patrol vessels, and in fast ferries. Applications Naval Denmark Flyvefisken-class patrol vessel Japan Hayabusa-class patrol boat Izumo-class helicopter destroyer 1-go-class patrol boat South Korea Gumdoksuri-class patrol vessel Commercial TurboJET FoilCat Industrial Pipeline transport Tennessee Gas Pipeline Kinder Morgan ExxonMobil Australia State Energy Commission of Western Australia. AlintaGas Nova - now TC Energy Research Railgun University of Texas - Center for Electromechanics - CEM See also References External links GE LM500 website Aero-derivative engines Marine engines Gas turbines
General Electric LM500
Technology
209
7,047,611
https://en.wikipedia.org/wiki/Clyde%20Arc
The Clyde Arc (known locally as the Squinty Bridge) is a road bridge spanning the River Clyde in Glasgow, Scotland, connecting Finnieston near the SEC Armadillo and SEC with Pacific Quay and Glasgow Science Centre in Govan. Prominent features of the bridge are its innovative curved design and the fact that it crosses the river at an angle. The Arc is the first city centre traffic crossing over the river built since the Kingston Bridge was opened to traffic in 1970. The bridge was named the "Clyde Arc" upon its official opening on 18 September 2006. It had previously been known as the "Finnieston Bridge" or the "Squinty Bridge". Design The Clyde Arc was designed by Halcrow Group and built by BAM Nuttall. Glasgow City Council instigated the project in conjunction with Scottish Enterprise and the Scottish Government. Piling works for the bridge were carried out from a large floating barge on the Clyde, whilst the bridge superstructure was fabricated offsite. The bridge-deck concrete-slab units were cast at an onsite pre-casting yard. Planning permission was granted in 2003 and construction of the bridge began in May 2005. It was structurally completed in April 2006. The bridge project cost an estimated £20.3M and is designed to last 120 years. The bridge consists of a main span flanked by two end spans, and the design of the main span features a steel arch. The supports for the main span are located within the river, with the abutments located behind the existing quay walls. The main span provides navigation clearance for river traffic at mean water level. It was officially opened on 18 September 2006 by Glasgow City Council leader Steven Purcell, although pedestrians were allowed to walk across it the previous two days as part of Glasgow's annual "Doors Open" Weekend. The bridge connects Finnieston Street on the north bank of the river to Govan Road on the southern bank. The bridge takes four lanes of traffic, two of which are dedicated to public transport and two to private and commercial traffic. There are also pedestrian and cycle paths. The new bridge was built to provide better access to Pacific Quay and allow better access to regeneration areas on both banks of the Clyde. The bridge has been designed to cope with a possible light rapid transit system (light railway scheme) or even a tram system. The bridge is the first part of several development projects planned to regenerate Glasgow. The £40M Tradeston Bridge was also completed (a further proposed pedestrian bridge linking Springfield Quay with Lancefield Quay was not). The canting basin and Govan Graving Docks next to Pacific Quay are subject to development along with Tradeston and Laurieston. A derelict area of Dalmarnock was used as the 'athletes' village' for the 2014 Commonwealth Games in Glasgow. Support hanger failure The bridge was closed between 14 January and 28 June 2008 due to the failure of one support hanger and cracks found in a second. On the night of 14 January 2008 the connecting fork on one of the bridge's 14 hangers (supporting cables that transfer the weight of the roadway to the bridge's arch) snapped; Strathclyde Police quickly closed the bridge to traffic. Robert Booth, a spokesman for Glasgow City Council, made a statement on the closure. A detailed inspection on 24 January found a stress fracture in a second support cable stay, like the one which had failed previously. Engineers determined that all of these connectors would have to be replaced; rather than a brief closure the bridge would have to remain closed for six months. 
In addition, traffic on the river below was also halted. In March, Nuttall began installing five temporary saddle frames atop the bridge's arch; these allowed the weight of the bridge to be supported without the hangers, so that the defective fork connectors at the top and bottom of each hanger could be replaced. The bridge reopened on 28 June 2008 with just two of its four lanes in use, having had all the cast steel connectors replaced with milled steel connectors. Once reopened, Glasgow City Council estimated that 6,500 crossings would be made over the bridge every day. New Civil Engineer reported that subcontractor Watson Steel Structures was suing Macalloy, the supplier of the failed connectors, for £1.8 million. Watson alleged that components obtained from Macalloy did not meet British Standards or its own specifications, were inadequately manufactured, and did not tally with test certificates provided by the firm. Macalloy denied the claim and countered that Watson Steel Structures Ltd had only specified a minimum yield stress for the components. See also Hulme Arch Bridge References External links Photographs taken at the opening ceremony Clyde Arc - Clyde Waterfront project details Road bridges in Scotland Bridges in Glasgow Bridges across the River Clyde Through arch bridges in the United Kingdom Pedestrian bridges in Scotland Bridges completed in 2006 Engineering failures Govan 2006 establishments in Scotland
Clyde Arc
Technology,Engineering
986
42,214,895
https://en.wikipedia.org/wiki/Communal%20meal
A communal meal is a meal eaten by a group of people. Also referred to as communal dining, the practice is centered on food and sharing time with the people who come together in order to share the meal and conversation. Communal dining can take place in public establishments like restaurants and college cafeterias, or in private settings such as the home. It often, but not always, serves a social, symbolic and/or ceremonial purpose. For some, the act of eating communally defines humans as compared to other species. Communal meals have long been of interest to both archeologists and anthropologists. Much scholarly work about communal eating has focused on special occasions, but everyday practices of eating together with friends, family or colleagues are also a form of communal eating. Communal eating is closely bound up with commensality (the sociological concept of eating with other people). Communal eating is also bound up with eating and drinking together to cement relations, to establish boundaries and hierarchies, as well as for pleasure. Some examples of communal meals are the Native American potlatch, the Thanksgiving meal, cocktail parties, and company picnics. Meals shared for religious traditions include the Christian Agape feast, Muslim iftar, and Jewish Passover Seder. Some restaurants feature communal meals at large tables where diners are seated next to strangers and are encouraged to interact with neighbors. Communal dining was an important part of ancient Rome's religious traditions, and it is also mentioned in Chinese history. See also Refectory References Eating parties Communal eating Food and drink culture Eating behaviors of humans Restaurant design
Communal meal
Biology
317
9,234,631
https://en.wikipedia.org/wiki/Motorola%20Minitor
The Motorola Minitor is a portable, analog, receive-only voice pager typically carried by emergency services personnel, such as fire, rescue, and EMS members (both volunteer and career), to alert them to emergencies. The Minitor, slightly smaller than a pack of cigarettes, is carried on a person and usually left in selective call mode. When the unit is activated, the pager sounds a tone alert, followed by an announcement from a dispatcher alerting the user of a situation. After activation, the pager remains in monitor mode much like a scanner, and monitors transmissions on that channel until the unit is reset back into selective call mode either manually, or automatically after a set period of time, depending on programming. Purpose and History In the times before modern radio communications, it was difficult for emergency services such as volunteer fire departments to alert their members to an emergency, since the members were not based at the station. The earliest methods of sounding an alarm would typically be by ringing a bell either at the fire station or the local church. As electricity became available, most fire departments used fire sirens or whistles to summon volunteers (many fire departments still use outdoor sirens and horns along with pagers to alert volunteers). Other methods included specialized phones placed inside the volunteer firefighter's home or business, or base radios or scanners. "Plectron" radio receivers were very popular, but were limited to 120 VAC or 12 VDC operation, restricting their use to a house/building or a vehicle mount. There was a great need and desire for a portable radio small enough to be worn by a person and only activated when needed. Motorola answered this call in the 1970s and released the very first Minitor pager. There are six versions of Minitor pagers: the original Minitor, followed by the Minitor II (1992), the Minitor III (1999), the Minitor IV, the Minitor V released in late 2005, and the Minitor VI released in early 2014. The Minitor III, IV, and V used the same basic design, while the original Minitor and Minitor II used their own rectangular proprietary case designs. Similar voice pagers released by Motorola were the Keynote and Director pagers. They were essentially stripped-down versions of the Minitor and never gained widespread use, though the Keynotes were much more common in Europe because they could decode 5/6-tone alert patterns in addition to the two-tone sequential format more popular in the United States. Although the Minitor is primarily used by emergency personnel, other agencies such as utilities and private contractors also use the pager. Unlike conventional alphanumeric pagers and cell phones, Minitors are operated on an RF network that is generally restricted to a particular agency in a given geographical area. The Minitor is the most common voice pager used by emergency services in the United States. However, digital two-way pagers that can display alphanumeric characters can overcome some of the limitations of voice-only pagers, and are now starting to replace Minitor pagers in certain applications. Activation Minitor pagers, depending on the model and application, can operate in the VHF low band, VHF high band, and UHF frequency ranges. They are alerted by using two-tone sequential selective calling, generally following the Motorola Quick Call II standard. 
In other words, the pager will activate when a particular series of audible tones is sent over the frequency (commonly referred to as a "page") that the pager is set to. For example, if a Minitor is programmed on the VHF frequency 155.295 MHz and set to alert on 879 Hz and 358.6 Hz, it will disregard any other tone sequences transmitted on that frequency, only alerting when the proper sequence has been received. The pager may be reset back into its selective call mode by pressing the reset button, or it can be programmed to reset back into selective call mode automatically after a predetermined amount of time, to conserve battery power. Older Minitor pagers (both the Minitor I and Minitor II series) have tone reeds or filters that are tuned to a specific audible tone frequency, and these must physically be replaced if alert tones are changed. For two-tone sequential paging, there are two reeds: the first tone passes through the first reed, and the second tone passes through the second reed, thereby activating the pager. Beginning with the Minitor III series, these physical reeds or filters are no longer necessary, as the pagers feature all solid-state electronics, and various tone sequences can be programmed via computer software. Newer Minitor pagers can scan two channels by selecting that function via a rotary knob on the pager; in this mode, when using a Minitor III or IV, the user will hear all traffic, even without the correct tones being sent. If the activation tones are transmitted in this monitor mode, the pager alerts as normal. Minitor Vs have the option to remain in monitor mode or in selective call mode when scanning two channels. Minitor IIIs and IVs only have the option to remain in monitor mode when scanning two channels. The Minitor's operating range depends on the strength ("wattage") of the paging transmitter. A repeater is often used to improve paging coverage, as it can be located for better range than the dispatch center where the page originates. Weather conditions, low battery, and even atmospheric conditions can affect the Minitor's ability to receive transmissions. In fact, a remote transmitter hundreds or even thousands of miles away, belonging to a separate agency, can unknowingly activate a Minitor (and also block it) if atmospheric conditions let the signal propagate that far. This is commonly known as radio skip. The Minitor is a receive-only unit, not a transceiver, and thus cannot transmit. Features Note: most of the features below refer to Minitor pagers III and up; the original Minitor and Minitor II pagers may not have some of the listed features. Newer generation Minitor pagers can simultaneously scan up to two channels and have multiple activation tones. This can be very helpful if a user belongs to several emergency services, or the emergency service has different alarms for different emergencies. Alert tones - The default and most common alert is a continuous beeping ("beep-beep-beep-beep..."). Other alarms can include a steady high-pitched tone, and the newest Minitor Vs can even play musical tones for general non-emergency announcements. VIBRA-Page - For silent alarm activation, most Minitor pagers can also vibrate without sounding an alarm tone. This is particularly useful in churches, schools, meetings, etc., where a loud noise would be disruptive. This feature is known as "VIBRA-Page". 
Voice Record - Many Minitor pagers can also record up to 8 minutes (depending on the model and options) of voice traffic after the pager activates. Controls - Physical controls (specifically on the Minitor III) include an "A, B, C, D" function knob, a power/volume knob, reset button, voice playback button, external speaker jack, and an amber and red LED. Depending on the model, the selections on the function knob may do different things. Control examples - For example, function A may be selective call mode, while function B is the vibrate function. Function C monitors channel 2. D is the mode that is similar to a scanner. When the pager is turned on, eight short beeps are heard along with flashing of both LEDs. Holding down the reset button in selective call mode will monitor the channel for any transmission on that channel at that time, or pure static, as the squelch is bypassed. Field Programmable - Some models have field-programmable options such as Non-Priority Scan, Alert Duration, Priority Alert, On/Off Duty, Reset Options, and Push-To-Listen. Many Minitor pagers can be hooked up to a computer with a special cable and their options changed. Durability - Unlike older models, the Minitor V is "rainproof", as it meets "Military Standard 810, Procedure 1 for driving rain". Belt Clip - A spring-loaded clip is attached to the back of each Minitor to allow the user to clip the pager onto a pocket or belt. Carrying cases and covers are also made to protect the pager. Charging - Minitor pagers come standard with a charging stand and two rechargeable batteries. Amplified base unit - An optional "Charger/Amplifier" base can be bought. Bigger than the standard charging stand, the "Charger/Amplifier" base not only charges the pager but has an external antenna for increased reception and an amplified audio-out jack to drive a stand-alone speaker, and some models even incorporate a relay to activate external devices along with the pager. Some uses for this relay include turning on lights in a building such as a fire station, or activating an external audio/visual alarm. Accessories - Official Motorola accessories for the Minitor pagers include (beyond some listed above): Desktop Battery Charger, Desktop Battery Charger/Amplifier with Antenna and Relay, Vehicular Charger-Amp with Relay, Earpieces, Extra Loud Lapel Speaker, and Nylon Carrying Case. Disadvantages The audible alarm on the Minitor lasts only for the duration of the second activation tone. If there is bad reception, the pager may only sound a quick beep, and the user may not be alerted properly. This can be changed by editing the codeplug's "Alert Duration" from STD to Fixed; the user can then set the alert duration longer than the second tone. The user must be cautious, however, as setting the alert tone duration too high may cover some voice information. Also, some units may have the volume knob set to control the sound output of the audible alert as well. The user may have the volume turned down to an undetectable level either by accident or by carelessness, thus missing the page. A factory option for "Fixed Alert" (the only option on the earlier Minitor I), however, lets the alert tone override the volume and sound at maximum volume regardless of the volume knob's position. It is possible to program the pager to always vibrate when an alert is received, giving the possibility of either a silent (vibrating) alert or audible and vibrating alerts (the Minitor I and II do not have vibrating capabilities as standard). 
The vibrating motor in the newer (IV and V) Minitor pagers is quite strong in order to be felt in varying conditions, such as when performing heavy work. It is not uncommon for the vibrating motor in a pager, placed in a charger overnight and left in vibrate mode, to "walk" the pager and charger off of a table or nightstand. Minitor pagers are battery-powered and will eventually run down if not charged (a flashing red LED and an audible alarm are used as a warning of low battery power). As the Minitor is portable, its receiver is not as sensitive as set-top or base radios, and it is usually less able to pick up weak or distant signals. See also Selective calling Radio receiver Plectron Dispatching References External links Motorola MINITOR Information on Batlabs Firefighting equipment Pagers
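The two-tone sequential activation described in the Activation section comes down to detecting sustained energy at each programmed audio frequency in turn. Below is a minimal Python sketch of such detection using the Goertzel algorithm; the sample rate, block boundaries, and threshold are illustrative assumptions, and a real Quick Call II decoder would also enforce the required tone durations:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Relative signal power at `freq`, computed with the Goertzel algorithm."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Standard Goertzel magnitude-squared for the final filter state.
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def detect_two_tone(first_block, second_block, rate=8000.0, threshold=1e6):
    """Check consecutive audio blocks for the 879 Hz / 358.6 Hz pair
    used as an example in the text."""
    return (goertzel_power(first_block, rate, 879.0) > threshold and
            goertzel_power(second_block, rate, 358.6) > threshold)
```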
Motorola Minitor
Technology
2,359
2,211,700
https://en.wikipedia.org/wiki/Closed-loop%20authentication
Closed-loop authentication, as applied to computer network communication, refers to a mechanism whereby one party verifies the purported identity of another party by requiring them to supply a copy of a token transmitted to the canonical or trusted point of contact for that identity. It is also sometimes used to refer to a system of mutual authentication whereby two parties authenticate one another by passing back and forth a cryptographically signed nonce, each party demonstrating to the other that they control the secret key used to certify their identity. E-mail Authentication Closed-loop email authentication is useful for verifying that one party controls the e-mail address they claim as their own, as a weak form of identity verification. It is not a strong form of authentication in the face of host- or network-based attacks (where an imposter, Chuck, is able to intercept Bob's email, capture the nonce, and thus masquerade as Bob). Closed-loop email authentication is also used by parties with a shared secret relationship (for example, a website and someone with a password to an account on that website), where one party has lost or forgotten the secret and needs to be reminded. The party still holding the secret sends it to the other party at a trusted point of contact. The most common instance of this usage is the "lost password" feature of many websites, where an untrusted party may request that a copy of an account's password be sent by email, but only to the email address already associated with that account. A problem associated with this variation is the tendency of a naïve or inexperienced user to click on a URL if an email encourages them to do so. Most website authentication systems mitigate this by permitting unauthenticated password reminders or resets only by email to the account holder, but never allowing a user who does not possess a password to log in or specify a new one. In some instances in web authentication, closed-loop authentication is employed before any access is granted to an identified user that would not be granted to an anonymous user. This may be because the nature of the relationship between the user and the website is one that holds some long-term value for one or both parties (enough to justify the increased effort and decreased reliability of the registration process). It is also used in some cases by websites attempting to impede programmatic registration as a prelude to spamming or other abusive activities. Closed-loop authentication (like other types) is an attempt to establish identity. It is not, however, incompatible with anonymity, if combined with a pseudonymity system in which the authenticated party has adequate confidence. See also See :Category:Computer security for a list of all computing and information-security related articles. Information Security Authentication Cryptography References Computer access control
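The token mechanism described above is straightforward to sketch: generate an unguessable nonce, deliver it only to the canonical point of contact, and accept its return as proof of control. The Python below is a minimal illustration; the function names, in-memory store, and 15-minute lifetime are assumptions for the sketch, not part of any standard:

```python
import secrets
import time

PENDING = {}            # token -> (email, expiry); illustrative in-memory store
TOKEN_TTL_S = 15 * 60   # illustrative token lifetime

def start_verification(email, send_email):
    """Send an unguessable token to the claimed address, the trusted contact point."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = (email, time.time() + TOKEN_TTL_S)
    send_email(to=email, body=f"Your verification code: {token}")

def complete_verification(token):
    """Returning the token closes the loop: only someone who can read mail
    sent to that address could have obtained it. Returns the verified
    address, or None if the token is unknown or expired."""
    email, expiry = PENDING.pop(token, (None, 0.0))
    return email if time.time() < expiry else None
```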
Closed-loop authentication
Engineering
562
55,570,413
https://en.wikipedia.org/wiki/Marije%20Vogelzang
Marije Vogelzang is a Dutch "Food Designer" or, to use the more specialized term, an "Eating Designer". She is considered among the pioneers of the field of Food Design, along with names such as Marti Guixe and Francesca Zampollo. In the late 1990s, Marije presented her "White Funeral Meal" project at the Design Academy Eindhoven, at a time when food was not yet considered a material for design. This project was subsequently shown at the Salone del Mobile in Milan and was featured in all kinds of magazines. During her 25 years of professional work in the field of Food Design, Marije has played a prominent role in the development of this field, both in theory and in practice. She describes design as "giving shape to an idea" ("Gestaltung"), focusing on how to use food to design meaningful eating experiences that go beyond food itself. Vogelzang's significant works include Feed Love and Eat Love Budapest. They are multimedia installations that combine elements of sculpture, performance art, and interactive technology, based on the concept of feeding. Life and career In 2000, Vogelzang graduated from Design Academy Eindhoven and started working independently with creative catering. In 2004, Vogelzang met Piet Hekker, who proposed starting a restaurant/Food Design studio called Proef (which means both "taste" and "test" in Dutch) in Rotterdam. Within a year the two were working together, and the collaboration led to the opening of a second Proef in Amsterdam for experimental dinner concepts. In 2008, Vogelzang published her first book, called Eat Love: Food Concepts by Eating-Designer Marije Vogelzang. The same year, she had her first solo exhibition at Axis Gallery in Tokyo. In 2011, instead of focusing on her restaurants, she started to spend more time developing her design practice. This was when she created projects such as Eat Love Budapest, which took the form of an installation/performance and is among her most significant works. This year was when Studio Marije Vogelzang was born. In 2014, she became the head of the "New Food Non Food" department at Design Academy Eindhoven. In 2015, she founded the Dutch Institute of Food and Design as a global platform for designers working with food. In 2019, Vogelzang created "Food and Design Dive", a live online course. Her other courses include "Creative Strategies for Sensitive Pirates", "Advanced Dive", and "Summer School". Bibliography In 2009, Marije published her first book Eat Love: Food Concepts by Eating-Designer Marije Vogelzang. This book, which is among the first books written in the field of food design, includes eight different chapters: Psychology, Culture, Senses, Nature, Action, Science, Technique, and Society. In 2022, she published another book, Lick It: Challenge the Way You Experience Food, describing her experience in the field of food design theory and practice. Significant Projects Eat Love Budapest. A four-day performance in which Roma women fed over 400 visitors while telling their life stories. The performance was held in a white space in which 10 separate spaces were created, each divided into two parts by a white textile: one part is a room where a Hungarian participant sits, unable to see through the textile. In this project Vogelzang considers "feeding" as a universal language and explores the relationship between people and food. Volumes. 
This project focused on the design of eating devices that help eaters perceive their plates as fuller than they actually are, in order to reduce overeating. References External links Dutch designers Academic staff of Design Academy Eindhoven Eating behaviors of humans Living people Year of birth missing (living people)
Marije Vogelzang
Biology
762
31,160,956
https://en.wikipedia.org/wiki/Hydroxylated%20lecithin
Hydroxylated lecithin is chemically modified lecithin. It is made by treating lecithin with hydrogen peroxide and an organic acid such as acetic or lactic acid. In the process, some of the organic acid becomes peroxy acid. The peroxy acid reacts with olefins in the fatty acid side chains creating intermediate epoxides. The epoxides react further with water, organic acid, or peroxy acid, to ultimately form vicinal diols. Because the natural fatty acid olefins have (Z)-configurations, the resulting vicinal diols have anti stereochemical configurations. Fatty acids with hydroxyl groups on their hydrophobic tails are rare in nature. Compare hydroxylated lecithin to castor oil, which has 3 hydroxylated fatty acid chains in it. Hydroxyl groups give these oils unique polar properties that make them useful in a variety of applications, including cosmetics, pharmaceuticals, and foods. Synthesis References Phospholipids
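A schematic of the sequence just described, with R and R′ standing for the chain fragments flanking a double bond (an illustrative summary rather than a balanced equation):

$$\text{R-CH=CH-R}' \xrightarrow{\ \text{RCO}_3\text{H}\ } \text{epoxide} \xrightarrow{\ \text{H}_2\text{O, RCO}_2\text{H, or RCO}_3\text{H}\ } \text{R-CH(OH)-CH(OH)-R}' \quad (\textit{anti}\ \text{vicinal diol})$$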
Hydroxylated lecithin
Chemistry
208
54,044,650
https://en.wikipedia.org/wiki/Chandrasekhar%27s%20variational%20principle
In astrophysics, Chandrasekhar's variational principle provides the stability criterion for a static barotropic star, subjected to radial perturbation, named after the Indian American astrophysicist Subrahmanyan Chandrasekhar. Statement A barotropic star with $p = p(\rho)$ and $dp/d\rho \geq 0$ is stable if the quantity $$\int_V \frac{dp}{d\rho}\,\frac{(\delta\rho)^2}{\rho}\, d\mathbf{x} \;-\; G \int_V \int_V \frac{\delta\rho(\mathbf{x})\,\delta\rho(\mathbf{x}')}{|\mathbf{x}-\mathbf{x}'|}\, d\mathbf{x}\, d\mathbf{x}'$$ is non-negative for all real functions $\delta\rho(\mathbf{x})$ that conserve the total mass of the star, $\int_V \delta\rho(\mathbf{x})\, d\mathbf{x} = 0$, where $\mathbf{x}$ is the coordinate system fixed to the center of the star, $R$ is the radius of the star, $V$ is the volume of the star, $\rho(\mathbf{x})$ is the unperturbed density, $\delta\rho(\mathbf{x})$ is the small perturbed density such that in the perturbed state the total density is $\rho + \delta\rho$, $\Phi(\mathbf{x})$ is the self-gravitating potential from Newton's law of gravity and $G$ is the gravitational constant. References Variational principles Stellar dynamics Astrophysics Fluid dynamics Equations of astronomy
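The following note is a standard consequence of this kind of stability analysis, added for illustration rather than taken from the article's references. For a polytropic equation of state $p = K\rho^\gamma$ one has $$\frac{dp}{d\rho} = \frac{\gamma p}{\rho},$$ so the stabilizing first (pressure) term of the quadratic form above scales linearly with $\gamma$ while the destabilizing self-gravity term does not depend on it; working out the balance reproduces Chandrasekhar's classic result that such a star is dynamically stable against radial perturbations for $\gamma > 4/3$, marginally stable at $\gamma = 4/3$, and unstable for $\gamma < 4/3$.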
Chandrasekhar's variational principle
Physics,Chemistry,Astronomy,Mathematics,Engineering
166
7,481,905
https://en.wikipedia.org/wiki/Methionine%20%28data%20page%29
References Chemical data pages Chemical data pages cleanup
Methionine (data page)
Chemistry
10
961,381
https://en.wikipedia.org/wiki/Arcade%20cabinet
An arcade cabinet, also known as an arcade machine or a coin-op cabinet or coin-op machine, is the housing within which an arcade game's electronic hardware resides. Most cabinets designed since the mid-1980s conform to the Japanese Amusement Machine Manufacturers Association (JAMMA) wiring standard. Some include additional connectors for features not included in the standard. Parts of an arcade cabinet Because arcade cabinets vary according to the games they were built for or contain, they may not possess all of the parts listed below: A display output, on which the game is displayed. They may display either raster or vector graphics, raster being most common. Standard resolution is between 262.5 and 315 vertical lines, depending on the refresh rate (usually between 50 and 60 Hz). Slower refresh rates allow for better vertical resolution (a short worked example appears below). Monitors may be oriented horizontally or vertically, depending on the game. Some games use more than one monitor. Some newer cabinets have monitors that can display high-definition video. An audio output for sound effects and music, usually produced from a sound chip. Printed circuit boards (PCB) or arcade system boards, the actual hardware upon which the game runs. Hidden within the cabinet. Some systems, such as the SNK Neo-Geo MVS, use a mainboard with game carts. Some mainboards may hold multiple game carts as well. A power supply to provide DC power to the arcade system boards and low voltage lighting for the coin slots and lighted buttons. A marquee, a sign above the monitor displaying the game's title. They are often brightly colored and backlit. A bezel, which is the border around the monitor. It may contain instructions or artwork. A control panel, a level surface near the monitor, upon which the game's controls are arranged. Control panels sometimes have playing instructions. Players often pile their coins or tokens on the control panels of upright and cocktail cabinets. Coin slots, coin returns and the coin box, which allow for the exchange of money or tokens. They are usually below the control panel. Very often, translucent red plastic buttons are placed in between the coin return and the coin slot. When they are pressed, a coin or token that has become jammed in the coin mechanism is returned to the player. See coin acceptor. In some arcades, the coin slot is replaced with a card reader that reads data from a game card bought from the arcade operator. The sides of the arcade cabinet are usually decorated with brightly colored stickers or paint, representing the gameplay of their particular game. Types of cabinets There are many types of arcade cabinets, some being custom-made for a particular game; however, the most common are the upright, the cocktail or table, and the sit-down. Upright cabinets Upright cabinets are the most common in North America, with their design heavily influenced by Computer Space and Pong. While the futuristic look of Computer Space's outer fiberglass cabinet did not carry forward, both games did establish the separation of the arcade machine into parts for the cathode-ray tube (CRT) display, the game controllers, and the computer logic. Atari also placed the controls at a height suitable for most adult players to use, but close enough to the console's base to also allow children to play. Further, the cabinets were more compact than traditional electro-mechanical games and did not use flashing lights or other means to attract players. 
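The worked example promised above: the quoted range of 262.5 to 315 vertical lines follows from dividing a fixed horizontal scan rate by the vertical refresh rate. The ~15.75 kHz horizontal rate assumed below is the common NTSC-era standard-resolution value, not a figure stated in this article; the snippet is purely illustrative.

# scan lines per field = horizontal scan rate / vertical refresh rate
H_SCAN_HZ = 15_750        # assumed standard-resolution horizontal rate
for refresh_hz in (50, 60):
    print(refresh_hz, "Hz refresh ->", H_SCAN_HZ / refresh_hz, "lines per field")
# 50 Hz -> 315.0 lines, 60 Hz -> 262.5 lines: slower refresh, more lines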
The side panels of Atari's Pong had a simple wood veneer finish, making it easier to market to non-arcade venues, such as hotels, country clubs, and cocktail bars. In the face of growing competition, Atari started to include cabinet art and attraction panels around 1973–1974, which soon became a standard practice. Arcade cabinets today are usually made of wood and metal, about six feet or two meters tall, with the control panel set perpendicular to the monitor at slightly above waist level. The monitor is housed inside the cabinet, at approximately eye level. The marquee is above it, and often overhangs it. In Computer Space, Pong and other early arcade games, the CRT was mounted 90 degrees from the ground, facing directly outward. Arcade game manufacturers began incorporating design principles from older electro-mechanical games by using CRTs mounted at a 45-degree angle, facing upward and away from the player but towards a one-way mirror that reflected the display to the player. Additional transparent overlays could be added between the mirror and the player's view to include additional images and colorize the black-and-white CRT output, as is the case in Boot Hill. Other games, like Warrior, used a one-sided mirror and included an illuminated background behind the mirror, so that the on-screen characters would appear to the players as if they were on that background. With the advent of color CRT displays, the need for the mirror was eliminated. The CRT was subsequently positioned at an angle permitting a typical adult player to look directly at the screen. Controls are most commonly a joystick for as many players as the game allows, plus action buttons and "player" buttons which serve the same purpose as the start button on console gamepads. Trackballs are sometimes used instead of joysticks, especially in games from the early 1980s. Spinners (knobs for turning, also called "paddle controls") are used to control game elements that move strictly horizontally or vertically, such as the paddles in Arkanoid and Pong. Games such as Robotron: 2084, Smash TV and Battlezone use double joysticks instead of action buttons. Some versions of the original Street Fighter had pressure-sensitive rubber pads instead of buttons. If an upright is housing a driving game, it may have a steering wheel and throttle pedal instead of a joystick and buttons. If the upright is housing a shooting game, it may have light guns attached to the front of the machine, via durable cables. Some arcade machines had the monitor placed at the bottom of the cabinet with a mirror mounted at around 45 degrees above the screen facing the player. This was done to save space, as a large CRT monitor would otherwise poke out the back of the cabinet. To correct for the mirrored image, some games had an option to flip the video output using a dip switch setting. Other genres of games such as Guitar Freaks feature controllers resembling musical instruments. Upright cabinet shape designs vary from the simplest symmetric perpendicular boxes as with Star Trek to complicated asymmetric forms. Games are typically for one or two players; however, games such as Gauntlet feature as many as four sets of controls. Sit-down or table cabinets Cocktail cabinets Cocktail cabinets are shaped like low, rectangular tables, with the controls usually set at either of the broad ends, or, though not as common, at the narrow ends, and the monitor inside the table, the screen facing upward. 
Two-player games housed in cocktails were usually alternating, each player taking turns. The monitor reverses its orientation (under game software control) for each player, so the game display is properly oriented for each player. This requires special programming of the cocktail versions of the game (usually set by dip switches). The monitor's orientation is usually in player two's favor only in two-player games when it is player two's turn, and in player one's favor all other times. Simultaneous four-player games built as cocktails include Warlords, among others. In Japan, many games manufactured by Taito from the 1970s to the early 1980s have cocktail versions prefixed by "T.T" in their titles (e.g. T.T Space Invaders). Cocktail cabinet versions were usually released alongside the upright version of the same game. They were relatively common in the 1980s, especially during the Golden Age of Arcade Games, but have since lost popularity. Their main advantage over upright cabinets was their smaller size, making them seem less obtrusive, although they require more floor space (more so by having players seated at each end). The top of the table was covered with a piece of tempered glass, making it convenient to set drinks on (hence the name), and they were often seen in bars and pubs. Candy cabinets Japanese-style sit-down cabinets are typically molded from plastic; owing to the resemblance of the plastic to hard candy, they are often known as "candy cabinets", both by arcade enthusiasts and by people in the industry. They are also generally easier to clean and move than upright cabinets, but usually just as heavy, as most have 29" screens as opposed to 20"–25". They are positioned so that the player can sit down on a chair or stool and play for extended periods. SNK sold many Neo-Geo MVS cabinets in this configuration, and most arcade games made in Japan that use only a joystick and buttons come in a sit-down cabinet variety. In Japanese arcades, this type of cabinet is generally more prevalent than the upright kind, and they are usually lined up in uniform-looking rows. A variant, often referred to as "versus-style" cabinets, is designed to look like two cabinets facing each other, with two monitors and separate controls allowing two players to fight each other without having to share the same monitor and control area. Some newer cabinets can emulate these "versus-style" cabinets through networking. Deluxe cabinets Deluxe cabinets (also known as DX cabinets in Japan) are most commonly used for games involving gambling, long stints of gaming (such as fighting games), or vehicles (such as flight simulators and racing games). These cabinets typically have equipment resembling the controls of a vehicle (though some of them are merely large cabinets with special features such as a large screen or chairs). Driving games may have a bucket seat, foot pedals, a stick shift, and even an ignition, while flight simulators may have a flight yoke or joystick, and motorcycle games may have handlebars and a seat shaped like a full-size bike. Often, these cabinets are arranged side-by-side, to allow players to compete together. Sega is one of the biggest manufacturers of these kinds of cabinets, while Namco released Ridge Racer Full Scale, in which the player sits in a full-size Mazda MX-5 road car. Cockpit or environmental cabinets A cockpit or environmental cabinet is a type of deluxe cabinet where the player sits inside the cabinet itself. It also typically has an enclosure. 
Examples of this can be seen on the Killer List of Videogames, including shooter games such as Star Fire, Missile Command, SubRoc-3D, Star Wars, Astron Belt, Sinistar and Discs of Tron as well as racing games such as Monaco GP, Turbo and Pole Position. A number of cockpit or environmental cabinets incorporate hydraulic motion simulation, as covered in the section below. Motion simulator cabinets A motion simulator cabinet is a type of deluxe cabinet that is very elaborate, including hydraulics which move the player according to the action on screen. In Japan, they are known as "taikan" games, with "taikan" meaning "body sensation" in Japanese. Sega is particularly known for these kinds of cabinets, with various types of sit-down and cockpit motion cabinets that Sega has been manufacturing since the 1980s. Namco was another major manufacturer of motion simulator cabinets. Motorbike racing games since Sega's Hang-On have had the player sit on and move a motorbike replica to control the in-game actions (like a motion controller). Driving games since Sega's Out Run have had hydraulic motion simulator sit-down cabinets, while hydraulic motion simulator cockpit cabinets have been used for space combat games such as Sega's Space Tactics (1981) and Galaxy Force, rail shooters such as Space Harrier and Thunder Blade, and combat flight simulators such as After Burner and G-LOC: Air Battle. One of the most sophisticated motion simulator cabinets is Sega's R360, which simulates the full 360-degree rotation of an aircraft. Mini or cabaret cabinets Mini or cabaret cabinets are similar forms of arcade cabinet but are intended for different markets. Modern mini cabinets are sold directly to consumers and are not intended for commercial operation. They are styled just like a standard upright cabinet, often with full art and marquees, but are scaled down to more easily fit in a home environment or be used by children. The older form of mini or cabaret cabinets was marketed for commercial use and is no longer made. They were often thinner as well as shorter, lacked side art, and had smaller marquees and monitors. This reduced their cost, reduced their weight, made them better suited to locations with less space, and also made them less conspicuous in darker environments. In place of side art they were often clad in faux wood grain vinyl. Countertop cabinets Countertop or bartop cabinets are usually only large enough to house their monitors and control panels. They are often used for trivia and gambling-type games and are usually found installed on bars or tables in pubs and restaurants. These cabinets often have touchscreen controls instead of traditional push-button controls. They are also fairly popular for home use, as they can be placed upon a table or countertop. Large-scale satellite machines Usually found in Japan, these machines have multiple screens interconnected to one system, sometimes with one big screen in the middle. These also often feature the dispensation of different types of cards, either a smartcard in order to save stats and progress or trading cards used in the game. Conversion kit An arcade conversion kit, also known as a software kit, is special equipment that can be installed into an arcade machine that changes the current game it plays into another one. 
For example, a conversion kit can be used to reconfigure an arcade machine designed to play one game so that it would play its sequel or update instead, such as from Street Fighter II: Champion Edition to Street Fighter II Turbo. Restoration Since arcade games are becoming increasingly popular as collectibles, an entire niche industry has sprung up focused on arcade cabinet restoration. There are many websites (both commercial and hobbyist) and newsgroups devoted to arcade cabinet restoration. They are full of tips and advice on restoring games to mint condition. Artwork Game cabinets were often used to host a variety of games over their lives. After the cabinet's initial game was removed and replaced with another, the cabinet's side art was often painted over (usually black) so that the cabinet would not misrepresent the game contained within. The side art was also painted over to hide damaged or faded artwork. Of course, hobbyists prefer cabinets with original artwork in the best possible condition. Since machines with good quality art are hard to find, one of the first tasks is stripping any old artwork or paint from the cabinet. This is done with conventional chemical paint strippers or by sanding (preferences vary). Artwork that has been painted over normally cannot be preserved; it comes off along with the covering paint. New paint can be applied in any manner preferred (roller, brush, spray). Paint used is often just conventional paint with a finish matching the cabinet's original paint. Many games had artwork that was silkscreened directly on the cabinets. Others used large decals for the side art. Some manufacturers produce replication artwork for popular classic games—each varying in quality. This side art can be applied over the new paint after it has dried. These appliques can be very large and must be carefully applied to prevent bubbles or wrinkles from developing. Spraying the surface with a slightly soapy water solution allows the artwork to be quickly repositioned if wrinkles or bubbles develop, as in window tinting applications. Control panels, bezels, marquees Acquiring these pieces is harder than installing them. Many hobbyists trade these items via newsgroups or sites such as eBay (the same is true for side art). As with side art, some replication art shops also produce replication artwork for these pieces that is indistinguishable from the original. Some even surpass the originals in quality. Once these pieces are acquired, they usually snap right into place. If the controls are worn and need replacing, replacements for popular games can be easily obtained. Rarer game controls are harder to come by, but some shops stock replacement controls for classic arcade games. Some shops manufacture controls that are more robust than originals and fit a variety of machines. Installing them takes some experimentation for novices, but they are usually not too difficult to fit. Monitors While both use the same basic type of tube, raster monitors are easier to service than vector monitors, as the support circuitry is very similar to that which is used in CRT televisions and computer monitors, and is typically easy to adjust for color and brightness. On the other hand, vector monitors can be challenging or very costly to service, and some can no longer be repaired due to certain parts having been discontinued years ago. Even finding a drop-in replacement for a vector monitor is a challenge today, as few were produced after their heyday in the early 1980s. 
CRT replacement is possible, but the process of transferring the deflection yoke and other parts from one tube neck to the other also means a long process of positioning and adjusting the parts on the CRT for proper performance, a job that may prove too challenging for the typical amateur arcade collector. On the other hand, it may be possible to retrofit other monitor technologies to emulate vector graphics. Some electronic components are stressed by the hot, cramped conditions inside a cabinet. Electrolytic capacitors dry out over time, and if a classic arcade cabinet is still using its original components, it may be near the end of its service life. A common step in refurbishing vintage electronics (of all types) is "recapping": replacing certain capacitors (and other parts) to restore, or ensure the continued safe operation of the monitor and power supplies. Because of the capacity and voltage ratings of these parts, it can be dangerous if not done properly, and should only be attempted by experienced hobbyists or professionals. If a monitor is broken, it may be easier to just source a drop-in replacement through coin-op machine distributors or parts suppliers. Wiring If a cabinet needs rewiring, some wiring kits are available over the Internet. An experienced hobbyist can usually solve most wiring problems through trial and error. Many cabinets are converted to be used to host a game other than the original. In these cases, if both games conform to the JAMMA standard, the process is simple. Other conversions can be more difficult, but some manufacturers such as Nintendo have produced kits to ease the conversion process (Nintendo manufactured kits to convert a cabinet from Classic wiring to VS. wiring). See also Arcade controller Arcade game Slot machine Video arcade Arcade system board JAMMA MAME References External links Arcade hardware Commercial machines Video game terminology
Arcade cabinet
Physics,Technology
3,849
62,396,576
https://en.wikipedia.org/wiki/Hypergraph%20removal%20lemma
In graph theory, the hypergraph removal lemma states that when a hypergraph contains few copies of a given sub-hypergraph, then all of the copies can be eliminated by removing a small number of hyperedges. It is a generalization of the graph removal lemma. The special case in which the graph is a tetrahedron is known as the tetrahedron removal lemma. It was first proved by Nagle, Rödl, Schacht and Skokan and, independently, by Gowers. The hypergraph removal lemma can be used to prove results such as Szemerédi's theorem and the multi-dimensional Szemerédi theorem. Statement Let $H$ be an $r$-uniform hypergraph (every edge connects exactly $r$ vertices) with $m$ vertices. The hypergraph removal lemma states that for any $\epsilon > 0$ there exists $\delta > 0$ such that for any $r$-uniform, $n$-vertex hypergraph $G$ with fewer than $\delta n^m$ subhypergraphs isomorphic to $H$, it is possible to remove all copies of $H$ by removing at most $\epsilon n^r$ edges. An equivalent formulation is that, for any hypergraph $G$ with $o(n^m)$ copies of $H$, we can eliminate all copies of $H$ from $G$ by removing $o(n^r)$ hyperedges. The graph removal lemma is the special case $r = 2$. Proof idea of the hypergraph removal lemma The high-level idea of the proof is similar to that of the graph removal lemma. We prove a hypergraph version of Szemerédi's regularity lemma (partition hypergraphs into pseudorandom blocks) and a counting lemma (estimate the number of copies of a hypergraph in an appropriate pseudorandom block). The key difficulty in the proof is to define the correct notion of hypergraph regularity. There were multiple attempts to define "partition" and "pseudorandom (regular) blocks" in a hypergraph, but none of them were able to give a strong counting lemma. The first correct definition of Szemerédi's regularity lemma for general hypergraphs was given by Rödl et al. In Szemerédi's regularity lemma, the partitions are performed on vertices (1-hyperedges) to regulate edges (2-hyperedges). However, for $r > 2$, if we simply regulate $r$-hyperedges using only 1-hyperedges, we lose information about all the $j$-hyperedges in the middle with $1 < j < r$, and fail to find a counting lemma. The correct version has to partition $(r-1)$-hyperedges in order to regulate $r$-hyperedges. To gain more control of the $(r-1)$-hyperedges, we can go a level deeper and partition $(r-2)$-hyperedges to regulate them, etc. In the end, we reach a complex structure of regulating hyperedges. Proof idea for 3-uniform hypergraphs For example, we demonstrate an informal 3-hypergraph version of Szemerédi's regularity lemma, first given by Frankl and Rödl. Consider a partition of the edges of the complete graph $K_n$ into graphs $G_1, \ldots, G_l$ such that for most triples $(i, j, k)$ there are a lot of triangles on top of $(G_i, G_j, G_k)$. We say that $(G_i, G_j, G_k)$ is "pseudorandom" in the sense that for all subgraphs $A_i \subseteq G_i$, $A_j \subseteq G_j$, $A_k \subseteq G_k$ with not too few triangles on top of them we have $$|d(A_i, A_j, A_k) - d(G_i, G_j, G_k)| \leq \epsilon,$$ where $d(A_i, A_j, A_k)$ denotes the proportion of the 3-uniform hyperedges of the hypergraph being regularized among all triangles on top of $(A_i, A_j, A_k)$. We then subsequently define a regular partition as a partition in which the triples of parts that are not regular constitute at most an $\epsilon$ fraction of all triples of parts in the partition. In addition to this, we need to further regularize the graphs $G_1, \ldots, G_l$ via a partition of the vertex set. As a result, we have the total data of hypergraph regularity as follows: (1) a partition of the edge set of $K_n$ into graphs such that the 3-uniform hypergraph sits pseudorandomly on top; (2) a partition of the vertex set such that the graphs in (1) are extremely pseudorandom (in a fashion resembling Szemerédi's regularity lemma). After proving the hypergraph regularity lemma, we can prove a hypergraph counting lemma. The rest of the proof proceeds similarly to that of the graph removal lemma. 
Proof of Szemerédi's theorem Let $r_k(N)$ be the size of the largest subset of $\{1, \ldots, N\}$ that does not contain a length-$k$ arithmetic progression. Szemerédi's theorem states that $r_k(N) = o(N)$ for any constant $k$. The high-level idea of the proof is that we construct a hypergraph from a subset without any length-$k$ arithmetic progression, then use the hypergraph removal lemma to show that this hypergraph cannot have too many hyperedges, which in turn shows that the original subset cannot be too big. Let $A \subseteq \{1, \ldots, N\}$ be a subset that does not contain any length-$k$ arithmetic progression. Let $M$ be a large enough integer (large enough that a progression in $\mathbb{Z}/M\mathbb{Z}$ with entries in $\{1, \ldots, N\}$ cannot wrap around). We can think of $A$ as a subset of $\mathbb{Z}/M\mathbb{Z}$. Clearly, if $A$ doesn't have a length-$k$ arithmetic progression in $\{1, \ldots, N\}$, it also doesn't have a length-$k$ arithmetic progression in $\mathbb{Z}/M\mathbb{Z}$. We will construct a $k$-partite $(k-1)$-uniform hypergraph $G$ from $A$ with parts $V_1, \ldots, V_k$, all of which are $M$-element vertex sets indexed by $\mathbb{Z}/M\mathbb{Z}$. For each $i \in \{1, \ldots, k\}$ and each choice of vertices $v_j \in V_j$ for $j \neq i$, we add a hyperedge among the vertices $(v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_k)$ if and only if $\sum_{j \neq i} (j - i) v_j \in A$. Let $K$ be the complete $k$-partite $(k-1)$-uniform hypergraph on $V_1, \ldots, V_k$. If $G$ contains an isomorphic copy of $K$ with vertices $v_1, \ldots, v_k$, then $a_i := \sum_{j \neq i} (j - i) v_j \in A$ for any $i$. However, note that writing $s = \sum_j v_j$ and $t = \sum_j j v_j$ gives $a_i = t - is$, so $(a_1, \ldots, a_k)$ is a length-$k$ arithmetic progression with common difference $-s$. Since $A$ has no length-$k$ arithmetic progression, it must be the case that this progression is trivial, so $s = \sum_j v_j = 0$. Thus, for each hyperedge $(v_1, \ldots, v_{i-1}, v_{i+1}, \ldots, v_k)$, we can find the unique copy of $K$ that this edge lies in by taking $v_i = -\sum_{j \neq i} v_j$. The number of copies of $K$ in $G$ equals $M^{k-2}|A|$. Therefore, by the hypergraph removal lemma, we can remove $o(M^{k-1})$ edges to eliminate all copies of $K$ in $G$. Since every hyperedge of $G$ is in a unique copy of $K$, the copies are edge-disjoint, so to eliminate all copies of $K$ in $G$, we need to remove at least $M^{k-2}|A|$ edges. Thus, $M^{k-2}|A| = o(M^{k-1})$. The number of hyperedges in $G$ is $k M^{k-2}|A|$, and we conclude that $|A| = o(M) = o(N)$, i.e. $r_k(N) = o(N)$. This method usually does not give a good quantitative bound, since the hidden constants in the hypergraph removal lemma involve the inverse Ackermann function. For a better quantitative bound, Leng, Sah, and Sawhney proved that $r_k(N) \leq N e^{-(\log \log N)^{c_k}}$ for some constant $c_k$ depending on $k$. It is the best bound for $k \geq 5$ so far. Applications The hypergraph removal lemma is used to prove the multidimensional Szemerédi theorem by J. Solymosi. The statement is that for any finite subset $S$ of $\mathbb{Z}^d$, any $\delta > 0$ and any large enough $N$, any subset of $\{1, \ldots, N\}^d$ of size at least $\delta N^d$ contains a subset of the form $a \cdot S + b$, that is, a dilated and translated copy of $S$. The corners theorem is the special case where $d = 2$ and $S = \{(0,0), (1,0), (0,1)\}$. It is also used to prove the polynomial Szemerédi theorem, the finite field Szemerédi theorem and the finite abelian group Szemerédi theorem. See also Graph removal lemma Szemerédi's theorem Problems involving arithmetic progressions References Hypergraphs Graph theory
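To make the construction above concrete, here is a small illustrative Python sketch of the $k = 3$ case (the triangle-removal proof of Roth's theorem). It builds the 3-partite graph from a set $A \subseteq \mathbb{Z}/M\mathbb{Z}$ using the edge rule $\sum_{j \neq i} (j-i)v_j \in A$ and verifies that every triangle yields a 3-term arithmetic progression in $A$, necessarily degenerate ($v_1 + v_2 + v_3 = 0$) when $A$ is progression-free. The particular $M$ and $A$ are arbitrary choices for the demonstration, not values from any source.

from itertools import product

M = 101                   # modulus; small enough to check exhaustively
A = {1, 2, 4, 8, 16}      # a 3-AP-free demo subset of Z/M

# k = 3: parts V1, V2, V3 = Z/M; the (k-1)-uniform hypergraph is a graph.
def e23(v2, v3): return (v2 + 2 * v3) % M in A       # i = 1: (2-1)v2 + (3-1)v3
def e13(v1, v3): return (-v1 + v3) % M in A          # i = 2: (1-2)v1 + (3-2)v3
def e12(v1, v2): return (-2 * v1 - v2) % M in A      # i = 3: (1-3)v1 + (2-3)v2

triangles = 0
for v1, v2, v3 in product(range(M), repeat=3):
    if e12(v1, v2) and e13(v1, v3) and e23(v2, v3):
        triangles += 1
        t, s = v1 + 2 * v2 + 3 * v3, v1 + v2 + v3
        a = [(t - i * s) % M for i in (1, 2, 3)]     # a_i = t - i*s, an AP
        assert all(x in A for x in a)                # its terms lie in A
        assert s % M == 0                            # A AP-free => degenerate

# Each a in A gives exactly M edge-disjoint triangles (v1 free, then
# v2 = -2*v1 - a and v3 = v1 + a are forced), matching the M^(k-2)|A| count.
assert triangles == M * len(A)
print(triangles, "edge-disjoint triangles found, all from degenerate APs")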
Hypergraph removal lemma
Mathematics
1,338
38,763,903
https://en.wikipedia.org/wiki/Order-4%20apeirogonal%20tiling
In geometry, the order-4 apeirogonal tiling is a regular tiling of the hyperbolic plane. It has Schläfli symbol of {∞,4}. Symmetry This tiling represents the mirror lines of *2∞ symmetry. Its dual tiling represents the fundamental domains of orbifold notation *∞∞∞∞ symmetry, a square domain with four ideal vertices. Uniform colorings Like the Euclidean square tiling, there are 9 uniform colorings for this tiling, with 3 uniform colorings generated by triangle reflective domains. A fourth can be constructed from an infinite square symmetry (*∞∞∞∞) with 4 colors around a vertex. The checkerboard coloring, r{∞,∞}, defines the fundamental domains of [(∞,4,4)], (*∞44) symmetry, usually shown as black and white domains of reflective orientations. Related polyhedra and tilings This tiling is also topologically related, as part of a sequence of regular polyhedra and tilings with four faces per vertex, starting with the octahedron, with Schläfli symbol {n,4} and corresponding Coxeter diagram, with n progressing to infinity. See also Tilings of regular polygons List of uniform planar tilings List of regular polytopes References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Apeirogonal tilings Hyperbolic tilings Isogonal tilings Isohedral tilings Order-4 tilings Regular tilings
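As a quick worked check (a standard fact about regular tilings, added here for illustration rather than taken from the article's sources): a Schläfli symbol $\{p,q\}$ describes a hyperbolic tiling exactly when $\frac{1}{p} + \frac{1}{q} < \frac{1}{2}$, equivalently $(p-2)(q-2) > 4$. For $\{\infty,4\}$ this gives $\frac{1}{\infty} + \frac{1}{4} = \frac{1}{4} < \frac{1}{2}$, confirming that the order-4 apeirogonal tiling lives in the hyperbolic plane; the Euclidean square tiling $\{4,4\}$, with $\frac{1}{4} + \frac{1}{4} = \frac{1}{2}$, sits exactly at the boundary case.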
Order-4 apeirogonal tiling
Physics
373
25,170,694
https://en.wikipedia.org/wiki/Tymovirales
Tymovirales is an order of viruses with five families. The group consists of viruses which have positive-sense, single-stranded RNA genomes. Their genetic material is protected by a special coat protein. Description Tymovirales are mainly plant pathogens; the order was first described in 2004. Its members are characterised by similarities in their replication-associated polyproteins, which account for the majority of their genomic coding capacity. Phylogenetically, they are considered to form a group, referred to as the flexiviruses, with filamentous virions. References Bibliography External links ICTV Virus Taxonomy 2009 UniProt Taxonomy Virus orders Riboviria
Tymovirales
Biology
133
75,136,869
https://en.wikipedia.org/wiki/Takedaite
Takedaite is a borate mineral that was found in a mine in Fuka, Okayama Prefecture, Japan, during a mineralogical survey in 1994. During the survey, Kusachi and Henmi reported the occurrence of an unidentified anhydrous borate mineral closely associated with nifontovite, olshanskyite, and calcite. By 1994, two other minerals in the borate group M3B2O6 had been identified in nature: Mg3B2O6, known as kotoite, and Mn3B2O6, known as jimboite. Takedaite has the ideal chemical formula Ca3B2O6. The mineral was approved by the Commission on New Minerals and Mineral Names, IMA, to be named takedaite after Hiroshi Takeda, a professor at the Mineralogical Institute, University of Tokyo, Japan. Occurrence Takedaite is found in association with gehlenite, spurrite, bicchulite, rankinite, kilchoanite, oyelite, and fukalite. It occurs in a vein consisting of borate minerals that developed along the boundary between crystalline limestone and the skarns. The vein it was discovered in was approximately 10 cm in thickness, and the mineral is closely associated with frolovite and calcite. At the circumference of the expanded area, hydrous borates such as nifontovite, olshanskyite, sibirskite, and pentahydroborite occurred in thicknesses of 20 cm to 50 cm. Physical properties Takedaite is a white or pale gray mineral with a vitreous luster, and it is colorless in thin section. It exhibits a hardness of 4.5 on the Mohs hardness scale. The density measured by heavy liquids was 3.10(2) g·cm−3, the calculated density being 3.11 g·cm−3. Optical properties Takedaite is optically uniaxial negative. The refractive indices are ω = 1.726 and ε = 1.630, and the Vickers microhardness was 478 (429–503) kg·mm−2 (25 g load). The infrared spectrum of takedaite was measured by the KBr method for the region 4000 to 250 cm−1. The absorption bands at 907, 795, 710, and 618 cm−1 were in close agreement with those of the synthetic 3CaO·B2O3 reported by Wier and Schroeder (1964), while the absorption bands at 1275 and 1230 cm−1 of takedaite were sharper. Chemical properties Takedaite is a borate containing calcium, boron, and oxygen. Chemical analysis gave CaO 71.13% and B2O3 28.41%; the H2O content, determined by ignition loss at 900 °C, was 0.14%, totaling 99.68%. The empirical formula calculated on the basis of O = 6 is therefore Ca3.053B1.965O6, or more ideally Ca3B2O6. Takedaite is also easily soluble in dilute hydrochloric acid. X-ray crystallography The X-ray powder data for takedaite were obtained with an X-ray diffractometer using Ni-filtered Cu-Kα radiation. Single crystals were also studied using the precession and Weissenberg methods. Takedaite is in the trigonal crystal system. The space group is either R-3c or R3c. The unit cell dimensions, refined by least squares from the X-ray powder diffraction data of takedaite, were a = 8.638(1) Å, c = 11.850(2) Å. See also List of minerals References Natural materials Borate minerals Calcium minerals Trigonal minerals Minerals described in 1994
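The calculated density quoted above can be reproduced from the unit-cell data as a quick cross-check. The short Python sketch below assumes Z = 6 formula units of Ca3B2O6 per hexagonal-setting cell of the rhombohedral lattice (a value not stated in the text) together with standard atomic weights; it is illustrative, not a substitute for the published refinement.

import math

a, c = 8.638, 11.850                       # unit cell, Å (from the powder data)
Z = 6                                      # assumed formula units per cell
M = 3 * 40.078 + 2 * 10.811 + 6 * 15.999   # molar mass of Ca3B2O6, g/mol
N_A = 6.02214e23

V = (math.sqrt(3) / 2) * a**2 * c          # hexagonal cell volume, Å^3
rho = Z * M / (N_A * V * 1e-24)            # 1 Å^3 = 1e-24 cm^3
print(f"V = {V:.1f} A^3, calculated density = {rho:.2f} g/cm^3")
# prints about 3.09 g/cm^3, consistent with the measured 3.10(2) g·cm−3 and
# close to the published calculated value of 3.11 g·cm−3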
Takedaite
Physics
796
73,480,184
https://en.wikipedia.org/wiki/Rhenium%20tetrafluoride
Rhenium tetrafluoride is a binary inorganic compound of rhenium and fluorine with the chemical formula ReF4. Synthesis Rhenium tetrafluoride can be made by the reduction of rhenium hexafluoride with hydrogen, rhenium, or sulfur dioxide: ReF6 + H2 → ReF4 + 2 HF; 2 ReF6 + Re → 3 ReF4; ReF6 + SO2 → ReF4 + SO2F2. Physical properties Rhenium tetrafluoride forms blue crystals of tetragonal structure, with cell parameters a = 1.012 nm and c = 1.595 nm. Rhenium tetrafluoride reacts with water, and corrodes glass when heated. References Fluorides Rhenium(IV) compounds
Rhenium tetrafluoride
Chemistry
131
4,890,286
https://en.wikipedia.org/wiki/NGC%203384
NGC 3384 is a barred lenticular galaxy in the constellation Leo. The galaxy was discovered by William Herschel in 1784 and is included in the Herschel 400 Catalogue. The high age of the stars in the central region of NGC 3384 was confirmed by analysis of their color: more than 80% were found to be Population II stars, which are over a billion years old. The galaxy hosts a supermassive black hole at its core. Galaxy group information NGC 3384 is a member of the M96 Group, a group of galaxies in the constellation Leo that is sometimes referred to as the Leo I Group. This group also includes the Messier objects M95, M96, and M105. All of these objects are conspicuously close to each other in the sky. References External links NGC 3384, position and other data NGC 3384 SIMBAD entry Pdf document: Stellar population analysis applied to NGC 3384 Wikisky.org: SDSS image, NGC 3384 Lenticular galaxies Barred lenticular galaxies M96 Group Leo (constellation) 3384 05911 32292 17840311
NGC 3384
Astronomy
227
76,018,398
https://en.wikipedia.org/wiki/Cyberfeminism%20Index
The Cyberfeminism Index is an index of subjects related to Cyberfeminism by Mindy Seu. In 2019, it began as a Google Sheet that was then transposed onto a website. The index was published as a book in 2023. References External links Official website Feminism and the arts Internet culture Transhumanism
Cyberfeminism Index
Technology,Engineering,Biology
69
71,458,584
https://en.wikipedia.org/wiki/Hogan%20Yu
Hua-Zhong "Hogan" Yu (于化忠) is presently a professor of materials and analytical chemistry at Simon Fraser University in metro Vancouver, Canada, where he leads a research laboratory working on Surfaces and Materials for Sensing. He is also an associate editor for Analyst, the journal for Analytical and Bioanalytical Sciences from the Royal Society of Chemistry in UK, and an adjunct professor in the College of Biomedical Engineering, Taiyuan University of Technology in Shanxi, China. Education Born and raised in countryside China, Yu obtained his B.Sc. (Chemistry) from Shandong University in 1991 at an age of 20. He then received his joint M.Sc. from Shandong University and Dalian Institute of Chemical Physics (Chemical Physics) in 1994, and his Ph.D. from Peking University (Materials Chemistry, with Prof. Zhong-Fan Liu) in 1997. He did his postdoctoral research with Nobel Laureate Ahmed Zewail and electrochemist Fred Anson at the California Institute of Technology from 1997 to 1999. Career After short stays at NRC and Acadia University, Yu was appointed to the Department of Chemistry at Simon Fraser University in 2001 as an assistant professor and promoted to a tenured full professor in 2009. He is now a principal investigator of the CFI-funded Centre for Nanomaterials and Microstructures (4D LABS) and an associate member of the Department of Molecular Biology and Biochemistry, both at SFU. Yu has been perusing his cutting-edge research on solving fundamental problems that have direct impact on applied analytical science and technology. His innovation of adapting mobile electronics (office scanners, disc players, and now smartphones) for portable molecular analysis and his contribution to the de novo construction of ultrasensitive electronic biosensors for disease markers, lead to the possibility of performing many quantitative chemical analysis on-site and biomedical diagnostic tests at point-of-care settings. He has published more than 160 peer-reviewed articles and holds/filed 14 national/international patents. Awards and honors 1997 Alexander von Humboldt Fellowship 1999 National Laboratory Visiting Fellow (NSERC) 2004 Fred Beamish Award (CSC) 2008 JSPS Invitation Fellow 2011 W. Lash Miller Award (ECS Canadian Section) 2012 Tajima Prize (ISE) 2015 W.A.E. McBryde Medal (CSC) 2021 Fellow, Royal Society of Chemistry References Year of birth missing (living people) Living people Fellows of the Royal Society of Chemistry Academic staff of Simon Fraser University Peking University alumni Shandong University alumni Educators from Shandong Analytical chemists
Hogan Yu
Chemistry
520
54,248,882
https://en.wikipedia.org/wiki/NanoCLAMP
In the medical field of immunology, nanoCLAMP (CLostridal Antibody Mimetic Proteins) affinity reagents are recombinant 15 kDa antibody mimetic proteins selected for tight, selective and gently reversible binding to target molecules. The nanoCLAMP scaffold is based on an IgG-like, thermostable carbohydrate-binding module family 32 (CBM32) from a Clostridium perfringens hyaluronidase (Mu toxin). The shape of nanoCLAMPs approximates a cylinder of approximately 4 nm in length and 2.5 nm in diameter, roughly the same size as a nanobody. nanoCLAMPs to specific targets are generated by varying the amino acid sequences and sometimes the length of three solvent-exposed, adjacent loops that connect the beta strands making up the beta-sandwich fold, conferring binding affinity and specificity for the target. Properties nanoCLAMPs are the first antibody mimetics described to be polyol-responsive, meaning they release their targets upon exposure to a non-chaotropic salt and a polyol, such as propylene glycol. This property has been shown to be useful for purifying functional proteins and protein complexes by affinity purification. nanoCLAMPs are easily produced in the cytoplasm of E. coli, with typical yields in the range of 50 to 300 mg/L culture. Because nanoCLAMPs are devoid of cysteines, an engineered C-terminal cysteine can be used for site-directed conjugation of entities like fluorophores or resins using thiol chemistry. Development and applications nanoCLAMPs were developed in the laboratories of Nectagen. nanoCLAMP phage display libraries were constructed that contained variations on 16 surface amino acids in three loops, with functional diversities of approximately 10^9 variants. These libraries have been screened for binders to target proteins and peptides, typically yielding between 1 and 30 unique binders to the target. Purified nanoCLAMPs containing a single C-terminal cysteine can be easily conjugated to haloacetyl-activated agarose resins under native or denaturing conditions, and the resulting thioether bond renders the resins leach-proof. Targets can be purified to apparent homogeneity in a single step. The polyol-responsive nature of the resins allows the targets to be eluted with 0.75 M ammonium sulfate and 40% propylene glycol at pH 7.9, conditions which have been shown to preserve native structure and protein complexes. nanoCLAMPs have been produced that target green fluorescent protein (GFP), mCherry, SUMO (SMT3), NusA, avidin, NeutrAvidin, maltose-binding protein (MBP), thioredoxin 1, beta-galactosidase, SlyD, and others. Typical binding capacities of resins range from 1 to 4 mg/ml resin. Because nanoCLAMPs readily refold, nanoCLAMP resins can be regenerated multiple times using guanidinium chloride to clean the resin. References External links Nectagen, Inc., the developer Antibody mimetics
NanoCLAMP
Chemistry
674
10,423,820
https://en.wikipedia.org/wiki/Puerto%20Rico%20statistical%20areas
Puerto Rico currently has 13 statistical areas that have been delineated by the United States Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated three combined statistical areas, six metropolitan statistical areas, and four micropolitan statistical areas in Puerto Rico. As of 2023, the largest of these is the San Juan-Bayamón, PR CSA, comprising the area around the municipality of San Juan, the capital and largest city of Puerto Rico. All statistical areas Combined and primary statistical areas Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 13 statistical areas of Puerto Rico, three are PSAs comprising its three combined statistical areas. Metropolitan statistical areas This sortable table lists the six metropolitan statistical areas (MSAs) of Puerto Rico including: The MSA rank by population as of July 1, 2023, as estimated by the United States Census Bureau The MSA name as designated by the United States Office of Management and Budget The MSA population as of July 1, 2023, as estimated by the United States Census Bureau The MSA population as of April 1, 2020, as enumerated by the 2020 United States census The percent MSA population change from April 1, 2020, to July 1, 2023 The combined statistical area (CSA) if it is designated and the MSA is a component Micropolitan statistical areas The following sortable table lists the four micropolitan statistical areas (μSAs) of Puerto Rico with the following information: The μSA rank by population as of July 1, 2023, as estimated by the United States Census Bureau The μSA name as designated by the United States Office of Management and Budget The μSA population as of July 1, 2023, as estimated by the United States Census Bureau The μSA population as of April 1, 2020, as enumerated by the 2020 United States census The percent μSA population change from April 1, 2020, to July 1, 2023 The combined statistical area (CSA) if the μSA is a component See also Geography of Puerto Rico Demographics of Puerto Rico Notes References External links Office of Management and Budget United States Census Bureau Statistical areas of Puerto Rico United States statistical areas
Puerto Rico statistical areas
Mathematics
472
27,006,114
https://en.wikipedia.org/wiki/Symposium%20on%20Operating%20Systems%20Principles
The Symposium on Operating Systems Principles (SOSP), organized by the Association for Computing Machinery (ACM), is one of the most prestigious single-track academic conferences on operating systems. Before 2023, SOSP was held every other year, alternating with the conference on Operating Systems Design and Implementation (OSDI); starting in 2024, SOSP has been held every year. The first SOSP was held in 1967. It is sponsored by the ACM's Special Interest Group on Operating Systems (SIGOPS). History The inaugural conference was held in Gatlinburg, Tennessee on 1–4 October 1967 at the Mountain View Hotel. There were fifteen papers in total, of which three presentations were in the Computer Networks and Communications session. Larry Roberts presented his plan for the ARPANET, a computer network for resource sharing, which at that point was based on Wesley Clark's proposal for a message switching network. Jack Dennis from MIT discussed the merits of a more general data communications network. Roger Scantlebury, a member of Donald Davies' team from the UK National Physical Laboratory, presented their research on packet switching in a high-speed computer network, and referenced the work of Paul Baran. At this seminal meeting, Scantlebury proposed packet switching for use in the ARPANET and persuaded Roberts that its economics were favorable compared to message switching. The ARPA team enthusiastically received the idea and Roberts incorporated it into the ARPANET design. In total, 29 conferences have been held, seven of which were outside the USA. The first conference held outside the USA was in Saint-Malo, France in 1997. Other countries to have hosted the conference are Canada, the UK, Portugal, China and Germany. List of conferences From 1967 to 2023, the conferences were held every two years, with the first SOSP conference taking place in Gatlinburg, Tennessee. Beginning in 2024, the conference has been held every year. See also List of computer science conferences References External links http://sosp.org/ https://dl.acm.org/conference/sosp Computer science conferences Association for Computing Machinery conferences
Symposium on Operating Systems Principles
Technology
439
610,419
https://en.wikipedia.org/wiki/Waybill
A waybill is a document issued by a carrier giving details and instructions relating to the shipment of a consignment of cargo. Typically it will show the names of the consignor and consignee, the point of origin of the consignment, its destination, and route. Most freight forwarders and trucking companies use an in-house waybill called a house bill. These typically contain "conditions of contract of carriage" terms on the back of the form that cover limits to liability and other terms and conditions. A waybill is similar to a courier's receipt, which contains the details of the consignor and the consignee and the point of origin and the destination. Air waybills Most airlines use a different form called an air waybill which lists additional items such as airport of destination, flight number, and time. Sea waybills The UK Carriage of Goods by Sea Act 1992 s.1(1) applies to: bills of lading s.1(2), sea waybills s.1(3), and ships' delivery orders s.1(4), ... whether in paper or electronic form s.1(5). Under s.1(3) of the Act, a sea waybill is: "any document which is not a bill of lading but is such a receipt for goods as contains a contract for the carriage of goods by sea; and identifies the person to whom delivery of the goods is to be made by the carrier in accordance with that contract". s.2 continues: "...a person who becomes the person who (without being an original party to the contract of carriage) is the person to whom delivery of the goods to which a sea waybill relates is to be made by the carrier in accordance with that contract ... shall (by virtue of becoming the person to whom delivery is to be made) have transferred to and vested in him all rights of suit under the contract of carriage as if he had been a party to the contract of carriage". Note: the UK's Contracts (Rights of Third Parties) Act 1999 does NOT apply to contracts for the carriage of goods by sea. See also Carriage of goods References Freight transport Business law
Waybill
Physics
466
26,124,772
https://en.wikipedia.org/wiki/Fort%20de%20Liouville
The Fort de Liouville, also known as Fort Stengel, located between the communes of Saint-Agnant-sous-les-Côtes and Saint-Julien-sous-les-Côtes, near the town of Commercy in the Meuse department of France, is one of the forts built at the end of the 19th century to defend the valley of the Meuse. The fort was located on what was then the French frontier facing the German-occupied province of Lorraine. The Fort de Liouville was located between the Fort de Gironville and the Camp des Romains. History In 1870, France was partly occupied by the Prussian army. As a result of this defeat, the Séré de Rivières system of fortifications was planned and constructed to defend the nation. Construction of the roughly rectangular fort, designed for a garrison of 691 troops, started in 1876. Work was completed in 1880, at a cost of 2,108,000 francs. The fort was updated between 1892 and 1910 with a protected magazine, replacement of caponiers with counterscarps, and preparations for a Mougin turret with two 155 mm guns. The Mougin turret and guns were installed in 1914, along with two machine gun turrets, a 75mm gun turret and two observation cloches. The fort was armed with a total of 40 artillery pieces in 1914. It was the only fort in the Hauts de Meuse line to receive concrete cover, but did not get concrete-protected barracks. The fort dominates the Woëvre valley and blocks the Marbotte and Lérouville gaps in the Hauts de Meuse, watching over the rail line to Lérouville. Unusually, the gorge, or entry side, of the fort faces German territory, since the fort is built on a west-facing escarpment. Fort de Liouville was bombarded by German artillery for a large portion of the war, with the heaviest fire between 22 September and 16 October 1914. The Mougin turret was hit by a 305mm German shell, but continued to fire with one gun until 28 September. The north ammunition magazine was penetrated by shellfire. The 75mm turret fired despite considerable trouble with its mechanism and numerous casualties until the fort was evacuated, and the turret was jammed by a direct 305mm hit. Infantry continued to hold the area, and the fort was not taken. The fort suffered significant damage to its casemates. It served as an observatory facing the German lines and a resting place for regiments taken out of combat. In 1938 a machine gun turret was removed and mounted at the Bastion de la Reine at the Citadel of Verdun. The fort displays an unusual degree of attention to design, with window frames resembling Gothic tracery. The fort is maintained by an association for its preservation. Batterie de Saint-Agnant A triangular position built in 1878–1880 as an annex to Liouville, with three 120mm guns prior to 1910. The position also had two Pamart casemate/cupolas for machine guns. It was severely bombarded during the First World War and is now abandoned. References External links Fort de Liouville site Fort de Liouville at fortiff.be Le fort de Liouville ou fort Stengel at fortiff' sere Séré de Rivières system World War I museums in France
Fort de Liouville
Engineering
681
59,431,451
https://en.wikipedia.org/wiki/Fraunhofer-Center%20for%20High%20Temperature%20Materials%20and%20Design%20HTL
The Fraunhofer Center for High Temperature Materials and Design (HTL) is a research center of the Fraunhofer Institute for Silicate Research in Würzburg, a research institute of the Fraunhofer Society. It predominantly conducts research in high-temperature technologies and energy-efficient heating processes, and thus contributes to sustainable technological progress. It is headquartered in Bayreuth and has additional locations in Würzburg and Münchberg. History The centre was founded in 2012 with the aim of pooling the ceramics research of the Fraunhofer ISC. Its research building in Bayreuth was opened in 2015 and was funded by the Bavarian Ministry for Economic Affairs, the German Federal Ministry of Education and Research, and the European Regional Development Fund. In 2014, the Fraunhofer Application Center for Textile Fiber Ceramics (TFK) was founded in cooperation with the Hof University of Applied Sciences. Since 2017, the premises of the Fraunhofer-Center HTL in Bayreuth have been extended by a technical center with a fiber pilot plant, scheduled for completion in late 2019. The costs for this plant amount to 20 million euros, predominantly covered by the Bavarian Ministry for Economic Affairs and the German Federal Ministry of Education and Research. The plant is one of a kind in Europe, and its goal is to establish production of ceramic fibers in Europe. Research areas The Fraunhofer-Center HTL has two business areas: thermal process technology and CMCs (ceramic matrix composites). One application of CMCs is the production of ceramic brakes, which are currently expensive to produce; the Fraunhofer-Center HTL is researching ways to reduce these costs. In the CMC business field, HTL has a closed manufacturing chain from fibre development to textile fibre processing to matrix construction to finishing and coating of CMC components. CMCs are characterised by high operating temperatures, corrosion resistance and damage tolerance and are therefore used to improve high-temperature processes. In addition, processes such as 3D printing are also available at the Fraunhofer Centre HTL for the production of metal and ceramic components with complex geometries. To test high-temperature materials and optimise their manufacturing processes, the Fraunhofer Centre HTL is developing ThermoOptic Measuring (TOM) furnaces. Materials and components can also be characterised using various non-destructive, mechanical and thermal testing methods. 
Focus of work Materials Material design: Calculation of the application properties of multiphase materials Ceramics: development of oxide, non-oxide and silicate ceramics along the entire manufacturing chain Metal-ceramic composites: Development of metal components and composites Ceramic fibres: Development of ceramic fibres from laboratory scale to pilot scale Ceramic coatings: Development and characterisation of liquid coating varnishes on behalf of customers and for sampling purposes Components Component design: Design of components made of ceramics, metals or composites using finite element (FE) modelling CMC components: Design and fabrication of CMC components using carbon, silicon carbide or oxide ceramic fibres 3D printing: manufacturing of prototypes and small series from ceramics, metals or metal-ceramic composites Manufacturing processes Textile technology: development of textile processing methods for inorganic fibres including sampling Heat processes: In-situ characterisation of the behaviour of solids and melts during the heating process as well as process optimisation Application firings: Conducting test firings and application firings in defined atmospheres Characterisation Materials testing: Non-destructive, mechanical and thermal measurement of the composition, microstructure and application properties of materials ThermoOptic Measurement (TOM): Simulation of industrial heat treatment processes in the temperature range from room temperature to over 2000 °C and in all relevant furnace atmospheres Industrial furnace analysis: recording of the energy balance as well as the temperature and atmosphere distribution in the production furnace Infrastructure Location Bayreuth At the Fraunhofer Centre HTL in Bayreuth, 80 office workplaces are available on an area of approx. 600 m2. The technical centre comprises 15 laboratories and halls on an area of approx. 2000 m2. Specialised technical equipment is in use there. These include: approx. 40 different industrial furnaces twelve thermo-optical measuring systems (TOM) specially developed at the HTL Stereolithography printers for ceramic components Powder bed printers for ceramics and metals CMC processing equipment equipment for non-destructive testing (computer tomography with a 225 kV and 450 kV radiation source, terahertz technology, ultrasound diagnostics, thermography) five-axis machining centre laser sintering system The fibre pilot plant opened at the Bayreuth site in 2019 increases the pilot plant area of the Fraunhofer Centre HTL by approx. 1200 m2 and is used for the production of ceramic reinforcement fibres and the development of new high-temperature resistant fibre types. Location Würzburg In the premises of the parent institute Fraunhofer ISC in Würzburg, the HTL has 20 office workstations, three laboratories and a pilot plant with an area of 630 m2. The facilities and spinning towers operated in Würzburg are used to develop ceramic fibres and ceramic coatings on a laboratory and pilot plant scale. Location Münchberg On the site of the Institute for Material Sciences ifm at Hof University of Applied Sciences, the Fraunhofer Centre HTL has 14 office workplaces as well as four laboratories and four pilot plants with an area of over 5,500 m2. A total of ten weaving looms of different sizes and types, a variable braiding machine, a double rapier weaving machine with single thread control and numerous systems for testing fibres, rovings and textiles are used. 
Cooperations Fraunhofer-Allianz AdvanCer Fraunhofer-Allianz Energie Fraunhofer-Allianz Leichtbau Fraunhofer-Allianz Textil References External links Fraunhofer-Center for High Temperature Materials and Design HTL Fraunhofer-Institute for Silicate Research Fraunhofer-Center for High Temperature Materials as part of the FUDIPO Project https://www.cem-wave.eu/ Organisations based in Germany Ceramics Ceramic materials Ceramic engineering Research and development in Germany Research in Germany
Fraunhofer-Center for High Temperature Materials and Design HTL
Engineering
1,291
2,549,843
https://en.wikipedia.org/wiki/Thallium%28III%29%20hydroxide
Thallium(III) hydroxide, Tl(OH)3, also known as thallic hydroxide, is a hydroxide of thallium. It is a white solid. Thallium(III) hydroxide is a very weak base; it dissociates to give the thallium(III) ion, Tl3+, only in strongly acidic conditions. Preparation Thallium(III) hydroxide can be produced by the reaction of thallium(III) chloride with sodium hydroxide or by the electrochemical oxidation of thallium(I) in alkaline conditions. References Hydroxides Thallium(III) compounds
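As an illustration of the first route just described, a plausible balanced equation (a straightforward metathesis reconstruction, not quoted from the article's sources) is: TlCl3 + 3 NaOH → Tl(OH)3 + 3 NaCl.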
Thallium(III) hydroxide
Chemistry
123
44,559,112
https://en.wikipedia.org/wiki/Stokes%20approximation%20and%20artificial%20time
This article provides an error analysis of time discretization applied to spatially discrete approximations of the stationary and nonstationary Navier-Stokes equations. The nonlinearity of the convection term is the main difficulty in solving stationary or nonstationary Navier-Stokes or Euler equation problems. Chorin introduced 'the method of artificial compressibility' to overcome this difficulty. Navier-Stokes equation Stokes approximation The Stokes approximation is obtained from the Navier-Stokes equations by omission of the convective term. For small Reynolds numbers in incompressible flow, this approximation is most useful; the incompressible Navier-Stokes equations can then be written with the linear diffusion term dominating the convection term. In the stationary problem, neglecting the convection term yields the stationary Stokes problem. Many theorems can be proved using this approach. The main problem in solving the incompressible flow equations is the decoupling of the continuity and momentum equations, due to the absence of a pressure or density term in the continuity equation. Chorin proposed a solution to this pressure-decoupling problem; the approach is called artificial compressibility. It is assumed that, as t→∞, the nonstationary Navier-Stokes problem converges towards the solution of the corresponding stationary problem, and that this solution does not depend on the added function. If the continuity equation is augmented with a time derivative of pressure in this way, the long-time solution of the system of Navier-Stokes and continuity equations coincides with the stationary solution of the original Navier-Stokes problem. This process also introduces the notion of artificial time, as t→∞. The artificial compressibility method is combined with a dual time stepping procedure which involves iteration in pseudo-time within each physical time step. This guarantees convergence towards the solution of the incompressible flow problem. References External links https://books.google.com/books?isbn=3527627979 Fluid dynamics
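The displayed equations are missing from this copy of the article; the following is a hedged LaTeX reconstruction of the standard forms being described, with u the velocity field, p the pressure, ν the kinematic viscosity, f a body force, and ε an artificial compressibility parameter (the symbol ε is chosen here for illustration):

\[ \frac{\partial u}{\partial t} = -\nabla p + \nu \nabla^{2} u, \qquad \nabla \cdot u = 0 \quad \text{(Stokes approximation, convective term omitted)} \]
\[ -\nu \nabla^{2} u + \nabla p = f, \qquad \nabla \cdot u = 0 \quad \text{(stationary Stokes problem)} \]
\[ \varepsilon \, \frac{\partial p}{\partial t} + \nabla \cdot u = 0 \quad \text{(artificial compressibility replacing the continuity equation)} \]

As t → ∞ the pressure derivative term vanishes, so the modified system relaxes to the stationary incompressible problem, which is the convergence claim made in the text.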
Stokes approximation and artificial time
Chemistry,Engineering
391
4,083,646
https://en.wikipedia.org/wiki/Electrofusion
Electrofusion is a method of joining MDPE, HDPE and other plastic pipes using special fittings that have built-in electric heating elements which are used to weld the joint together. The pipes to be joined are cleaned and inserted into the electrofusion fitting; alignment clamps are fitted, and a voltage (typically 40 V) is applied for a fixed time depending on the fitting in use. The built-in heater coils then melt the inside of the fitting and the outside of the pipe wall, which weld together producing a very strong homogeneous joint. The assembly is then left to cool for a specified time. Electrofusion welding is beneficial because it does not require the operator to use dangerous or sophisticated equipment. After some preparation, the electrofusion welder will guide the operator through the steps to take. Welding heat and time depend on the type and size of the fitting. Not all electrofusion fittings are created equal: precise positioning of the energising coils of wire in each fitting ensures uniform melting for a strong joint and the minimisation of welding and cooling time. The operator must be qualified according to the local and national laws. In Australia, an electrofusion course can be completed within 8 hours. Electrofusion welding training focuses on the importance of accurately fusing EF fittings. Covering both manual and automatic methods of calculating electrofusion time gives operators the skills they need in the field. There is much to learn about the importance of preparation, timing, pressure, temperature, cool-down time, handling, and so on. Training and certification are very important in this field of welding, as the product can become dangerous under certain circumstances. There have been cases of major harm and death, including when molten polyethylene spurted out of the edge of a misaligned weld, causing skin burns. In another case, a tapping saddle was incorrectly installed on a gas line, causing the death of two welders in the trench due to gas inhalation. There are many critical aspects of electrofusion welding that can cause weld failures, most of which can be greatly reduced by using welding clamps and correct scraping equipment. To keep their qualification current, a trained operator can have a fitting tested, which involves cutting open the fitting and examining the integrity of the weld. References Piping Plumbing
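The welding step is simple resistive heating, so the magnitudes involved follow from Ohm's law. The sketch below is illustrative only: the 40 V figure comes from the article, while the coil resistance and fusion time are hypothetical example values (real parameters are encoded on each fitting).

# Illustrative resistive-heating estimate for an electrofusion weld.
# The voltage is the article's typical value; resistance and time are
# assumed example numbers, not data for any real fitting.
VOLTAGE_V = 40.0
coil_resistance_ohm = 3.2   # hypothetical heater-coil resistance
fusion_time_s = 90.0        # hypothetical fusion time

power_w = VOLTAGE_V ** 2 / coil_resistance_ohm   # P = V^2 / R
energy_kj = power_w * fusion_time_s / 1000.0     # E = P * t

print(f"Heater power: {power_w:.0f} W")                          # 500 W
print(f"Energy over {fusion_time_s:.0f} s: {energy_kj:.1f} kJ")  # 45.0 kJ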
Electrofusion
Chemistry,Engineering
470
64,368,349
https://en.wikipedia.org/wiki/Alexios%20Polychronakos
Alexios Polychronakos (born 1959, in Greece) is a theoretical physicist. He studied electrical engineering at the National Technical University of Athens (diploma in 1982) and did graduate work in theoretical physics at the California Institute of Technology (Ph.D. 1987) under the supervision of John Preskill. Polychronakos is a professor of physics at the City College of New York. He is considered an authority on quantum field theory, quantum statistics, anyons, integrable systems, and quantum fluids, having authored over 110 refereed papers. He is a Fellow of the American Physical Society (2012), with the citation "For important contributions to the field of statistical mechanics and integrable systems, including the Polychronakos model and the exchange operator formalism, fractional statistics, matrix model description of quantum Hall systems as well as other areas such as noncommutative geometry". References External links Polychronakos' profile at CUNY Inspire profile Google scholar profile 20th-century Greek physicists 21st-century American physicists California Institute of Technology alumni Living people Particle physicists Fellows of the American Physical Society Theoretical physicists Mathematical physicists 1959 births
Alexios Polychronakos
Physics
238
286,906
https://en.wikipedia.org/wiki/NoteCards
NoteCards was a hypertext-based personal knowledge base system developed at Xerox PARC by Randall Trigg, Frank Halasz and Thomas Moran in 1984. NoteCards was developed after Trigg's pioneering 1983 Ph.D. thesis on hypertext while at the University of Maryland College Park. NoteCards was built to model four basic kinds of objects: notecards, links, browsers, and fileboxes. Each window is an analog of a cue card; window sizes may vary, but contents cannot scroll. Local and global maps are available through browsers. There are over 40 different node types which support various media. NoteCards was implemented in LISP on D-machine workstations from Xerox which used large, high-resolution displays. The NoteCards interface is event-driven. One interesting feature of NoteCards is that authors may use LISP commands to customize or create entirely new node types. The powerful programming language allows almost complete customization of the entire NoteCards work environment. Availability NoteCards was available commercially from the Common Lisp software vendor Venue, compiled for Solaris 2.5 and 7 (untested on later versions) and Linux x86 with the X Window System. References External links Online version of NoteCards Xerox PARC archive of Lisp code that includes the 1984–1989 version of NoteCards Source code for NoteCards 2.0, patched to run in modern emulators Hypertext HyperCard products
NoteCards
Technology
298
34,692,740
https://en.wikipedia.org/wiki/Acetonide
In organic chemistry, an acetonide is the functional group composed of the cyclic ketal of a diol with acetone. The more systematic name for this structure is an isopropylidene ketal. Acetonide is a common protecting group for 1,2- and 1,3-diols. The protecting group can be removed by hydrolysis of the ketal using dilute aqueous acid. Example The acetonides of small di- and triols, as well as many sugars and sugar alcohols, are common. The hexaol mannitol reacts with 2,2-dimethoxypropane to give the bis-acetonide, which oxidizes to give the acetonide of glyceraldehyde: (CHOHCHOHCH2OH)2 + 2 (MeO)2CMe2 → (CHOHCHCH2O2CMe2)2 + 4 MeOH (CHOHCHCH2O2CMe2)2 + [O] → 2 OCHCHCH2O2CMe2 + H2O An example of its use as a protecting group in a complex organic synthesis is the Nicolaou Taxol total synthesis. It is a common protecting group for sugars and sugar alcohols, a simple example being solketal. The acetonides of corticosteroids are used in dermatology, because their increased lipophilicity leads to better penetration into the skin. Fluclorolone acetonide Fluocinolone acetonide Triamcinolone acetonide See also Acetophenide Acroleinide Aminobenzal Cyclopentanonide Pentanonide References
Acetonide
Chemistry
351
12,534,781
https://en.wikipedia.org/wiki/Wildlife%20of%20Western%20Sahara
The wildlife of Western Sahara is composed of its flora and fauna. It has 40 species of mammals and 207 species of birds. Fauna Mammals Birds References Western Sahara
Wildlife of Western Sahara
Biology
33
895,348
https://en.wikipedia.org/wiki/Tetraoctylammonium%20bromide
Tetraoctylammonium bromide (TOAB or TOABr) is a quaternary ammonium compound with the chemical formula [CH3(CH2)7]4NBr. It is generally used as a phase transfer catalyst between an aqueous solution and an organic solution. References Quaternary ammonium compounds
Tetraoctylammonium bromide
Chemistry
71
6,022,368
https://en.wikipedia.org/wiki/Medium-density%20polyethylene
Medium-density polyethylene (MDPE) is a type of polyethylene defined by a density range of 0.926–0.940 g/cm3. It is less dense than HDPE, which is more common. MDPE can be produced by chromium/silica catalysts, Ziegler-Natta catalysts or metallocene catalysts. MDPE has good shock and drop resistance properties. It also is less notch sensitive than HDPE. Stress cracking resistance is better than that of HDPE. MDPE is typically used in gas pipes and fittings, sacks, shrink film, packaging film, carrier bags, and screw closures. In the United Kingdom, black (or blue) MDPE is often used for water and waste water plumbing, and may also be referred to as 'black alkathene.' See also Cross-linked polyethylene (PEX) High-density polyethylene (HDPE) Electrofusion Linear low-density polyethylene (LLDPE) Low-density polyethylene (LDPE) Plastic recycling Stretch wrap Ultra-high-molecular-weight polyethylene (UHMWPE) References External links Medium Density Polyethylene Specialty Rinse Tanks MDPE UK Supplier Packing and Shipping Supplies Polyolefins Plastics Packaging materials
Medium-density polyethylene
Physics,Chemistry
279
49,222,949
https://en.wikipedia.org/wiki/NGC%204762
NGC 4762 is an edge-on lenticular galaxy in the constellation Virgo. It is at a distance of 60 million light years and is a member of the Virgo Cluster. The edge-on view of this particular galaxy, originally considered to be a barred spiral galaxy, makes it difficult to determine its true shape, but it is considered that the galaxy consists of four main components: a central bulge, a bar, a thick disc and an outer ring. The galaxy's disc is asymmetric and warped, which could be explained by NGC 4762 merging with a smaller galaxy in the past. The remains of this former companion may then have settled within NGC 4762's disc, redistributing the gas and stars and so changing the disc's morphology. NGC 4762 contains a LINER-type active galactic nucleus, a highly energetic central region. This nucleus is detectable due to its particular spectral line emission, allowing astronomers to measure the composition of the region. NGC 4762 forms a non-interacting pair with the galaxy NGC 4754. References External links NED entry on NGC 4762 Lenticular galaxies LINER galaxies Virgo (constellation) Virgo Cluster 4762 08016 43733
NGC 4762
Astronomy
254
13,637
https://en.wikipedia.org/wiki/Hausdorff%20space
In topology and related branches of mathematics, a Hausdorff space, T2 space or separated space, is a topological space where distinct points have disjoint neighbourhoods. Of the many separation axioms that can be imposed on a topological space, the "Hausdorff condition" (T2) is the most frequently used and discussed. It implies the uniqueness of limits of sequences, nets, and filters. Hausdorff spaces are named after Felix Hausdorff, one of the founders of topology. Hausdorff's original definition of a topological space (in 1914) included the Hausdorff condition as an axiom. Definitions Points x and y in a topological space X can be separated by neighbourhoods if there exists a neighbourhood U of x and a neighbourhood V of y such that U and V are disjoint (U ∩ V = ∅). X is a Hausdorff space if any two distinct points in X are separated by neighbourhoods. This condition is the third separation axiom (after T0 and T1), which is why Hausdorff spaces are also called T2 spaces. The name separated space is also used. A related, but weaker, notion is that of a preregular space. X is a preregular space if any two topologically distinguishable points can be separated by disjoint neighbourhoods. A preregular space is also called an R1 space. The relationship between these two conditions is as follows. A topological space is Hausdorff if and only if it is both preregular (i.e. topologically distinguishable points are separated by neighbourhoods) and Kolmogorov (i.e. distinct points are topologically distinguishable). A topological space is preregular if and only if its Kolmogorov quotient is Hausdorff. Equivalences For a topological space X, the following are equivalent: X is a Hausdorff space. Limits of nets in X are unique. Limits of filters on X are unique. Any singleton set {x} ⊂ X is equal to the intersection of all closed neighbourhoods of x. (A closed neighbourhood of x is a closed set that contains an open set containing x.) The diagonal Δ = {(x, x) | x ∈ X} is closed as a subset of the product space X × X. Any injection from the discrete space with two points to X has the lifting property with respect to the map from the finite topological space with two open points and one closed point to a single point. Examples of Hausdorff and non-Hausdorff spaces Almost all spaces encountered in analysis are Hausdorff; most importantly, the real numbers (under the standard metric topology on real numbers) are a Hausdorff space. More generally, all metric spaces are Hausdorff. In fact, many spaces of use in analysis, such as topological groups and topological manifolds, have the Hausdorff condition explicitly stated in their definitions. A simple example of a topology that is T1 but is not Hausdorff is the cofinite topology defined on an infinite set, as is the cocountable topology defined on an uncountable set. Pseudometric spaces typically are not Hausdorff, but they are preregular, and their use in analysis is usually only in the construction of Hausdorff gauge spaces. Indeed, when analysts run across a non-Hausdorff space, it is still probably at least preregular, and then they simply replace it with its Kolmogorov quotient, which is Hausdorff. In contrast, non-preregular spaces are encountered much more frequently in abstract algebra and algebraic geometry, in particular as the Zariski topology on an algebraic variety or the spectrum of a ring.
They also arise in the model theory of intuitionistic logic: every complete Heyting algebra is the algebra of open sets of some topological space, but this space need not be preregular, much less Hausdorff, and in fact usually is neither. The related concept of Scott domain also consists of non-preregular spaces. While the existence of unique limits for convergent nets and filters implies that a space is Hausdorff, there are non-Hausdorff T1 spaces in which every convergent sequence has a unique limit. Such spaces are called US spaces. For sequential spaces, this notion is equivalent to being weakly Hausdorff. Properties Subspaces and products of Hausdorff spaces are Hausdorff, but quotient spaces of Hausdorff spaces need not be Hausdorff. In fact, every topological space can be realized as the quotient of some Hausdorff space. Hausdorff spaces are T1, meaning that each singleton is a closed set. Similarly, preregular spaces are R0. Every Hausdorff space is a sober space although the converse is in general not true. Another property of Hausdorff spaces is that each compact set is a closed set. For non-Hausdorff spaces, it can be that each compact set is a closed set (for example, the cocountable topology on an uncountable set) or not (for example, the cofinite topology on an infinite set and the Sierpiński space). The definition of a Hausdorff space says that points can be separated by neighborhoods. It turns out that this implies something which is seemingly stronger: in a Hausdorff space every pair of disjoint compact sets can also be separated by neighborhoods, in other words there is a neighborhood of one set and a neighborhood of the other, such that the two neighborhoods are disjoint. This is an example of the general rule that compact sets often behave like points. Compactness conditions together with preregularity often imply stronger separation axioms. For example, any locally compact preregular space is completely regular. Compact preregular spaces are normal, meaning that they satisfy Urysohn's lemma and the Tietze extension theorem and have partitions of unity subordinate to locally finite open covers. The Hausdorff versions of these statements are: every locally compact Hausdorff space is Tychonoff, and every compact Hausdorff space is normal Hausdorff. The following results are some technical properties regarding maps (continuous and otherwise) to and from Hausdorff spaces. Let f : X → Y be a continuous function and suppose Y is Hausdorff. Then the graph of f, {(x, f(x)) | x ∈ X}, is a closed subset of X × Y. Let f : X → Y be a function and let ker(f) = {(x, x′) | f(x) = f(x′)} be its kernel regarded as a subspace of X × X. If f is continuous and Y is Hausdorff then ker(f) is a closed set. If f is an open surjection and ker(f) is a closed set then Y is Hausdorff. If f is a continuous, open surjection (i.e. an open quotient map) then Y is Hausdorff if and only if ker(f) is a closed set. If f, g : X → Y are continuous maps and Y is Hausdorff then the equalizer eq(f, g) = {x | f(x) = g(x)} is a closed set in X. It follows that if Y is Hausdorff and f and g agree on a dense subset of X then f = g. In other words, continuous functions into Hausdorff spaces are determined by their values on dense subsets. Let f : X → Y be a closed surjection such that f−1(y) is compact for all y ∈ Y. Then if X is Hausdorff so is Y. Let f : X → Y be a quotient map with X a compact Hausdorff space. Then the following are equivalent: Y is Hausdorff. f is a closed map. ker(f) is a closed set. Preregularity versus regularity All regular spaces are preregular, as are all Hausdorff spaces. There are many results for topological spaces that hold for both regular and Hausdorff spaces.
Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later. On the other hand, those results that are truly about regularity generally do not also apply to nonregular Hausdorff spaces. There are many situations where another condition of topological spaces (such as paracompactness or local compactness) will imply regularity if preregularity is satisfied. Such conditions often come in two versions: a regular version and a Hausdorff version. Although Hausdorff spaces are not, in general, regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular. Thus from a certain point of view, it is really preregularity, rather than regularity, that matters in these situations. However, definitions are usually still phrased in terms of regularity, since this condition is better known than preregularity. See History of the separation axioms for more on this issue. Variants The terms "Hausdorff", "separated", and "preregular" can also be applied to such variants on topological spaces as uniform spaces, Cauchy spaces, and convergence spaces. The characteristic that unites the concept in all of these examples is that limits of nets and filters (when they exist) are unique (for separated spaces) or unique up to topological indistinguishability (for preregular spaces). As it turns out, uniform spaces, and more generally Cauchy spaces, are always preregular, so the Hausdorff condition in these cases reduces to the T0 condition. These are also the spaces in which completeness makes sense, and Hausdorffness is a natural companion to completeness in these cases. Specifically, a space is complete if and only if every Cauchy net has at least one limit, while a space is Hausdorff if and only if every Cauchy net has at most one limit (since only Cauchy nets can have limits in the first place). Algebra of functions The algebra of continuous (real or complex) functions on a compact Hausdorff space is a commutative C*-algebra, and conversely by the Banach–Stone theorem one can recover the topology of the space from the algebraic properties of its algebra of continuous functions. This leads to noncommutative geometry, where one considers noncommutative C*-algebras as representing algebras of functions on a noncommutative space. Academic humour The Hausdorff condition is illustrated by the pun that in Hausdorff spaces any two points can be "housed off" from each other by open sets. In the Mathematics Institute of the University of Bonn, in which Felix Hausdorff researched and lectured, there is a certain room designated the Hausdorff-Raum. This is a pun, as Raum means both room and space in German. See also Fixed-point space, a Hausdorff space X such that every continuous function f : X → X has a fixed point. Notes References Separation axioms Properties of topological spaces
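For convenience, the separation condition defined above can be restated compactly; the following LaTeX rendering uses standard notation supplied here for illustration, not text recovered from the source:

\[ X \text{ is Hausdorff} \iff \forall x, y \in X,\ x \neq y \implies \exists\, \text{open } U \ni x,\ V \ni y \text{ with } U \cap V = \varnothing . \]

Equivalently, as listed among the equivalences above, X is Hausdorff exactly when the diagonal \( \Delta = \{(x, x) : x \in X\} \) is closed in \( X \times X \).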
Hausdorff space
Mathematics
2,187
43,849,047
https://en.wikipedia.org/wiki/Askold%20Khovanskii
Askold Georgievich Khovanskii (born 3 June 1947, Moscow) is a Russian and Canadian mathematician, currently a professor of mathematics at the University of Toronto, Canada. His areas of research are algebraic geometry, commutative algebra, singularity theory, differential geometry and differential equations. His research includes the development of the theory of toric varieties and Newton polyhedra in algebraic geometry. He is also the inventor of the theory of fewnomials, and the Bernstein–Khovanskii–Kushnirenko theorem is named after him. He obtained his Ph.D. from the Steklov Mathematical Institute in Moscow under the supervision of Vladimir Arnold. In his Ph.D. thesis, he developed a topological version of Galois theory. He studies the theory of Newton–Okounkov bodies, or Okounkov bodies for short. Among his graduate students are Olga Gel'fond, Feodor Borodich, H. Petrov-Tan'kin, Kiumars Kaveh, Farzali Izadi, Ivan Soprunov, Jenya Soprunova, Vladlen Timorin, Valentina Kirichenko, Sergey Chulkov, V. Kisunko, Mikhail Mazin, O. Ivrii, K. Matveev, Yuri Burda, and J. Yang. In 2014, he received the Jeffery–Williams Prize of the Canadian Mathematical Society for outstanding contributions to mathematical research in Canada. References External links Homepage of Askold Khovanskii at the University of Toronto Moscow Mathematical Journal volume in honor of Askold Khovanskii (Mosc. Math. J., 7:2 (2007), 169–171) Askoldfest 1947 births Living people Russian mathematicians Canadian mathematicians Moscow State University alumni Steklov Institute of Mathematics alumni Academic staff of the Independent University of Moscow Academic staff of the University of Toronto Geometers Russian people of Lithuanian descent Algebraic geometers Soviet mathematicians
Askold Khovanskii
Mathematics
406
26,274,346
https://en.wikipedia.org/wiki/Aggressionism
Aggressionism is a philosophical theory that the only real cause of war is human aggression, which refers to the "general tendency to attack members of one's species." It is argued that aggression is a natural response to defend vital interests such as territory, family, or identity if they are threatened. This theory has dominated much evolutionary thought about human nature. Many evolutionary biologists discount aggressionism on the grounds that, if it were true, aggression would drive the human species to extinction through war: if homicide were the norm, the human species would have wiped itself out millions of years ago. There is also the claim that aggression is not a universal instinct in the animal kingdom. However, some sources note that aggression serves the animal kingdom well, since it brings about a balanced distribution of animals of the same species over the available environment, and that it can be viewed as a universal, externally directed drive that is possibly connected to a survival instinct. Concept The concept of aggressionism is based on the root word "aggression." In this particular concept, aggression occurs in all species as a means of protecting their own kind or their territory and of keeping their young safe. However, though most species defend against predators, some also defend against members of their own species. For example, lions are very territorial and fight other adult male lions to keep their status as the alpha. Similarly, if an invader attacks, human instinct is to defend oneself and fend the attacker off. When necessary, most species become aggressive in order to obtain food and survive. Yet aggressionism is not the same as aggression. Aggressionism applies the concept of aggression specifically to humans, whose aggression is more complex than a simple drive to survive. This is demonstrated in one of its definitions describing it as "the action of a state in violating by force the rights of another state, particularly its territorial rights; an unprovoked offensive, attack, invasion, or the like..." or a "hostile or destructive mental attitude or behavior", which leads to conflict and eventually bloodshed. Aggressionism describes human nature in its hostile form, arising when the ideologies of different people do not coincide. However, the hostility involved is not the direct kind seen in street fights; it is expressed in a composed manner between the leaders of nations or organizations, and it leads to war. In this perspective, the hostility is contained because the parties retain respect for each other. Rather than being savage like animals, humans use their intellect to defeat their opponents in war, placing pride, greed, and faith in their own skill in leading their nation to victory. Before a war starts, there is always disagreement between the leaders, yet rarely open rage; calmly, each side states that it is unfortunate that the two nations disagree, and the leaders return to their respective countries to declare war. Cause of War Although aggressionism directly holds that humans are the cause of war, there are more direct reasons for conflicts to escalate into war. Aggressionism is a theory that describes complex behavior of human nature involving strong belief in one's own ideology. It describes people who cannot see the views of others and regard their own view as the only right one in the world.
Throughout history, a number of leaders have fit this description and caused wars. Examples of Political Leaders Who Displayed Aggressionism Adolf Hitler is a primary example of a person who displayed aggressionism. During his reign, he installed a government that practiced fascism, which is a form of statism: a radical authoritarian nationalism in which the nation is ruled by dictatorial power with overwhelming control over all aspects of the country, including its economy, society, and beliefs. Hitler strongly believed that the Jews were the cause of Germany's loss in World War I, and his ideology came to revolve around hatred of the Jewish people. His aggressionism therefore took the form of war, beginning with the invasion of Poland. Joseph Stalin is another example of a leader who displayed aggressionism. Stalin's aggressionism, however, was more subtle than Hitler's. Stalin believed that with his dictatorial power he could bring Russia out of its famine and spread his ideology of communism to the rest of the world. During Stalin's reign, he transformed Russia into "an industrial and military superpower." He created programs to boost the food supply and the economy; however, these killed millions. After World War II, the Soviet Union and the US became superpowers, and growing tension between the two countries started the Cold War. To gain an advantage over the other, Stalin attempted to spread communism to other states, countries, and nations; hence the subtle aggressionism. Unlike Hitler, he spread his ideology to other leaders, including China's Mao Zedong. Source of Aggressionism In both of these examples, basic aspects of human nature caused the ideologies to take form. Hitler displayed overwhelming hatred of the Jews, rooted in his German nationalism. Hate, one of the most basic emotions, was the source of his aggression towards the millions of Jews who were killed during the Holocaust. The war was caused by his inhumane actions towards a specific group of people; killing people while believing one has the right to do so is one of the most lethal forms of aggression. Stalin, for his part, created a country through his dictatorship and the ideology of communism. He imposed his beliefs upon his own people through his plans to create a country that would be seen as a military superpower, and many died as a result of the famine and his plan to boost agriculture. The source of these actions was his belief in the ideology of Marxism/Leninism. See also Aggression Death drive Homo homini lupus Thoughts for the Times on War and Death References Aggression Peace and conflict studies Philosophical anthropology War Social theories
Aggressionism
Biology
1,249
48,965,973
https://en.wikipedia.org/wiki/Kolonna%20Eterna
Kolonna Eterna, also known as the Millennium Monument, is a 21st-century monumental column in San Gwann, Malta. The column is an abstract artwork designed by Paul Vella Critien, a local Maltese artist who completed his studies and gained experience in Italy and Australia. The monument commemorates the new (third) millennium as part of an initiative by the San Gwann Local Council. The monument was inaugurated in 2003 by the Prime Minister of Malta, Eddie Fenech Adami. The monument came to national attention because it was widely described as having a phallic appearance. The monument is found in front of Santa Margerita Chapel. History The Kolonna Eterna was the first local monument by Paul Vella Critien to be installed in a public space, and it was unveiled on 27 February 2003. Behind the project was the San Gwann Local Council, which promoted the idea of decorating public gardens with works by well-established local artists. Paul Vella Critien received his art education in Italy and had already built a career as an artist while living in Australia. Since its erection the monument has caught the attention of the public because of its phallic appearance; however, it is intended to represent an Egyptian obelisk pointing to the open skies as a symbol of eternity. The 6-metre ceramic structure was inaugurated by the then Prime Minister Eddie Fenech Adami, later President of Malta. The monument had a public ceremony that was attended by the Prime Minister himself, the artist, the local mayor of San Gwann, local councillors, members of the Nationalist Party, distinguished politicians, the general public and local media such as the Times of Malta. Subsequent to the Kolonna Eterna, Paul Vella Critien was invited by the Government of Malta, under Prime Minister Lawrence Gonzi, to create another monument. This work, the Colonna Mediterranea in Luqa, Malta, is different but has a similarly phallic appearance. Unlike the Kolonna Eterna, the Luqa monument had no legal permits for its erection, faced staunch opposition from the local mayor, stands on the periphery of Luqa outside the responsibility of the local council, and attracted local opposition specifically because of the visit of Pope Benedict XVI to Malta, during which the popemobile had to pass by it. In San Gwann, by contrast, several artistic monuments have been erected in different places and the Kolonna Eterna has largely integrated into the landscape of the area, even if some locals have called for its removal because of its phallic nature. In 2015 Paul Vella Critien inaugurated another monument, at Naxxar Higher Secondary School, which attracted no similar controversy. On the lower back side of the Kolonna Eterna it is written: Plaque On the plaque uncovered by Eddie Fenech Adami it is written: See also Colonna Mediterranea Phallic architecture Phallus Landmarks Egyptian obelisk References San Ġwann Monuments and memorials in Malta Phallic monuments Buildings and structures completed in 2003 Roundabouts and traffic circles 2003 establishments in Malta Architectural controversies Monumental columns Phallic symbols Controversies in Malta Buildings and structures celebrating the third millennium
Kolonna Eterna
Engineering
661
10,096,234
https://en.wikipedia.org/wiki/Eyespot%20apparatus
The eyespot apparatus (or stigma) is a photoreceptive organelle found in the flagellate (motile) cells of green algae and other unicellular photosynthetic organisms such as euglenids. It allows the cells to sense light direction and intensity and respond to it, prompting the organism to either swim towards the light (positive phototaxis), or away from it (negative phototaxis). A related response ("photoshock" or photophobic response) occurs when cells are briefly exposed to high light intensity, causing the cell to stop, briefly swim backwards, then change swimming direction. Eyespot-mediated light perception helps the cells in finding an environment with optimal light conditions for photosynthesis. Eyespots are the simplest and most common "eyes" found in nature, composed of photoreceptors and areas of bright orange-red pigment granules. Signals relayed from the eyespot photoreceptors result in alteration of the beating pattern of the flagella, generating a phototactic response. Microscopic structure Under the light microscope, eyespots appear as dark, orange-reddish spots or stigmata. They get their color from carotenoid pigments contained in bodies called pigment granules. The photoreceptors are found in the plasma membrane overlaying the pigmented bodies. The eyespot apparatus of Euglena comprises the paraflagellar body connecting the eyespot to the flagellum. In electron microscopy, the eyespot apparatus appears as a highly ordered lamellar structure formed by membranous rods in a helical arrangement. In Chlamydomonas, the eyespot is part of the chloroplast and takes on the appearance of a membranous sandwich structure. It is assembled from chloroplast membranes (outer, inner, and thylakoid membranes) and carotenoid-filled granules overlaid by plasma membrane. The stacks of granules act as a quarter-wave plate, reflecting incoming photons back to the overlying photoreceptors, while shielding the photoreceptors from light coming from other directions. It disassembles during cell division and reforms in the daughter cells in an asymmetric fashion in relation to the cytoskeleton. This asymmetric positioning of the eyespot in the cell is essential for proper phototaxis. Eyespot proteins The most critical eyespot proteins are the photoreceptor proteins that sense light. The photoreceptors found in unicellular organisms fall into two main groups: flavoproteins and retinylidene proteins (rhodopsins). Flavoproteins are characterized by containing flavin molecules as chromophores, whereas retinylidene proteins contain retinal. The photoreceptor protein in Euglena is likely a flavoprotein. In contrast, Chlamydomonas phototaxis is mediated by archaeal-type rhodopsins. Besides photoreceptor proteins, eyespots contain a large number of structural, metabolic and signaling proteins. The eyespot proteome of Chlamydomonas cells consists of roughly 200 different proteins. Photoreception and signal transduction The Euglena photoreceptor was identified as a blue-light-activated adenylyl cyclase. Excitation of this receptor protein results in the formation of cyclic adenosine monophosphate (cAMP) as a second messenger. Chemical signal transduction ultimately triggers changes in flagellar beat patterns and cell movement. The archaeal-type rhodopsins of Chlamydomonas contain an all-trans retinylidene chromophore which undergoes photoisomerization to a 13-cis isomer. This activates a photoreceptor channel, leading to a change in membrane potential and cellular calcium ion concentration.
Photoelectric signal transduction ultimately triggers changes in flagellar strokes and thus cell movement. See also Evolution of the eye Ocelloid References Sensory receptors Signal transduction Pigments Integral membrane proteins Organelles Molecular biology
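The quarter-wave-plate behaviour of the granule stacks mentioned above follows the standard thin-film interference condition; the relation below is the textbook formula, supplied here rather than taken from the source, with d the layer thickness, n its refractive index, and λ the wavelength reflected constructively:

\[ n \, d = \frac{\lambda}{4} \]

Alternating layers whose optical thickness satisfies this condition reflect incident light back through the overlying photoreceptors while blocking light arriving from behind, consistent with the directional shielding described above.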
Eyespot apparatus
Chemistry,Biology
875
57,395,379
https://en.wikipedia.org/wiki/THESEUS
Transient High-Energy Sky and Early Universe Surveyor (THESEUS) is a space telescope mission proposal by the European Space Agency that would study gamma-ray bursts and X-rays for investigating the early universe. If developed, the mission would investigate star formation rates and metallicity evolution, as well as studying the sources and physics of reionization. Overview THESEUS is a mission concept that would monitor transient events in the high-energy Universe across the whole sky and over the entirety of cosmic history. In particular, it expects to make a complete census of gamma-ray bursts (GRBs) from the Universe's first billion years, to help understand the life cycle of the first stars. THESEUS would provide real-time triggers and accurate locations of the sources, which could also be followed up by other space- or ground-based telescopes operating at complementary wavelengths. The concept was selected in May 2018 as a finalist to become the fifth Medium-class mission (M5) of the Cosmic Vision programme by the European Space Agency (ESA). The other finalist was EnVision, a Venus orbiter. The winner, EnVision, was selected in June 2021 for launch in 2031. In November 2023, following a new selection process (2022) and a Phase-0 study (2023), THESEUS was selected by ESA for a new 2.5-year Phase-A study as one of the three candidate M7 missions (together with M-Matisse and Plasma Observatory). The space observatory would study GRBs and X-rays and their association with the explosive death of massive stars, supernova shock break-outs, black hole tidal disruption events, and magnetar flares. This can provide fundamental information on the cosmic star formation rate, the number density and properties of low-mass galaxies, the neutral hydrogen fraction, and the escape fraction of ultraviolet photons from galaxies. Scientific payload The conceptual payload of THESEUS includes: Soft X-ray Imager (SXI), sensitive to 0.3-6 keV, is a set of 4 lobster-eye telescope units, covering a total field of view (FOV) of 1 sr with source location accuracy <1-2 arcmin. InfraRed Telescope (IRT), sensitive to 0.7-1.8 μm, is a 0.7 m NIR telescope with 15x15 arcmin FOV, for fast response, with both imaging and moderate spectroscopic capabilities (R~400). Mass: 112.6 kg. X-Gamma ray Imaging Spectrometer (XGIS), sensitive to 2 keV-20 MeV, is a set of coded-mask cameras using monolithic X-gamma ray detectors based on bars of silicon diodes coupled with CsI crystal scintillator, granting a 1.5 sr FOV, a source location accuracy of 5 arcmin in 2-30 keV and an unprecedentedly broad energy band. Mass: 37.3 kg. See also Gamma-ray astronomy List of proposed space observatories X-ray astronomy References Cosmic Vision Gamma-ray telescopes X-ray telescopes Space telescopes European Space Agency satellites Classical mythology in popular culture 2010s in science 2020s in science 2037 in science
THESEUS
Astronomy
654
46,416,771
https://en.wikipedia.org/wiki/Elizabeth%20Johnson%20%28pamphleteer%29
Elizabeth Johnson, née Reynolds (8 July 1721 – 14 May 1800), was an English pamphleteer who attempted to win one of the rewards offered under the Longitude Act of 1714, which promised monetary rewards to anyone who could find a simple and practical method for the precise determination of a ship's longitude. Johnson and Jane Squire are the only two women known to have made such an attempt, as it was not considered an appropriate subject for early modern women, especially given its financial, maritime, and government dimensions. Early background She was born to the Rev. Samuel Reynolds (1681–1745) and his wife Theophilia (1688–1756) in Plympton, Devon. Among her siblings was the acclaimed artist Sir Joshua Reynolds, who used her as a model for works which were widely copied in mezzotint. The two would later quarrel over Joshua's lack of piety and over her husband's precarious financial situation and eventual bankruptcy. Other siblings included the author Mary Palmer and the painter Frances Reynolds. Publications Johnson's religious pamphlets, beginning with The Explication of the Vision to Ezekiel in 1781, were written anonymously – likely to evade any criticism of women publishing or expressing religious ideas. One critic sarcastically commented on her earlier works in 1783: "As the intentions of this writer are pious, his facilities evidently disordered, and his lucubrations absolutely unintelligible, these three pamphlets must be exempted from criticism." William Johnson Cory later revealed the true identity of the pamphlets' author in a handwritten inscription on one of the Bodleian Library's copies of the Ezekiel pamphlet: "This strange book was written by my great-grandmother Mrs. Johnson, sister of Sir Joshua Reynolds. When extremely poor she posted up to Oxford to get it published, being a real enthusiast." Longitude The Astronomy and Geography of the Created World, her fourth pamphlet, published in 1785, included a short reference to longitude. The pamphlet ended with the claim "that if the palm for finding the longitude, is not given to the author of the Explanation of the Vision to Ezikiel it will never be given to another". The modern attribution of the Ezekiel pamphlet to Johnson has only recently revealed that the author of the 1785 work was a rare female longitude-seeker, as she even remained anonymous when sending it to the Board of Longitude in 1786 in the hope of a reward. She was unsuccessful, and the pamphlet and letter were later catalogued by the Astronomer Royal George Airy in a volume of Board of Longitude correspondence which he entitled Irrational Astronomical Theories in 1858. However, it was not the only early modern pamphlet to address both religion and longitude. Elizabeth Johnson died in Great Torrington, Devon in 1800. References 1721 births 1800 deaths 18th-century English astronomers Women astronomers People from Plympton English pamphleteers
Elizabeth Johnson (pamphleteer)
Astronomy
571
3,549,886
https://en.wikipedia.org/wiki/Azlocillin
Azlocillin is an acyl ampicillin antibiotic with an extended spectrum of activity and greater in vitro potency than the carboxy penicillins. Azlocillin is similar to mezlocillin and piperacillin. It demonstrates antibacterial activity against a broad spectrum of bacteria, including Pseudomonas aeruginosa and, in contrast to most cephalosporins, exhibits activity against enterococci. Spectrum of bacterial susceptibility Azlocillin is considered a broad spectrum antibiotic and can be used against a number of Gram positive and Gram negative bacteria. The following represents MIC susceptibility data for a few medically significant organisms. Escherichia coli 1 μg/mL – 32 μg/mL Haemophilus spp. 0.03 μg/mL – 2 μg/mL Pseudomonas aeruginosa 4 μg/mL – 6.25 μg/mL Synthesis An interesting alternative synthesis of azlocillin involves activation of the substituted phenylglycine analogue 1 with 1,3-dimethyl-2-chloro-1-imidazolinium chloride (2) and then condensation with 6-APA. See also Methicillin References Penicillins Enantiopure drugs Imidazolidinones
Azlocillin
Chemistry
282
26,902,602
https://en.wikipedia.org/wiki/Advance%20ratio
The propeller advance ratio or coefficient is a dimensionless number used in aeronautics and marine hydrodynamics to describe the relationship between the speed at which a vehicle (like an airplane or a boat) is moving forward and the speed at which its propeller is turning. It helps in understanding the efficiency of the propeller at different speeds and is particularly useful in the design and analysis of propeller-driven vehicles. It is the ratio of the freestream fluid speed to the propeller, rotor, or cyclorotor tip speed. When a propeller-driven vehicle is moving at high speed relative to the fluid, or the propeller is rotating slowly, the advance ratio of its propeller(s) is a high number. When the vehicle is moving at low speed or the propeller is rotating at high speed, the advance ratio is a low number. The advance ratio is a useful non-dimensional quantity in helicopter and propeller theory, since propellers and rotors will experience the same angle of attack on every blade airfoil section at the same advance ratio regardless of actual forward speed. It is the inverse of the tip speed ratio used for wind turbines. Mathematical definition Propellers The advance ratio J is a non-dimensional term given by J = Va / (nD), where Va is the freestream fluid velocity in m/s, typically the true airspeed of the aircraft or the water speed of the vessel; n is the rotational speed of the propeller in revolutions per second; and D is the propeller's diameter in m. Helicopter rotors and cyclorotors The advance ratio μ is defined as μ = V∞ / (Ωr), where V∞ is the free-stream fluid velocity in m/s, typically the true airspeed of the helicopter; Ω is the rotor rotational speed in rad/s; and r is the rotor radius in m. Significance Propellers Low Advance Ratio (J < 1): When the advance ratio is low, the vehicle is moving forward slowly relative to the propeller speed. This usually happens at low speeds or when the propeller is turning very fast. High Advance Ratio (J > 1): When the advance ratio is high, the vehicle is moving forward quickly compared to the propeller's rotational speed. This typically occurs at higher speeds or when the propeller is turning more slowly. The advance ratio is critical for determining the efficiency of a propeller. At different advance ratios, the propeller may produce more or less thrust. Engineers use this ratio to optimize the design of the propeller and the engine, ensuring that the vehicle operates efficiently at its intended cruising speed; see propeller theory. For instance, an airplane's propeller needs to be efficient both during takeoff (where the advance ratio is low) and at cruising altitude (where the advance ratio is higher). Similarly, a boat's propeller design will vary depending on whether it's designed for slow-speed maneuvering or high-speed travel. Helicopters Single rotor helicopters are limited in forward speed by a combination of sonic tip speed and retreating blade stall. As the advance ratio increases, the relative velocity experienced by the retreating blade decreases so that the tip of the blade experiences zero velocity at an advance ratio of one. Helicopter rotors pitch the retreating blade to a higher angle of attack to maintain lift as the relative velocity decreases. At a sufficiently high advance ratio, the blade will reach the stalling angle of attack and experience retreating blade stall. Specially designed airfoils can increase the operating advance ratio by utilizing high lift coefficient airfoils.
Currently, single rotor helicopters are practically limited to advance ratios less than 0.7. Relation to tip speed ratio The advance ratio is the inverse of the tip speed ratio, λ, used in wind turbine aerodynamics: J = 1/λ. In operation, propellers and rotors are generally spinning, but could be immersed in a stationary fluid. Thus the tip speed is placed in the denominator so the advance ratio increases from zero to a positive non-infinite value as the velocity increases. Wind turbines use the reciprocal to prevent infinite values since they start stationary in a moving fluid. See also Axial fan design Retreating blade stall Helicopter rotor Slowed rotor Aircraft propeller Propeller Theory Notes External links Propeller Aircraft Performance and The Bootstrap Approach MIT Thermodynamics 11.7 Performance of propellers Aerospace engineering
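A small worked example of the two definitions above. The aircraft and rotor numbers are illustrative assumptions, not values drawn from the article.

# Worked example of the advance-ratio definitions J = Va/(n*D) and mu = V/(Omega*r).
# All numbers below are illustrative assumptions, not values from the article.

def propeller_advance_ratio(v_a: float, n: float, d: float) -> float:
    """Advance ratio J for a propeller: freestream speed v_a [m/s],
    rotational speed n [rev/s], diameter d [m]."""
    return v_a / (n * d)

def rotor_advance_ratio(v_inf: float, omega: float, r: float) -> float:
    """Advance ratio mu for a helicopter rotor: freestream speed v_inf [m/s],
    rotational speed omega [rad/s], rotor radius r [m]."""
    return v_inf / (omega * r)

# Hypothetical light aircraft: 60 m/s cruise, 2400 rpm propeller, 1.9 m diameter.
J = propeller_advance_ratio(60.0, 2400 / 60, 1.9)
# Hypothetical helicopter: 60 m/s forward flight, 27 rad/s rotor, 7.3 m radius.
mu = rotor_advance_ratio(60.0, 27.0, 7.3)
print(f"J  = {J:.2f}")   # ~0.79
print(f"mu = {mu:.2f}")  # ~0.30, comfortably below the ~0.7 practical limit

Note how the same forward speed yields very different ratios for the two machines, because the normalising tip speeds differ.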
Advance ratio
Engineering
879
65,736,999
https://en.wikipedia.org/wiki/List%20of%20sulfonamides
This is a list of sulfonamides used in medicine. Antimicrobials Short-acting Sulfacetamide Sulfadiazine Sulfadimidine Sulfafurazole (sulfisoxazole) Sulfisomidine (sulfaisodimidine) Sulfaguanidine Intermediate-acting Sulfamethoxazole Sulfamoxole Sulfanitran Long-acting Sulfadimethoxine Sulfamethoxypyridazine Sulfametoxydiazine Ultra long-acting Sulfadoxine Sulfametopyrazine Terephtyl Sulfonylureas (anti-diabetic agents) Acetohexamide Carbutamide Chlorpropamide Glibenclamide (glyburide) Glibornuride Gliclazide Glyclopyramide Glimepiride Glipizide Gliquidone Glisoxepide Glicaramide Tolazamide Tolbutamide Diuretics Acetazolamide Bumetanide Chlorthalidone Clopamide Furosemide Hydrochlorothiazide Indapamide Mefruside Metolazone Xipamide Methazolamide Anticonvulsants Ethoxzolamide Sultiame Zonisamide Dermatologicals Mafenide Antiretrovirals Amprenavir (HIV protease inhibitor) Darunavir (HIV protease inhibitor) Delavirdine (non-nucleoside reverse transcriptase inhibitor) Fosamprenavir (HIV protease inhibitor) Tipranavir (HIV protease inhibitor) Hepatitis C antivirals Asunaprevir (NS3/4A protease inhibitor) Beclabuvir (NS5B RNA polymerase inhibitor) Dasabuvir (NS5B RNA polymerase inhibitor) Grazoprevir (NS3/4A protease inhibitor) Paritaprevir (NS3/4A protease inhibitor) Simeprevir (NS3/4A protease inhibitor) Stimulants Azabon NSAIDs Apricoxib (COX-2 inhibitor) Celecoxib (COX-2 inhibitor) Parecoxib (COX-2 inhibitor) Cardiac and Vasoactive Medications Bosentan (endothelin receptor antagonist) Dofetilide (class III antiarrhythmic) Dronedarone (class III antiarrhythmic) Ibutilide (class III antiarrhythmic) Sotalol (β blocker) Tamsulosin (α blocker) Udenafil (PDE5 inhibitor) Others Brinzolamide (carbonic anhydrase inhibitor for glaucoma) Dorzolamide (anti-glaucoma carbonic anhydrase inhibitor) Famotidine (histamine H2 receptor antagonist) Probenecid (uricosuric) Sulfasalazine (anti-inflammatory agent and a DMARD) Sumatriptan (antimigraine triptan) References External links List of sulfonamides Author of The Demon Under the Microscope, a history of the discovery of the sulfa drugs A History of the Fight Against Tuberculosis in Canada (Chemotherapy) Presentation speech, Nobel Prize in Physiology and Medicine, 1939 The History of WW II Medicine "Five Medical Miracles of the Sulfa Drugs". Popular Science, June 1942, pp. 73–78. A history of antibiotics Disulfiram-like drugs Hepatotoxins Sulfonamides
List of sulfonamides
Chemistry
757
25,162,434
https://en.wikipedia.org/wiki/Tertiary%20ideal
In mathematics, a tertiary ideal is a two-sided ideal in a (perhaps noncommutative) ring that cannot be expressed as a nontrivial intersection of a right fractional ideal with another ideal. Tertiary ideals generalize primary ideals to the case of noncommutative rings. Although primary decompositions do not exist in general for ideals in noncommutative rings, tertiary decompositions do, at least if the ring is Noetherian. Every primary ideal is tertiary. Tertiary ideals and primary ideals coincide for commutative rings. To any (two-sided) ideal, a tertiary ideal called the tertiary radical, denoted t(I), can be associated. Then t(I) always contains I. If R is a (not necessarily commutative) Noetherian ring and I a right ideal in R, then I has a unique irredundant decomposition into tertiary ideals, I = T1 ∩ ... ∩ Tn. See also Primary ideal Lasker–Noether theorem References Tertiary ideal, Encyclopedia of Mathematics, Springer Online Reference Works. Abstract algebra
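The defining formula of the tertiary radical is elided in this copy and is not reconstructed here; the decomposition statement, however, can be rendered in LaTeX, with the ideals T_i named purely for illustration:

\[ I = T_{1} \cap T_{2} \cap \cdots \cap T_{n}, \qquad \text{each } T_{i} \text{ tertiary and no } T_{i} \text{ removable (irredundancy)}. \]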
Tertiary ideal
Mathematics
203
15,443,272
https://en.wikipedia.org/wiki/Epoxy%20granite
Epoxy granite, also known as synthetic granite, is a polymer matrix composite and is a mixture of epoxy and granite commonly used as an alternative material for machine tool bases. Epoxy granite is used instead of cast iron and steel for improved vibration damping, longer tool life, and lower assembly cost, and thus better properties for stabilizing and housing machines. Machine tool base Machine tools and other high-precision machines rely upon the high stiffness, long-term stability, and excellent damping characteristics of the base material for their static and dynamic performance. The most widely used materials for these structures are cast iron, welded steel fabrications, and natural granite. Due to their lack of long-term stability and very poor damping properties, steel fabricated structures are seldom used where high precision is required. Good-quality cast iron that is stress-relieved and annealed will give the structure dimensional stability, and can be cast into complex shapes, but needs an expensive machining process to form precision surfaces after casting. Natural granite has a higher damping capacity than cast iron, but similarly to cast iron can be labor-intensive and expensive to machine and finish. The traditional market for epoxy granite is to replace iron and steel in these applications. Process Precision granite castings are produced by mixing granite aggregates (which are crushed, washed, and dried) with an epoxy resin system at ambient temperature (i.e., a cold curing process). Quartz aggregate filler can also be used in the composition. Vibratory compaction during the molding process tightly packs the aggregate together. Mechanical and thermo-mechanical properties can be improved further if fiber is used as well as the granite. Other resins in addition to the epoxy may also be used instead of fibers to improve properties such as water absorption. If porosity is controlled, damping effects can be improved further. Threaded inserts, steel plates, and coolant pipes can be cast in during the casting process. To achieve an even higher degree of versatility, linear rails, ground slide-ways, and motor mounts can be replicated or grouted in, therefore eliminating the need for any post-cast machining. Other definitions Epoxy resins and granite, specifically waste granite dust, may be used in other applications such as floor coatings. Waste granite filings are produced in the mining industry, and their low density means they can be easily dispersed by winds and thus distributed in the environment. Research is being done on innovative solutions such as using waste granite powders in epoxy resins and designing binders for coatings based on this. Advantages The vibration damping of epoxy granite is often claimed to be superior to that of steel or cast iron. Iron, steel, and their alloys corrode or rust, whereas epoxy is often used to prevent corrosion; accordingly, the corrosion and general chemical resistance of epoxy granite to most common solvents, acids, alkalis, and cutting fluids is superior to that of steel and its alloys, and epoxy granite does not require constant painting. Epoxy granite material has an internal damping factor up to ten times better than cast iron, up to three times better than natural granite, and up to thirty times better than a steel fabricated structure. Compared with steel fabrication, the casting method allows easier inclusion of inserts and similar components, and thus reduces machining of the finished casting and assembly time by incorporating multiple components into one casting.
Polymer cast resins use very little energy to produce, and the casting process is done at room temperature. References Further reading Terry Capuano. "Polymer Castings take on metals". Machine Design 2006. Composite Materials: Engineering and Science. Machine tools Composite materials Artificial stone Epoxides
Epoxy granite
Physics,Engineering
767
5,023,355
https://en.wikipedia.org/wiki/Brian%20Henderson-Sellers
Brian Henderson-Sellers (born January 1951) is an English-Australian computer scientist. He is a Professor of Information Systems at the University of Technology Sydney. He is also Director of the Centre for Object Technology and Applications at the University of Technology Sydney. Education Henderson-Sellers received a BSc and A.R.C.S. in Mathematics from Imperial College London in 1972, an MSc from the University of Reading in 1973, and a PhD from the University of Leicester in 1976. Career From 1976 to 1983, he was a lecturer in the Department of Civil Engineering at the University of Salford in England, and from 1983 in the department of mathematics. In 1988, he emigrated to Australia and became associate professor in the school of Information Systems at the University of New South Wales. In 1990, he founded the Object-Oriented Special Interest Group of the Australian Computer Society. He is co-founder and leader of the international OPEN Consortium. Currently he is professor of Information Systems at the University of Technology Sydney. He is also Director of the Centre for Object Technology and Applications at the University of Technology Sydney. He is editor of the International Journal of Agent-Oriented Software Engineering and on the editorial boards of the Journal of Object Technology and Software and Systems Modelling, and was for many years the Regional Editor of Object-Oriented Systems and a member of the editorial boards of Object Magazine/Component Strategies and Object Expert. He is also associate editor of the Enterprise Modelling and Information Systems Architectures journal and a frequent invited speaker at international OT conferences. In July 2001, Henderson-Sellers was awarded a Doctor of Science (DSc) from the University of London for his research contributions in object-oriented methodologies. Work His research interests are object-oriented analysis and design, object-oriented metrics, agent-oriented methodologies, and the migration of organizations to object technology. Object-oriented Process, Environment and Notation Object-oriented Process, Environment and Notation (OPEN) is a third-generation, public domain, fully object-oriented methodology and process. It encapsulates business issues, quality issues, modelling issues and reuse issues within its end-to-end lifecycle support for software development using the object-oriented paradigm. OPEN provides flexibility: its metamodel-based framework can be tailored to individual domains or projects, taking into account personal skills, organizational culture and requirements peculiar to each industry domain. Publications Henderson-Sellers is the author of numerous papers including thirty-one books and is well known for his work in object-oriented and agent-oriented software development methodologies and situational method engineering (MOSES, COMMA and OPEN) and in OO metrics. A selection: 1992. Book of object-oriented knowledge : object-oriented analysis, design, and implementation : a new approach to software engineering. 1994. Booktwo of object-oriented knowledge : the working object : object-oriented software engineering : methods and management. With J.M. Edwards. 1996. Object-oriented metrics : measures of complexity 1997. OPEN process specification. With Ian Graham and Houman Younessi. 1998. OPEN Modeling Language (OML) reference manual. With Donald Firesmith, Ian Graham, foreword by Meilir Page-Jones. 1998. Object-oriented metamethods. With A. Bulthuis. 1998. OPEN toolbox of techniques.
With Anthony Simons and Houman Younessi. 2000. Open modeling with UML. With Bhuvan Unhelkar. 2005. Agent-oriented methodologies. With Paolo Giorgini (ed) 2008. Metamodelling for software engineering. With César González-Pérez. 2008. Situational method engineering : fundamentals and experiences. Edited with Jolita Raylte and Sjaak Brinkkemper. New York : Springer, 2008. 2012. On the Mathematics of Modelling, Metamodelling, Ontologies and Modelling Languages, Springer, 2012. References External links Home Page of Brian Henderson-Sellers Object-oriented Process, Environment and Notation Homepage 1950 births Living people Alumni of Imperial College London Alumni of the University of Leicester Alumni of the University of Reading Academics of the University of Salford Academic staff of the University of Technology Sydney English computer scientists Enterprise modelling experts Information systems researchers Academic staff of the University of New South Wales English emigrants to Australia Australian computer scientists
Brian Henderson-Sellers
Technology
873
22,615,944
https://en.wikipedia.org/wiki/Bow%20tie%20%28biology%29
In the biological sciences, the term bow tie (so called for its shape) is a relatively recent concept that tries to capture the essence of certain operational and functional structures observed in biological organisms and other kinds of complex and self-organizing systems. In general, bow tie architectures refer to ordered and recurrent structures that often underlie complex technological or biological systems, and that are capable of conferring on them a balance among efficiency, robustness and evolvability. In other words, bow ties accept a great diversity of inputs (fanning in to the knot), process them through a much smaller diversity of protocols and processes (the knot), and produce an extremely heterogeneous diversity of outputs (fanning out of the bow tie). These architectures thus manage a wide range of inputs through a core (knot) constituted by a limited number of elements. In such structures, inputs are conveyed into a sort of funnel towards a "synthesis" core, where they can be duly organized, processed and managed by means of protocols, and from where, in turn, a variety of outputs, or responses, is propagated. According to Csete and Doyle, bow ties are able to optimally organize fluxes of mass, energy and signals in an overall structure that must deal with a highly fluctuating and "sloppy" environment. In a biological perspective, a bow tie manages a large fan-in of stimuli (inputs), compresses them through a core, and expresses a large fan-out of possible phenotypes, metabolite products or, more generally, reusable modules. Bow tie architectures have been observed in the structural organization, at different scales, of living and evolving organisms (e.g. the bacterial metabolism network) as well as in technological and dynamical systems (e.g. the Internet). Bow ties seem able to mediate trade-offs between robustness and efficiency while assuring the system the capability to evolve. Conversely, the same efficient architecture may be vulnerable to fragilities arising from specific changes, perturbations and focused attacks directed against the core set of modules and protocols. The bow tie architecture is one of several structures and functioning principles that living matter employs to achieve self-organization and efficient exploitation of available resources.
References
Self-organization
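The fan-in/knot/fan-out picture can be made concrete on a directed graph, in the spirit of the bow-tie decomposition used for large networks such as the web. The following is a minimal illustrative sketch, not from the article: the example graph, the function name, and the use of the networkx library are assumptions for illustration. The knot is taken as the largest strongly connected component, the fan-in as the nodes that can reach it, and the fan-out as the nodes it can reach.

# Bow-tie decomposition of a directed graph: many inputs converge on a
# small strongly connected core, from which many outputs diverge.
import networkx as nx

def bowtie_decomposition(G: nx.DiGraph):
    """Split a digraph into fan-in, knot (largest SCC) and fan-out."""
    knot = max(nx.strongly_connected_components(G), key=len)
    probe = next(iter(knot))                   # any node of the knot
    fan_in = nx.ancestors(G, probe) - knot     # inputs converging on the core
    fan_out = nx.descendants(G, probe) - knot  # outputs diverging from it
    return fan_in, knot, fan_out

# Invented example: three inputs -> a 2-cycle core -> three outputs.
G = nx.DiGraph([("in1", "core1"), ("in2", "core1"), ("in3", "core1"),
                ("core1", "core2"), ("core2", "core1"),
                ("core2", "out1"), ("core2", "out2"), ("core2", "out3")])
print(bowtie_decomposition(G))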
Bow tie (biology)
Mathematics
491
17,879,542
https://en.wikipedia.org/wiki/Andreaea%20frigida
Andreaea frigida, commonly known as icy rock moss, is a species of moss endemic to Europe.
Distribution and habitat
Endemic to the mountains of Europe between 37 degrees north and 67 degrees north, A. frigida can be found in Andorra, Austria, Belgium, the Czech Republic, France (mainland France and Corsica), Germany, Hungary, Italy, Luxembourg, Monaco, the Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Spain, Switzerland, Ukraine, and the United Kingdom. It grows in humid, rocky areas in alpine or subalpine habitats at altitudes of above sea level. In the UK its occurrence is widespread in the Cairngorms National Park, where it is typically found on rocks in burns fed by snow patches; elsewhere it is known only from a single site in the Lake District of England. The earliest records for the UK date to 1854 (although its presence there was not formally recognised until 1988), and it is classified as "Vulnerable". The greatest threat to its continuing existence is assumed to be global warming.
See also
Endemic Scottish moss species:
Bryoerythrophyllum caledonicum
Bryum dixonii
Pohlia scotica
Flora of Scotland
References
Andreaeaceae
Flora of Scotland
Plants described in 1834
Lithophytes
Andreaea frigida
Biology
263
4,403,842
https://en.wikipedia.org/wiki/Profunctor
In category theory, a branch of mathematics, profunctors are a generalization of relations and also of bimodules.
Definition
A profunctor (also named distributor by the French school and module by the Sydney school) ϕ from a category C to a category D, written ϕ : C ↛ D, is defined to be a functor ϕ : Dᵒᵖ × C → Set, where Dᵒᵖ denotes the opposite category of D and Set denotes the category of sets. Given morphisms f : d → d′ and g : c → c′, respectively in D and C, and an element x ∈ ϕ(d′, c), we write xf ∈ ϕ(d, c) and gx ∈ ϕ(d′, c′) to denote the actions.
Using the cartesian closure of Cat, the category of small categories, the profunctor ϕ can be seen as a functor ϕ̂ : C → D̂, where D̂ denotes the category Set^(Dᵒᵖ) of presheaves over D.
A correspondence from C to D is a profunctor D ↛ C.
Profunctors as categories
An equivalent definition of a profunctor ϕ : C ↛ D is a category whose objects are the disjoint union of the objects of C and the objects of D, and whose morphisms are the morphisms of C and the morphisms of D, plus zero or more additional morphisms from objects of D to objects of C. The sets ϕ(d, c) in the formal definition above are the hom-sets between objects of D and objects of C. (These are also known as het-sets, since the corresponding morphisms can be called heteromorphisms.) The previous definition can be recovered by the restriction of the hom-functor of this category to Dᵒᵖ × C.
This also makes it clear that a profunctor can be thought of as a relation between the objects of C and the objects of D, where each member of the relation is associated with a set of morphisms. A functor is a special case of a profunctor in the same way that a function is a special case of a relation.
Composition of profunctors
The composite ψϕ of two profunctors ϕ : C ↛ D and ψ : D ↛ E is given by
ψϕ = Lan_{Y_D}(ψ̂) ∘ ϕ̂,
where Lan_{Y_D}(ψ̂) is the left Kan extension of the functor ψ̂ along the Yoneda functor Y_D : D → D̂ of D (which to every object d of D associates the functor D(−, d) : Dᵒᵖ → Set).
It can be shown that
(ψϕ)(e, c) = ( ∐_d ψ(e, d) × ϕ(d, c) ) / ∼,
where ∼ is the least equivalence relation such that (y′, x′) ∼ (y, x) whenever there exists a morphism v in D such that y′ = vy and x = x′v.
Equivalently, profunctor composition can be written using a coend:
(ψϕ)(e, c) = ∫^{d : D} ψ(e, d) × ϕ(d, c).
Bicategory of profunctors
Composition of profunctors is associative only up to isomorphism (because the product × is not strictly associative in Set). The best one can hope for is therefore to build a bicategory Prof whose
0-cells are small categories,
1-cells between two small categories are the profunctors between those categories,
2-cells between two profunctors are the natural transformations between those profunctors.
Properties
Lifting functors to profunctors
A functor F : C → D can be seen as a profunctor ϕ_F : C ↛ D by postcomposing with the Yoneda functor:
ϕ̂_F = Y_D ∘ F.
It can be shown that such a profunctor ϕ_F has a right adjoint. Moreover, this is a characterization: a profunctor ψ : C ↛ D has a right adjoint if and only if ψ̂ : C → D̂ factors through the Cauchy completion of D, i.e. there exists a functor F from C to the Cauchy completion of D such that ψ̂ is F followed by the Yoneda embedding restricted to that completion.
See also
Anafunctor
References
Functors
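As a worked instance of the coend formula, here is a standard computation added for illustration (it is not part of the original article): writing hom_C : C ↛ C for the profunctor hom_C(c′, c) = C(c′, c), composing any profunctor ϕ : C ↛ D with hom_C returns ϕ itself, which exhibits hom_C as the identity 1-cell on C in Prof.

% Identity law in Prof, via the coend formula for composition:
\[
(\phi \circ \mathrm{hom}_C)(d, c)
  \;=\; \int^{c'} \phi(d, c') \times C(c', c)
  \;\cong\; \phi(d, c)
\]
% The isomorphism is the co-Yoneda (density) lemma: under the equivalence
% relation of the composition formula, every pair (x, g) with x in phi(d, c')
% and g : c' -> c is identified with (gx, id_c), and x |-> (x, id_c) is a
% bijection between phi(d, c) and the set of equivalence classes.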
Profunctor
Mathematics
619
71,260,257
https://en.wikipedia.org/wiki/AP5S1
AP-5 complex subunit sigma (AP5S1) is a protein that in humans is encoded by the AP5S1 gene.
Function
The protein encoded by this gene is the small subunit of the AP-5 adaptor complex. Variants in this gene have not been implicated in any disease, but damaging variants in AP5Z1, the gene encoding one of the large subunits of this complex, are associated with SPG48, a type of hereditary spastic paraplegia. In addition, damaging variants in the genes encoding two proteins that stably associate with the AP-5 adaptor complex are also associated with forms of hereditary spastic paraplegia: SPG11 with the disease of the same name, and ZFYVE26 with SPG15.
References
AP5S1
Chemistry
156
68,897,666
https://en.wikipedia.org/wiki/Icd-II%20ncRNA%20motif
The icd-II non-coding RNA (ncRNA) is an RNA motif proposed as a Strong Riboswitch Candidate (SRC). The icd-II ncRNA was recognized by comparative sequence analysis in GC-rich intergenic regions (IGRs) of bacteria, using a pipeline called the Discovery of Intergenic Motifs PipeLine (DIMPL). The icd-II ncRNA is located upstream of the icd gene, which codes for an NADP+-dependent isocitrate dehydrogenase (IDH) enzyme. IDH is part of the citric acid cycle, and thus participates in managing the carbon flux through this energy-metabolism pathway. The icd-II ncRNA has been found in bacteria of the class Betaproteobacteria, particularly in the genus Polynucleobacter. The icd-II RNA secondary structure consists of a three-stem junction, in which the ribosome binding site (RBS) of the adjacent open reading frame (ORF) is predicted to be involved in the first base-paired stem. It has been proposed that the icd-II ncRNA functions as a riboswitch that regulates translation initiation of its associated ORF.
References
External links
Cis-regulatory RNA elements
Riboswitch
Icd-II ncRNA motif
Chemistry
262
6,952,874
https://en.wikipedia.org/wiki/National%20Vaccine%20Injury%20Compensation%20Program
The Office of Special Masters of the U.S. Court of Federal Claims, popularly known as "vaccine court", administers a no-fault system for litigating vaccine injury claims. These claims against vaccine manufacturers cannot normally be filed in state or federal civil courts, but instead must be heard in the U.S. Court of Federal Claims, sitting without a jury.
The National Vaccine Injury Compensation Program (VICP or NVICP) was established by the 1986 National Childhood Vaccine Injury Act (NCVIA), passed by the United States Congress in response to a threat to the vaccine supply due to a 1980s scare over the DPT vaccine. Despite the belief of most public health officials that claims of side effects were unfounded, large jury awards had been given to some plaintiffs, most DPT vaccine makers had ceased production, and officials feared the loss of herd immunity.
Between its inception in 1986 and May 2023, it has awarded a total of $4.6 billion, with the average award amount between 2006 and 2020 being $450,000, and the award rate (which varies by vaccine) being 1.2 awards per million doses administered. The Health Resources and Services Administration reported in July 2022 that "approximately 60 percent of all compensation awarded by the VICP comes as result of a negotiated settlement between the parties in which HHS has not concluded, based upon review of the evidence, that the alleged vaccine(s) caused the alleged injury". Cases are settled to minimize the risk of loss for both parties, to minimize the time and expense of litigation, and to resolve petitions quickly.
National Childhood Vaccine Injury Act
The U.S. Department of Health and Human Services set up the National Vaccine Injury Compensation Program (VICP) in 1988 to compensate individuals and families of individuals injured by covered childhood vaccines. The VICP was adopted in response to concerns over the pertussis portion of the DPT vaccine. Several U.S. lawsuits against vaccine makers won substantial awards. Most makers ceased production, and the last remaining major manufacturer threatened to do so.
The VICP uses a no-fault system for resolving vaccine injury claims. Compensation covers medical and legal expenses, loss of future earning capacity, and up to $250,000 for pain and suffering; a death benefit of up to $250,000 is available. If certain minimal requirements are met, legal expenses are compensated even for unsuccessful claims. Since 1988, the program has been funded by an excise tax of 75 cents on every purchased dose of covered vaccine.
To win an award, a claimant must have experienced an injury that is named as a vaccine injury in a table included in the law within the required time period, or must show a causal connection. The burden of proof is the civil-law preponderance-of-the-evidence standard, in other words a showing that causation was more likely than not. Denied claims can be pursued in civil courts, though this is rare.
The VICP covers all vaccines listed on the Vaccine Injury Table maintained by the Secretary of Health and Human Services; in 2007 the list included vaccines against diphtheria, tetanus, pertussis (whooping cough), measles, mumps, rubella (German measles), polio, hepatitis B, varicella (chicken pox), Haemophilus influenzae type b, rotavirus, and pneumonia.
From 1988 until January 8, 2008, 5,263 claims relating to autism, and 2,865 non-autism claims, were made to the VICP.
Of these claims, 925 were compensated (see previous rulings), with 1,158 non-autism and 350 autism claims dismissed, and one autism-like claim compensated; awards (including attorney's fees) totaled $847 million. The VICP also applies to claims for injuries suffered before 1988; there were 4,264 of these claims, of which 1,189 were compensated with awards totaling $903 million. As of October 2019, $4.2 billion in compensation (not including attorneys' fees and costs) has been awarded.
Filing a claim with the Court of Federal Claims requires a $402.00 filing fee, which can be waived for those unable to pay. Medical records such as prenatal, birth, pre-vaccination, vaccination, and post-vaccination records are strongly suggested, as medical review and claim processing may be delayed without them. Because this is a legal process, most people use a lawyer, though this is not required. By 1999 the average claim took two years to resolve, and 42% of resolved claims were awarded compensation, as compared with 23% for medical malpractice claims through the tort system. There is a three-year statute of limitations for filing a claim, timed from the first manifestation of the medical problem.
Autism claims
More than 5,300 petitions alleging autism caused by vaccines have been filed in the vaccine court. In 2002, the court instituted the Omnibus Autism Proceeding, in which plaintiffs were allowed to proceed with the three cases they considered to be the strongest before a panel of special masters. In each of the cases, the panel found that the plaintiffs had failed to demonstrate a causal effect between the MMR vaccine and autism. Following this determination, the vaccine court has routinely dismissed such suits, finding no causal effect between the MMR vaccine and autism. Many studies have found no causal link between autism spectrum disorders and vaccines, and the current scientific consensus is that routine childhood vaccines are not linked to the development of autism.
Several claimants have attempted to bypass the VICP process with claims that thimerosal in vaccines had caused autism, but these were ultimately not successful. They have demanded medical monitoring for vaccinated children who do not show signs of autism and have filed class-action suits on behalf of parents. In March 2006, the U.S. Fifth Circuit Court of Appeals ruled that plaintiffs suing three manufacturers of thimerosal could bypass the vaccine court and litigate in either state or federal court using the ordinary channels for recovery in tort. This was the first instance in which a federal appeals court held that a suit of this nature may bypass the vaccine court. The argument was that thimerosal is a preservative, not a vaccine, so it does not fall under the provisions of the vaccine act. The claims that vaccines (or thimerosal in vaccines) caused autism eventually had to be filed in the vaccine court as part of the Omnibus Autism Proceeding.
The scientific consensus, developed from substantial medical and scientific research, is that there is no evidence supporting these claims, and the rate of autism continues to climb despite the elimination of thimerosal from most routine early childhood vaccines. Major scientific and medical bodies such as the Institute of Medicine and the World Health Organization, as well as governmental agencies such as the Food and Drug Administration and the CDC, reject any role for thimerosal in autism or other neurodevelopmental disorders.
Compensation awards
As of May 2023, nearly $4.6 billion in compensation and $450 million in attorneys' fees have been awarded. The following table shows the awards, by main classes of vaccines, made to victims in the years 2006–2017. It shows that on average 1.2 awards were made per million vaccine doses, and that multiple-antigen vaccines such as MMR do not have an abnormal award rate.
* This covers the vaccinations known by the abbreviations DT, DTaP, DTaP-HIB, DTaP-IPV, DTaP-IPV-HIB, Td, and Tdap.
Attorneys fees and costs
Self-representation is permitted, although the NVICP also pays attorneys' fees out of the fund, separate from any compensation given to the petitioner. This is "to ensure that vaccine claimants have readily available a competent bar to prosecute their claims".
Homeland Security Act
The Homeland Security Act of 2002 provides another exception to the exclusive jurisdiction of the vaccine court. If smallpox vaccine were to be widely administered by public health authorities in response to a terrorist or other biological-warfare attack, persons administering or producing the vaccine would be deemed federal employees, and claims would be subject to the Federal Tort Claims Act; in that case claimants would sue the U.S. Government in the U.S. district courts and would have the burden of proving the defendants' negligence, a much more difficult standard.
Petitioner's burden of proof
Notably, the Health Resources and Services Administration reported in July 2022 that "approximately 60 percent of all compensation awarded by the VICP comes as result of a negotiated settlement between the parties in which HHS has not concluded, based upon review of the evidence, that the alleged vaccine(s) caused the alleged injury". Cases are settled to minimize the risk of loss for both parties, to minimize the time and expense of litigation, and to resolve petitions quickly.
Of the remaining cases in the vaccine court, as in civil tort cases, the burden of proof is a preponderance of evidence. But while in tort cases this is met by expert testimony based on epidemiology or rigorous scientific studies showing both general and specific causation, in the vaccine court the burden is met with a three-prong test established in Althen, a 2005 United States Court of Appeals for the Federal Circuit ruling. Althen held that an award should be granted if a petitioner either establishes a "Tabled Injury" or proves "causation in fact" by proving three prongs: a medical theory causally connecting the vaccination and the injury; a logical sequence of cause and effect showing that the vaccination was the reason for the injury; and a showing of a proximate temporal relationship between vaccination and injury. This ruling held that tetanus vaccine caused a particular case of optic neuritis, even though no scientific evidence supported the petitioner's claim.
Other rulings have allowed petitioners to gain awards for claims that the MMR vaccine causes fibromyalgia, that the Hib vaccine causes transverse myelitis, and that the hepatitis B vaccine causes Guillain–Barré syndrome, chronic demyelinating polyneuropathy, and multiple sclerosis. In the most extreme of these cases, a 2006 petitioner successfully claimed that a hepatitis B vaccine caused her multiple sclerosis, despite several studies showing that the vaccine neither causes nor worsens the disease, and despite a conclusion by the Institute of Medicine that evidence favors rejection of a causal relationship.
In 2008, the federal government settled a case brought to the vaccine court by the family of Hannah Poling, a girl who developed autistic-like symptoms after receiving a series of vaccines in a single day. The vaccines given were DTaP, Hib, MMR, varicella, and inactivated polio. Poling was diagnosed months later with encephalopathy (brain disease) caused by a mitochondrial enzyme deficit, a mitochondrial disorder; it is not unusual for children with such deficits to develop neurologic signs between their first and second years. There is little scientific research in the area: no scientific studies show whether childhood vaccines can cause or contribute to mitochondrial disease, and there is no scientific evidence that vaccinations damage the brains of children with mitochondrial disorders. Although many parents view this ruling as confirming that vaccines cause regressive autism, most children with autism do not seem to have mitochondrial disorders, and the case was settled without proof of causation.
With the commencement of hearings in the case of Cedillo v. Secretary of Health and Human Services (Case #98-916V), the argument over whether autism is a vaccine injury moved into the vaccine court. A panel of three special masters began hearing the first cases of the historic Omnibus Autism Proceedings in June 2007. There were six test cases in all, and the entire record of the cases is publicly available. The lead petitioners, the parents of Michelle Cedillo, claimed that Michelle's autism was caused by a vaccine. Theresa and Michael Cedillo contended that thimerosal seriously weakened Michelle's immune system and prevented her body from clearing the measles virus after her vaccination at the age of fifteen months. At the outset Special Master George Hastings, Jr. said "Clearly the story of Michelle's life is a tragic one," while pledging to listen carefully to the evidence.
On February 12, 2009, the court ruled in three test cases that the combination of the MMR vaccine and thimerosal-containing vaccines was not to blame for autism. Hastings concluded in his decision, "Unfortunately, the Cedillos have been misled by physicians who are guilty, in my view, of gross medical misjudgment." The ruling was appealed to the U.S. Court of Appeals and upheld. On March 13, 2010, the court ruled in three test cases that thimerosal-containing vaccines do not cause autism. Special Master Hastings concluded, "The overall weight of the evidence is overwhelmingly contrary to the petitioners' causation theories."
See also
Vaccine Damage Payment
National Childhood Vaccine Injury Act
Countermeasures Injury Compensation Program
References
External links
National Vaccine Injury Compensation Program (VICP)
Vaccine Program / Office of Special Masters
United States federal health legislation
Vaccination-related organizations
Drug safety
Court, Vaccine
United States Court of Federal Claims
Vaccination in the United States
National Vaccine Injury Compensation Program
Chemistry,Biology
2,748
30,254,980
https://en.wikipedia.org/wiki/Hygrophorus%20purpurascens
Hygrophorus purpurascens, commonly known as the purple-red waxy cap, is a species of agaric fungus in the family Hygrophoraceae. Its cap has a pink background color with streaks of purplish red overlaid, and mature gills have red spots.
Taxonomy
The species was originally described as Agaricus purpurascens by Johannes Baptista von Albertini and Lewis David de Schweinitz in 1805. Elias Fries transferred it to the genus Hygrophorus in 1838. Paul Kummer's 1871 Limacium purpurascens is a synonym. The specific epithet purpurascens means "becoming purple". The species is also commonly known as the "veiled purple hygrophorus".
Description
The cap is convex to flattened, measuring in diameter. The color is pinkish red in the center to white, often irregularly tinged with pink. The flesh is white. The gills have a decurrent attachment to the stipe and are white to pale pink, spotted with pinkish or purplish red. The stipe measures long by wide, and is more or less the same color as the cap, often spotted with dark red. Fruit bodies are edible.
The spore print is white. Spores are thin-walled, elliptical, smooth, and measure 5.5–8 by 3–4.5 μm. The basidia (spore-bearing cells) are narrowly club-shaped, thin-walled, four-spored, and measure 40–56 by 5–8 μm.
Hygrophorus russula is similar in appearance to H. purpurascens, but the former species can be distinguished by its tendency to bruise yellow and by its association with hardwood trees.
Habitat and distribution
The fruit bodies of Hygrophorus purpurascens grow on the ground in clusters or groups under conifer trees. A snowbank mushroom, it is commonly found fruiting near the edges of snowbanks or shortly after snowmelt.
See also
List of Hygrophorus species
References
External links
Fungi described in 1805
Fungi of North America
purpurascens
Fungus species
Hygrophorus purpurascens
Biology
458
28,257,381
https://en.wikipedia.org/wiki/Buckminster%20Fuller%20Challenge
The Buckminster Fuller Challenge is an annual international design competition that awards $100,000 to the most comprehensive solution to a pressing global problem. The Challenge was launched in 2007 and is a program of The Buckminster Fuller Institute. The competition, open to designers, artists, architects, students, environmentalists, and organizations worldwide, has been dubbed "Socially-Responsible Design's Highest Award" by Metropolis Magazine.
According to the Buckminster Fuller Challenge website: "Winning solutions are regionally specific yet globally applicable and present a truly comprehensive, anticipatory, integrated approach to solving the world's complex problems." Furthermore, the criteria of the Challenge call not for a stand-alone solution, but for an integrated strategy that addresses social, environmental, economic and cultural issues. This is aligned with the design approach of Buckminster Fuller, which he referred to as "comprehensive anticipatory design science".
Winners of the Buckminster Fuller Challenge include John Todd (2008), MIT's Smart Cities Group (2009), Allan Savory and the Africa Centre for Holistic Management (2010), Blue Ventures (2011), the Living Building Challenge (2012), and GreenWave (2015). Each year's winner is ultimately decided by an international jury of renowned whole-systems thinkers and practitioners of sustainability. Former jury members include Jose Zaglul, Alan Kay, Mitchell Joachim, Adam Bly, Jamais Cascio, Nicholas Grimshaw, Hunter Lovins, William McDonough, Janine Benyus, and Danny Hillis.
Although there is only one winner per year, the majority of the entries received are featured on the Buckminster Fuller Challenge website within a fully searchable database known as the Idea Index.
References
External links
The Buckminster Fuller Institute
The Idea Index
Buckminster Fuller
Recurring events established in 2007
Awards established in 2007
Challenge awards
Invention awards
Buckminster Fuller Challenge
Technology
382
2,041,040
https://en.wikipedia.org/wiki/Fipronil
Fipronil is a broad-spectrum insecticide that belongs to the phenylpyrazole insecticide class. Fipronil disrupts the insect central nervous system by blocking the ligand-gated ion channel of the GABAA receptor (IRAC group 2B) and glutamate-gated chloride (GluCl) channels. This causes hyperexcitation of contaminated insects' nerves and muscles. Fipronil's specificity towards insects is believed to be due to its greater binding affinity for the GABAA receptors of insects than for those of mammals, and to its action on GluCl channels, which do not exist in mammals. There does not appear to be significant resistance among fleas to fipronil.
Fipronil is used as the active ingredient in flea control products for pets and home roach baits, as well as in field pest control for corn, golf courses, and commercial turf. Its widespread use makes its specific effects the subject of considerable attention. Observations on possible harm to humans or ecosystems are ongoing, as is monitoring for the development of pesticide resistance.
Physical properties
Fipronil (IUPAC name 5-amino-1-[2,6-dichloro-4-(trifluoromethyl)phenyl]-4-(trifluoromethylsulfinyl)pyrazole-3-carbonitrile) is a white, solid powder with a moldy odor. It is degraded slightly by sunlight, is stable at normal temperatures for one year, and is not stable in the presence of metal ions.
Use
Fipronil has been used against many different pests on different crops. It is used against major lepidopteran (moth, butterfly, etc.) and orthopteran (grasshopper, locust, etc.) pests on a range of field and horticultural crops, and against coleopteran (beetle) larvae in soils. It is employed for cockroach and ant control, as well as for locust control and termite pest control.
In the United States of America, fipronil was approved for use against the Rasberry crazy ant until 2022 in counties of Texas where positive identification had been made by entomologists from the Texas Department of Agriculture and the Environmental Protection Agency. In New Zealand, fipronil was used in trials to control wasps (Vespula), which are a threat to indigenous biodiversity. It is now being used by the Department of Conservation to attempt local eradication of wasps, and is being recommended for control of the invasive Argentine ant. Fipronil is also the active ingredient in many commercial tick and flea treatments for pets.
Effects
Toxicity
Fipronil is classed as a WHO Class II moderately hazardous pesticide, and has a rat acute oral LD50 of 97 mg/kg. It has moderate acute toxicity by the oral and inhalation routes in rats. Dermal absorption in rats is less than 1% after 24 hours of exposure, and dermal toxicity is considered to be low. It has been found to be very toxic to rabbits. The photodegradate MB46513, or desulfinylfipronil, appears to have a higher acute toxicity to mammals than fipronil itself, by a factor of about 10.
Symptoms of acute toxicity via ingestion include sweating, nausea, vomiting, headache, abdominal pain, dizziness, agitation, weakness, and tonic-clonic seizures. Clinical signs of exposure to fipronil are generally reversible and resolve spontaneously. As of 2011, no data were available regarding the chronic effects of fipronil on humans. The United States Environmental Protection Agency has classified fipronil as a group C (possible human) carcinogen based on an increase in thyroid follicular cell tumors in both sexes of the rat. However, as of 2011, no human data are available regarding the carcinogenic effects of fipronil.
Two Frontline TopSpot products were determined by the New York State Department of Environmental Conservation to pose no significant exposure risks to workers applying the product. However, concerns were raised about human exposure to the Frontline spray treatment in 1996, leading to a denial of registration for the spray product. Commercial pet groomers and veterinary physicians were considered to be at risk from chronic exposure via inhalation and dermal absorption during application of the spray, assuming they might have to treat up to 20 large dogs per day. Fipronil is not volatile, so the likelihood of humans being exposed to this compound in the air is low. In contrast to neonicotinoids such as acetamiprid, clothianidin, imidacloprid, and thiamethoxam, which are absorbed through the skin to some extent, fipronil is not absorbed substantially through the skin.
Drinking water contamination
In 2021, the US EPA placed fipronil on the Draft Fifth Contaminant Candidate List (CCL 5), which can lead to future regulation under the Safe Drinking Water Act.
Detection in body fluids
Fipronil may be quantitated in plasma by gas chromatography-mass spectrometry or liquid chromatography-mass spectrometry to confirm a diagnosis of poisoning in hospitalized patients or to provide evidence in a medicolegal death investigation.
Ecological toxicity
Fipronil is highly toxic to crustaceans, insects (including bees and termites) and zooplankton, as well as to rabbits, the fringe-toed lizard, and certain groups of gallinaceous birds. It appears to reduce the longevity and fecundity of female braconid parasitoids. It is also highly toxic to many fish, though its toxicity varies with species. Conversely, the substance is relatively innocuous to passerines, wildfowl, and earthworms. Its half-life in soil is four months to one year, but much less on the soil surface, because it is more sensitive to light (photolysis) than to water (hydrolysis).
Few studies of effects on wildlife have been conducted, but studies of the nontarget impact of emergency applications of fipronil as barrier sprays for locust control in Madagascar showed adverse impacts on termites that appear to be very severe and long-lived. Adverse effects were also indicated in the short term on several other invertebrate groups, one species of lizard (Trachylepis elegans), and several species of birds (including the Madagascar bee-eater). Nontarget effects on some insects (predatory and detritivorous beetles, some parasitic wasps and bees) were also found in field trials of fipronil for desert locust control in Mauritania, and very low doses (0.6–2.0 g a.i./ha) used against grasshoppers in Niger caused impacts on nontarget insects comparable to those found with other insecticides used in grasshopper control. The implications of this for other wildlife and the ecology of the habitat remain unknown, but appear unlikely to be severe.
This lack of severity was not observed in bee species in South America. Fipronil is also used in Brazil, and studies on the stingless bee Scaptotrigona postica have shown adverse reactions to the pesticide, including seizures, paralysis, and death, with a lethal dose of 0.54 ng a.i./bee and a lethal concentration of 0.24 ng a.i./μl of diet. These values indicate that fipronil is highly toxic to Scaptotrigona postica and to bees in general.
Toxic baiting with fipronil has been shown to be effective in locally eliminating German wasps; all colonies within foraging range were eliminated within one week.
In May 2003, the French Directorate-General of Food at the Ministry of Agriculture determined that a case of mass bee mortality observed in southern France was related to acute fipronil toxicity. The toxicity was linked to defective seed treatment, which generated dust. In February 2003, the ministry decided to temporarily suspend the sale of BASF crop protection products containing fipronil in France. The seed treatment involved has since been banned.
Notable results from wildlife studies include:
Fipronil is highly toxic to fish and aquatic invertebrates. Its tendency to bind to sediments and its low water solubility may reduce the potential hazard to aquatic wildlife.
Fipronil is toxic to bees and should not be applied to vegetation when bees are foraging.
Based on ecological effects, fipronil is highly toxic to upland game birds on an acute oral basis and very highly toxic on a subacute dietary basis, but is practically nontoxic to waterfowl on both acute and subacute bases. Chronic (avian reproduction) studies show no effects at the highest levels tested in mallards (NOEC = 1000 ppm) or quail (NOEC = 10 ppm). The metabolite MB 46136 is more toxic than the parent compound to the avian species tested (very highly toxic to upland game birds and moderately toxic to waterfowl on an acute oral basis).
Fipronil is highly toxic to bluegill sunfish and highly toxic to rainbow trout on an acute basis. An early-lifestage toxicity study in rainbow trout found that fipronil affects larval growth, with a NOEC of 0.0066 ppm and an LOEC of 0.015 ppm. The metabolite MB 46136 is more toxic than the parent compound to freshwater fish (6.3 times more toxic to rainbow trout and 3.3 times more toxic to bluegill sunfish).
Based on an acute daphnia study using fipronil and three supplemental studies using its metabolites, fipronil is characterized as highly toxic to aquatic invertebrates. An invertebrate lifecycle daphnia study showed that fipronil affects length in daphnids at concentrations greater than 9.8 ppb. A lifecycle study in mysids showed that fipronil affects reproduction, survival, and growth of mysids at concentrations less than 5 ppt.
Acute studies of estuarine animals using oysters, mysids, and sheepshead minnows show that fipronil is highly acutely toxic to oysters and sheepshead minnows, and very highly toxic to mysids.
The metabolites MB 46136 and MB 45950 are more toxic than the parent compound to freshwater invertebrates (MB 46136 is 6.6 times more toxic and MB 45950 is 1.9 times more toxic).
Colony collapse disorder
Fipronil is one of the main chemical causes blamed for the spread of colony collapse disorder among bees. The Minutes-Association for Technical Coordination Fund in France found that even at very low, nonlethal doses for bees, the pesticide still impairs their ability to locate their hive, resulting in large numbers of forager bees lost on every pollen-finding expedition. A synergistic toxic effect of fipronil with the fungal pathogen Nosema ceranae has also been reported. The functional basis for this toxic effect is now understood: the synergy between fipronil and the pathogenic fungus induces changes in male bee physiology leading to infertility.
A 2013 report by the European Food Safety Authority identified fipronil as "a high acute risk to honeybees when used as a seed treatment for maize", and on July 16, 2013, the EU voted to ban the use of fipronil on maize and sunflowers within the EU. The ban took effect at the end of 2013.
Pharmacodynamics
Fipronil acts by binding to allosteric sites of the GABAA receptors and GluCl receptors of insects as an antagonist (a form of noncompetitive inhibition). This prevents the opening of the chloride ion channels normally promoted by GABA, reducing the chloride ions' ability to lower a neuron's membrane potential. The result is an overabundance of neurons reaching action potential, and hence CNS toxicity via overstimulation.
Acute oral LD50 (rat): 97 mg/kg
Acute dermal LD50 (rat): >2000 mg/kg
In mammals (including humans), fipronil overdose is characterized by vomiting, agitation, and seizures. Intravenous or intramuscular benzodiazepines are a useful antidote.
History
Development
Fipronil was discovered and developed by Rhône-Poulenc between 1985 and 1987, and placed on the market in 1993. Between 1987 and 1996, fipronil was evaluated on more than 250 insect pests on 60 crops worldwide, and crop protection accounted for about 39% of total fipronil production in 1997. Since 2003, BASF has held the patent rights for producing and selling fipronil-based products in many countries.
2017 fipronil eggs contamination
The 2017 fipronil eggs contamination was an incident in Europe and South Korea involving the spread of insecticide-contaminated eggs and egg products. Chicken eggs were found to contain fipronil and were distributed to 15 European Union countries, Switzerland, and Hong Kong. Approximately 700,000 eggs are thought to have reached shelves in the UK alone. Eggs at 44 farms in Taiwan were also found with excessive fipronil levels.
References
Further reading
External links
Fipronil Fact Sheet - National Pesticide Information Center
Chloroarenes
Convulsants
Endocrine disruptors
GABAA receptor negative allosteric modulators
Insecticides
Nitriles
Trifluoromethyl compounds
Pyrazoles
Sulfoxides
Chloride channel blockers
Neurotoxins
Synthetic insecticides
French inventions
Fipronil
Chemistry
2,828
48,540,695
https://en.wikipedia.org/wiki/Multidimensional%20seismic%20data%20processing
Multidimensional seismic data processing forms a major component of seismic profiling, a technique used in geophysical exploration. The technique itself has various applications, including mapping ocean floors, determining the structure of sediments, mapping subsurface currents, and hydrocarbon exploration. Since geophysical data obtained in such techniques is a function of both space and time, multidimensional signal processing techniques may be better suited for processing such data.
Data acquisition
There are a number of data acquisition techniques used to generate seismic profiles, all of which involve measuring acoustic waves by means of a source and receivers. These techniques may be further classified into various categories, depending on the configuration and type of sources and receivers used, for example zero-offset vertical seismic profiling (ZVSP), walk-away VSP, etc. The source (which is typically on the surface) produces a wave travelling downwards. The receivers are positioned in an appropriate configuration at known depths. For example, in the case of vertical seismic profiling, the receivers are aligned vertically, spaced approximately 15 meters apart. The vertical travel time of the wave to each of the receivers is measured, and each such measurement is referred to as a "check-shot" record. Multiple sources may be added, or a single source may be moved along predetermined paths, generating seismic waves periodically in order to sample different points in the subsurface. The result is a series of check-shot records, where each check-shot is typically a two- or three-dimensional array representing a spatial dimension (the source-receiver offset) and a temporal dimension (the vertical travel time).
Data processing
The acquired data has to be rearranged and processed to generate a meaningful seismic profile: a two-dimensional picture of the cross section along a vertical plane passing through the source and receivers. This consists of a series of processes: filtering, deconvolution, stacking and migration.
Multichannel filtering
Multichannel filters may be applied to each individual record or to the final seismic profile. This may be done to separate different types of waves and to improve the signal-to-noise ratio. There are two well-known methods of designing velocity filters for seismic data processing applications.
Two-dimensional Fourier transform design
The two-dimensional Fourier transform is defined as
F(k_x, ω) = ∫∫ f(x, t) e^{−j(k_x x + ωt)} dx dt,
where k_x is the spatial frequency (also known as wavenumber) and ω is the temporal frequency. The two-dimensional equivalent of the frequency domain is also referred to as the k_x–ω domain. There are various techniques to design two-dimensional filters based on the Fourier transform, such as the minimax design method and design by transformation. One disadvantage of Fourier transform design is its global nature; it may filter out some desired components as well.
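To make the velocity-filter idea concrete, here is a minimal NumPy sketch of a fan filter in the k_x–ω domain (an illustration added here, not from the article; the function name, grid spacings, and cutoff velocity are invented for the example): components whose apparent velocity |ω/k_x| falls below a chosen minimum are zeroed, and the section is transformed back to the space-time domain.

# Minimal f-k (velocity) fan filter: keep only components with apparent
# velocity |omega / k_x| >= v_min, then invert back to the x-t domain.
import numpy as np

def fk_velocity_filter(section, dx, dt, v_min):
    """section: 2-D array of traces, shape (n_x, n_t)."""
    n_x, n_t = section.shape
    spectrum = np.fft.fft2(section)               # to the k_x-omega domain
    k = np.fft.fftfreq(n_x, d=dx) * 2 * np.pi     # spatial frequencies
    w = np.fft.fftfreq(n_t, d=dt) * 2 * np.pi     # temporal frequencies
    K, W = np.meshgrid(k, w, indexing="ij")
    mask = np.abs(W) >= v_min * np.abs(K)         # fan: |w/k| >= v_min
    return np.real(np.fft.ifft2(spectrum * mask)) # back to x-t

# Example: 60 traces of 500 samples, 10 m trace spacing, 4 ms sampling.
data = np.random.randn(60, 500)
filtered = fk_velocity_filter(data, dx=10.0, dt=0.004, v_min=1500.0)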
τ-p transform design
The τ-p transform is a special case of the Radon transform, and is simpler to apply than the Fourier transform. It allows one to study different wave modes as a function of their slowness values, p. Application of this transform involves summing (stacking) all traces in a record along a slope (slant), which results in a single trace (called the p value, slowness or the ray parameter). It transforms the input data from the space-time domain to the intercept time-slowness domain. Each value on the trace p is the sum of all the samples along the line t = τ + px. The transform is defined by
u(p, τ) = ∫ s(x, τ + px) dx.
The τ-p transform converts seismic records into a domain where all these events are separated. Simply put, each point in the τ-p domain is the sum of all the points in the x-t plane lying along a straight line with a slope p and intercept τ. That also means a point in the x-t domain transforms into a line in the τ-p domain, hyperbolae transform into ellipses, and so on. Similar to the Fourier transform, a signal in the τ-p domain can also be transformed back into the x-t domain.
Deconvolution
During data acquisition, various effects have to be accounted for, such as near-surface structure around the source, noise, wavefront divergence and reverberations. It has to be ensured that a change in the seismic trace reflects a change in the geology and not one of the effects mentioned above. Deconvolution negates these effects to an extent and thus increases the resolution of the seismic data.
Seismic data, or a seismogram, may be considered as a convolution of the source wavelet, the reflectivity and noise. Its deconvolution is usually implemented as a convolution with an inverse filter. Various well-known deconvolution techniques already exist for one dimension, such as predictive deconvolution, Kalman filtering and deterministic deconvolution. In multiple dimensions, however, the deconvolution process is iterative due to the difficulty of defining an inverse operator. The output data sample may be represented as
y(x⃗, t) = w(t) ∗ r(x⃗, t),
where w(t) represents the source wavelet, r(x⃗, t) is the reflectivity function, x⃗ is the space vector and t is the time variable. The iterative equation for deconvolution is of the form
r_{i+1}(x⃗, t) = r_i(x⃗, t) + λ[y(x⃗, t) − w(t) ∗ r_i(x⃗, t)]
and r_0(x⃗, t) = λ y(x⃗, t), where λ is a constant. Taking the Fourier transform of the iterative equation gives
R_{i+1}(x⃗, ω) = λ Y(x⃗, ω) + [1 − λ W(ω)] R_i(x⃗, ω).
This is a first-order one-dimensional difference equation with index i, input λY, and coefficients [1 − λW(ω)] that are functions of ω. The impulse response is h(i) = [1 − λW(ω)]^i u(i), where u(i) represents the one-dimensional unit step function. The output then becomes
R_i(x⃗, ω) = λ Y(x⃗, ω) Σ_{n=0}^{i} [1 − λW(ω)]^n.
The above equation can be approximated as R_i(x⃗, ω) ≈ Y(x⃗, ω)/W(ω), if |1 − λW(ω)| < 1 and i → ∞. Note that the output is the same as the output of an inverse filter. An inverse filter does not actually have to be realized, and the iterative procedure can be easily implemented on a computer.
Stacking
Stacking is another process used to improve the signal-to-noise ratio of the seismic profile. This involves gathering seismic traces from points at the same depth and summing them. This is referred to as "common depth-point stacking" or "common midpoint stacking". Simply speaking, when these traces are merged, the background noise cancels itself out and the seismic signals add up, thus improving the SNR.
Migration
Consider a seismic wave s(x, z, t) travelling upwards towards the surface, where x is the position on the surface and z is the depth. The wave's propagation is described by
∂²s/∂x² + ∂²s/∂z² = (1/c²) ∂²s/∂t².
Migration refers to this wave's backward propagation. The two-dimensional Fourier transform of the wave at depth z is given by
S(k_x, z, ω) = ∫∫ s(x, z, t) e^{−j(k_x x + ωt)} dx dt.
To obtain the wave profile at z = 0, the wave field can be extrapolated to z = 0 using a linear filter with an ideal response given by
H(k_x, ω) = e^{ j √(ω²/c² − k_x²) z } for |ω/c| > |k_x|,
where k_x is the x component of the wavenumber, ω is the temporal frequency and c is the propagation velocity. For implementation, a complex fan filter is used to approximate the ideal filter described above. It must allow propagation in the region |ω/c| > |k_x| (called the propagating region) and attenuate waves in the region |ω/c| < |k_x| (called the evanescent region). The ideal frequency response is shown in the figure.
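A minimal NumPy sketch of the extrapolation step is given below (an illustration added here, not from the article; the function name, constant velocity, grid sizes, and depth are invented for the example). It applies the ideal response in the k_x–ω domain, zeroing the evanescent region exactly as the fan filter is meant to approximate.

# Phase-shift wavefield extrapolation: shift the recorded wavefield from
# depth z back to the surface in the k_x-omega domain, keeping only the
# propagating region |omega/c| > |k_x|.
import numpy as np

def extrapolate_to_surface(wavefield, dx, dt, z, c):
    """wavefield: 2-D array s(x, t) recorded at depth z, shape (n_x, n_t)."""
    n_x, n_t = wavefield.shape
    S = np.fft.fft2(wavefield)                 # S(k_x, omega) at depth z
    kx = np.fft.fftfreq(n_x, d=dx) * 2 * np.pi
    w = np.fft.fftfreq(n_t, d=dt) * 2 * np.pi
    KX, W = np.meshgrid(kx, w, indexing="ij")
    arg = (W / c) ** 2 - KX ** 2
    propagating = arg > 0                      # evanescent region is zeroed
    kz = np.sqrt(np.where(propagating, arg, 0.0))
    H = np.where(propagating, np.exp(1j * kz * z), 0.0)  # ideal response
    return np.real(np.fft.ifft2(S * H))

surface = extrapolate_to_surface(np.random.randn(64, 512),
                                 dx=10.0, dt=0.004, z=200.0, c=2000.0)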
References
External links
Tau-P Processing of Seismic Refraction Data
Reflections on the Deconvolution of Land Seismic Data
Seismic profiling
COMMON-MIDPOINT STACKING
Geophysics
Multidimensional seismic data processing
Physics
1,435
40,328,918
https://en.wikipedia.org/wiki/Humidesulfovibrio%20idahonensis
Humidesulfovibrio idahonensis is a bacterium. It contains c-type cytochromes and reduces sulfate, sulfite, thiosulfate, elemental sulfur, DMSO, anthraquinone disulfonate and fumarate. The type strain is CY1T (= DSM 15450T = JCM 14124T). Originally described under Desulfovibrio, it was reassigned to Humidesulfovibrio by Waite et al. in 2020.
References
Further reading
Staley, James T., et al. Bergey's Manual of Systematic Bacteriology, vol. 3. Williams and Wilkins, Baltimore, MD (1989): 2250–2251.
Bélaich, Jean-Pierre, Mireille Bruschi, and Jean-Louis Garcia, eds. Microbiology and Biochemistry of Strict Anaerobes Involved in Interspecies Hydrogen Transfer. No. 54. Springer, 1990.
External links
LPSN
Type strain of Desulfovibrio idahonensis at BacDive - the Bacterial Diversity Metadatabase
Bacteria described in 2009
Desulfovibrionales
Humidesulfovibrio idahonensis
Biology
239
357,125
https://en.wikipedia.org/wiki/Imaginary%20friend
Imaginary friends (also known as pretend friends, invisible friends or made-up friends) are a psychological and social phenomenon where a friendship or other interpersonal relationship takes place in the imagination rather than physical reality. Although they may seem real to their creators, children usually understand that their imaginary friends are not real.
The first studies focusing on imaginary friends are believed to have been conducted during the 1890s. There is little research about the concept of imaginary friends in children's imaginations. Klausen and Passman (2007) report that imaginary companions were originally described as supernatural creatures and spirits that were thought to connect people with their past lives. Adults throughout history have had entities such as household gods, guardian angels, and muses that functioned as imaginary companions to provide comfort, guidance and inspiration for creative work. The phenomenon may have appeared among children in the mid-19th century, when childhood was first emphasized as an important time for play and imagination.
Description
In some studies, imaginary friends are defined as children impersonating a specific character (imagined by them), or as objects or toys that are personified. However, some psychologists define an imaginary friend only as a separately created character. Imaginary friends can be people, but they can also take the shape of other characters such as animals, or abstract ideas such as ghosts, monsters, robots, aliens or angels. These characters can be created at any point during a lifetime, though Western culture suggests they are most acceptable in preschool- and school-age children. Most research agrees that girls are more likely than boys to develop imaginary friends; once children reach school age, boys and girls are equally likely to have an imaginary companion. Research has often reiterated that there is not a specific "type" of child that creates an imaginary friend. When children have a fantasy, they may come to believe that some imaginary world exists in another universe, or they may create an imaginary world for their imaginary friends to live in. Research has shown that imaginary friends are a normative part of childhood and even adulthood.
Additionally, some psychologists suggest that imaginary friends are much like a fictional character created by an author. As Eileen Kennedy-Moore points out, "Adult fiction writers often talk about their characters taking on a life of their own, which may be an analogous process to children's invisible friends." In addition, Marjorie Taylor and her colleagues have found that fiction writers are more likely than average to have had imaginary friends as children.
There is a difference between the common imaginary friends that many children create and the imaginary voices of psychopathology. When inner voices accompany a psychological disorder, they typically add negativity to the conversation, and the person with the disorder may believe that the imagined voices are physically real rather than an imagined inner dialogue.
Imaginary friends can serve various functions. Playing with imaginary friends enables children to enact behaviors and events they have not yet experienced. Imaginary play allows children to use their imagination to construct knowledge of the world. In addition, imaginary friends might also fulfill children's innate desire to connect with others before actual play among peers is common.
According to psychologist Lev Vygotsky, cultural tools and interaction with people mediate psychological functioning and cognitive development. Imaginary friends, perceived as real beings, could teach children how to interact with others, along with many other social skills. Vygotsky's sociocultural view of child development includes the notion of children's "zone of proximal development," which is the difference between what children can do with and without help. Imaginary friends can aid children in learning things about the world that they could not learn without help, such as appropriate social behavior, and thus can act as a scaffold for children to achieve slightly above their social capability. In addition, imaginary friends serve as a means for children to experiment with and explore the world. In this sense, imaginary companions also relate to Piaget's theory of child development, because they are completely constructed by the child. According to Piaget, children are scientific problem solvers who self-construct experiences and build internal mental structures based on experimentation. The creation of and interaction with imaginary companions helps children to build such mental structures. The relationship between a child and their imaginary friend can serve as a catalyst for the formation of real relationships in later development, and thus provides a head start in practising real-life interaction.
Research
It has been theorized that children with imaginary friends may develop language skills and retain knowledge faster than children without them, which may be because these children get more linguistic practice than their peers as a result of carrying out "conversations" with their imaginary friends.
Kutner (n.d.) reported that 65% of 7-year-old children report they have had an imaginary companion at some point in their lives. He further reported: Imaginary friends are an integral part of many children's lives. They provide comfort in times of stress, companionship when they're lonely, someone to boss around when they feel powerless, and someone to blame for the broken lamp in the living room. Most important, an imaginary companion is a tool young children use to help them make sense of the adult world.
Taylor, Carlson & Gerow (c. 2001: p. 190) hold that, despite some results suggesting that children with imaginary friends might be superior in intelligence, it is not true that all intelligent children create them.
If imaginary friends can assist children in developing their social skills, they must play important roles in children's lives. Hoff (2004-2005) set out to identify the roles and functions of imaginary friends and how they impacted the lives of children. The results of her study provided significant insight into these roles. Many of the children reported their imaginary friends as being sources of comfort in times of boredom and loneliness. Another interesting result was that imaginary friends served as mentors for children in their academics: they were encouraging, provided motivation, and increased the self-esteem of children when they did well in school. Finally, imaginary friends were reported as being moral guides for children. Many of the children reported that their imaginary friends served as a conscience and helped them to make the correct decision in times when morality was questioned.
Other professionals, such as Marjorie Taylor, consider imaginary friends common among school-age children and part of normal social-cognitive development. Part of the reason people believed children gave up imaginary companions earlier than has been observed is related to Piaget's stages of cognitive development: Piaget suggested that imaginary companions disappeared once children entered the concrete operational stage of development. Marjorie Taylor identified middle-school children with imaginary friends and followed up six years later as they were completing high school. At follow-up, those who had imaginary friends in middle school displayed better coping strategies but a "low social preference for peers." She suggested that imaginary friends may directly benefit children's resiliency and positive adjustment.
Because imagination play with a character often involves the child imagining how another person (or character) would act, research has been done to determine whether having an imaginary companion has a positive effect on theory of mind development. In an earlier study, Taylor & Carlson (1997) found that 4-year-old children who had imaginary friends scored higher on emotional understanding measures, and that having a theory of mind predicts higher emotional understanding later in life. When children develop the realization that other people have thoughts and beliefs different from their own, they are able to grow in their development of theory of mind, as they begin to have a better understanding of emotions.
Positive psychology
The article "Pretend play and positive psychology: Natural companions" identified several strengths seen in children who engage in pretend play. These five areas are creativity, coping, emotion regulation, empathy/emotional understanding and hope. Hope seems to be the underlying tool children use in motivation: children become more motivated when they believe in themselves, and so they will not be discouraged from coming up with different ways of thinking, because they will have confidence. Imaginary companionship displays immense creativity, helping children to develop their social skills, and creativity is a frequently discussed topic in positive psychology. An imaginary companion can be considered the product of the child's creativity, whereas the communication between the imaginary friend and the child is considered to be the process.
Adolescence
"Imaginary companions in adolescence: sign of a deficient or positive development?" explores the extent to which adolescents create imaginary companions. The researchers explored the prevalence of imaginary companions in adolescence by investigating the diaries of adolescents aged 12-17. In addition, they looked at the characteristics of these imaginary companions and did a content analysis of the data obtained in the diaries. Three hypotheses were tested: (1) the deficit hypothesis, (2) the giftedness hypothesis, and (3) the egocentrism hypothesis. The results of their study indicated that creative and socially competent adolescents with good coping skills were particularly prone to the creation of these imaginary friends. These findings did not support the deficit hypothesis or the egocentrism hypothesis, further suggesting that these imaginary companions were not created with the aim of replacing or substituting for a real-life family member or friend; the adolescents simply created another "very special friend".
This is notable because it is usually assumed that children who create imaginary companions have deficits of some sort, and imaginary companions are widely assumed to have disappeared by adolescence. Tulpa Following the popularizing and secularizing of the concept of tulpa in the Western world, practitioners, calling themselves "tulpamancers", report an improvement to their personal lives through the practice, as well as new, unusual sensory experiences. Some practitioners use the tulpa for sexual and romantic interactions, though the practice is considered taboo. A survey of the community with 118 respondents on the explanation of tulpas found 8.5% support a metaphysical explanation, 76.5% support a neurological or psychological explanation, and 14% "other" explanations. Nearly all practitioners consider the tulpa a real or somewhat-real person. The number of active participants in these online communities is in the low hundreds, and few meetings in person have taken place. Birth order To uncover the origin of imaginary companions and learn more about the children who create them, it is necessary to seek out children who have created imaginary companions. Because young children cannot accurately self-report, the most effective way to gather information about children and their imaginary companions is to interview the people who spend the most time with them. Often mothers are the primary caretakers who spend the most time with a child. Therefore, for this study 78 mothers were interviewed and asked whether their child had an imaginary friend. If the mother revealed that her child did not have an imaginary companion, the researcher asked about the child's tendency to personify objects. To convey the meaning of personified objects, the researchers explained to the mothers that it is common for children to choose a specific toy or object that they are particularly attached to or fond of. For the object to qualify as a personified object, the child had to treat it as animate. It was also necessary to establish what counted as an imaginary friend: to distinguish children who had an imaginary companion from those who did not, the friend had to have been in existence for at least one month. To examine the developmental significance of preschool children's imaginary companions, the mothers of the children were interviewed. The major conclusion from the study was that there is a significant distinction between invisible companions and personified objects. A significant finding in this study was the role of the child's birth order in the family in terms of having an imaginary companion or not. The results of the interviews with mothers indicated that children with imaginary friends were more likely to be first-born children when compared to children who did not have an imaginary companion at all. This study further supports the idea that children may create imaginary friends to work on social development. The finding that a first-born child is more likely to have an imaginary friend sheds some light on the idea that the child needs to socialize and therefore creates the imaginary friend to develop their social skills. This is an extremely creative way for children to develop their social skills; creativity is a frequently discussed topic in positive psychology. An imaginary companion can be considered the product of creativity, whereas the communication between the imaginary friend and the child is the process. 
With regard to birth order, there is also research on children who do not have any siblings at all. The research in this area further investigates the notion that children create imaginary companions due to the absence of peer relationships. A study that examined differences in self-talk frequency as a function of age, only-child status, and imaginary childhood companion status provides insight into the commonalities of children with imaginary companions. The researchers collected information from college students who were asked if they ever had an imaginary friend as a child (Brinthaupt & Dove, 2012). There were three trials in the study, and the researchers found significant differences in self-talk between different age groupings. Their first trial indicated that only children who had created imaginary companions engaged in higher levels of positive self-talk and showed more positive social development. They also found that women were more likely than men to have had an imaginary companion. Their findings were consistent with other research which suggests that it is more common for females to have imaginary companions. The researchers suggested that women may be more likely to have imaginary companions because they are more likely to rely on feedback from persons other than themselves, thus supporting the theory that men have more self-reinforcing self-talk. Furthermore, other research has concluded that women seek more social support than men, which could be another reason for creating these imaginary companions. The second trial found that children without siblings reported more self-talk than children with siblings; the third trial found that the students who reported having an imaginary friend also reported more self-talk than the other students who did not have imaginary friends. When self-talk is negative, it is associated with effects such as increased anxiety and depression. The researchers concluded that "individuals with higher levels of social-assessment and critical self-talk reported lower self-esteem and more frequent automatic negative self-statements." When self-talk is positive, however, the study found that "people with higher levels of self-reinforcing self-talk reported more positive self-esteem and more frequent automatic positive self-statements". See also References Further reading Gleason, T. (2009). 'Imaginary companions.' In Harry T. Reis & Susan Sprecher (Eds.), Encyclopedia of Human Relationships (pp. 833–834). Thousand Oaks, CA: Sage. Hall, E. (1982). 'The fearful child's hidden talents [Interview with Jerome Kagan].' Psychology Today, 16 (July), 50–59. Partington, J., & Grant, C. (1984). 'Imaginary playmates and other useful fantasies.' In P. Smith (Ed.), Play in animals and humans (pp. 217–240). New York: Basil Blackwell. Imaginary Friends with Dr Evan Kidd podcast interview with Dr Evan Kidd of La Trobe University Children's games Developmental psychology Interpersonal relationships Friend Stock characters Fantasy tropes Science fiction themes Hallucinations Nonexistent things
Imaginary friend
Biology
3,092
25,817,713
https://en.wikipedia.org/wiki/Digital%20mailroom
Digital mailroom is the automation of incoming mail processes. Using document scanning and document capture technologies, companies can digitize incoming mail and automate the classification and distribution of mail within the organization. Both paper and electronic mail (email) can be managed through the same process, allowing companies to standardize their internal mail distribution procedures and adhere to company compliance policies. Many companies still believe that they are legally bound to archive some documents as paper for a certain time, such as accounting documents or contracts. According to a recent survey by AIIM, legal admissibility of scanned documents is still seen as an issue in over a quarter of businesses. However, the reality is that these rules only apply to a small minority of documents. Most digitized documents are now legally admissible in a court of law. The new British Standard, BS 10008 "Evidential weight and legal admissibility of electronic information", covers this in detail. The culture of 'avoiding risk at all cost' is what compels companies to print and archive thousands of documents every day. Reasons for implementation Mail volumes continue to grow, stimulated by business growth and mobile workforces. For example, medium-sized companies now process 100,000 pieces of mail a month and service over 200 departments. In addition, the corporate mailroom, a vital link in the corporate information system, is struggling to keep abreast of this paper flow. Meanwhile, today's organizations demand instant, accurate information; US businesses spend over $500 billion annually turning the information on the documents they receive every day into useful data that they can use to run their business. The need for corporate compliance and accountability has also forced large corporations to invest heavily in information backup, storage systems, and compliance solutions. Some corporate mailrooms have benefited from the development of high-speed automation equipment designed for moving physical mail more efficiently through the system. However, the challenges are daunting, considering that most mailrooms are using one-piece-at-a-time visual identification and manual sorting methods. By digitizing the incoming mail process, and indexing the documents on the fly, companies can not only gain control of their mail processes internally (no more efficiency losses, gaps in document control and loss of valuable mail), but also have the opportunity to combine electronic mail formats (e-mail, fax) in the same document processing flow. A digital mailroom designed as a central platform for information allows an organization to bring rationality to mail processing and to achieve significant gains in productivity and customer service. Benefits Reducing the decision cycle One major benefit of turning all incoming paper mail into images as soon as it is received is the extent to which it shortens the decision cycle. Employees can access images more quickly, regardless of where the documents were physically acquired. Files can then be processed very rapidly according to their level of urgency. Just as digital mailrooms facilitate the exchange of company information, they also facilitate the coordination of several people around the same document. The decision-making process becomes quicker and more accurate. Rationalizing the circulation of information The various technologies at the heart of digital mailrooms help companies rationalize their processes, e.g. 
it allows companies to reduce costs associated with resending documents between sites. Reducing paper costs Mailroom costs not only include the staff costs involved in the distribution of letters, but also the costs associated with the resending, loss or deterioration of documents. A digital mailroom implementation has a direct effect on all those costs and becomes a key element of competitiveness for the company. Another source of paper costs is the physical storage of documents. Encouraging employees "to do without paper" will quickly lead to a reduction in the cost of excessive printing and copying of documents. The aim is obviously not to ban paper from the work environment but rather to set up a new, coherent and secure organization that makes the use of paper superfluous. Ensuring data tracking Ensuring incoming mail tracking has become a necessity for the majority of companies, with compliance regulation being a major factor. The earlier a document is transformed into an image file, the more reliably it can be tracked throughout its life cycle. Furthermore, a scanned document becomes accessible to all authorized users (as a PDF, TIFF or JPEG file). The file created includes more than simply images; it references one or more documents in the archive database and records all the actions carried out by the people responsible for the file. The security of the process guarantees the authenticity and integrity of the document, which aligns with the records management policy of the company. Improving customer service The electronic management of incoming mail improves the handling of documents within service-oriented companies and agencies. It enhances the quality of the service offered to customers by allowing staff to instantly access customer files and answer questions immediately. The improvement of customer service is considered to be of fundamental importance by the majority of companies. Reducing dependency on physical office locations Digitizing mail upon its entry into an organization, coupled with content management systems, allows mail to be routed to appropriate staff members regardless of their physical location. The growing trend of remote work has lessened the overall need for real estate for the company mailroom. Many companies are now opting for mailroom software to automate their entire inbound and outbound mail operations. Technologies Document capture "Document capture" is the act of scanning paper documents so they can be archived and retrieved in their original image format. It is the most widespread imaging technology used by companies today. Software improvements now make it possible to capture paper documents while importing electronic files and to process them together through the same production platform. Both incoming paper and electronic mail can now be archived together at the same storage location. Another major change is the ability to scan documents from remote locations and to retrieve them through a web interface. This is known as distributed capture and provides many cost benefits to companies with multiple branch offices or remotely located staff. Data capture Originally, forms processing technologies were only able to extract and validate data from structured documents such as administrative forms. The improvements in OCR technology now make it possible to automatically extract all data from semi-structured documents (e.g. invoices) – the technical acronym for this is Intelligent Document Capture (IDC). 
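To illustrate the kind of rule-based extraction and routing described above, here is a minimal Python sketch; the field patterns, department names and the OCR text are hypothetical, and commercial IDC products use far more sophisticated, trained models:

```python
import re

# Hypothetical OCR output from a scanned invoice (illustrative only).
ocr_text = """ACME Supplies Ltd
Invoice No: INV-2024-0042
Total Due: 1,280.50 EUR"""

# Simple patterns for two index fields on a semi-structured document.
PATTERNS = {
    "invoice_number": re.compile(r"Invoice No:\s*(\S+)"),
    "total_due": re.compile(r"Total Due:\s*([\d,.]+)"),
}

def extract_fields(text: str) -> dict:
    """Return whichever index fields the patterns can find in the text."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            fields[name] = match.group(1)
    return fields

def route(fields: dict) -> str:
    """Route the document: recognized invoices go to accounts payable,
    anything else falls back to a manual-review queue."""
    return "accounts-payable" if "invoice_number" in fields else "manual-review"

fields = extract_fields(ocr_text)
print(fields, "->", route(fields))
# {'invoice_number': 'INV-2024-0042', 'total_due': '1,280.50'} -> accounts-payable
```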
For fully unstructured documents (e.g. legal contracts, customer correspondence, and white mail), it is not yet possible to locate and extract all information. However, technologies have improved enough to identify the document type and automatically extract key information that can be used to index the document and/or route the document to the right department or recipient. Document classification Software using a graphical approach can analyze and classify mixed batches of structured or semi-structured documents in order to build a library of templates. Using this auto-generated library of templates, the software can then identify and extract data from any scanned document in a single flow. This image-based classification approach, combined with a full-text analysis of certain documents (based on a keyword search), are the main technologies used today to process semi- or unstructured documents. These innovative automatic classification technologies reduce the need for pre-sorting documents before the scanning process. As a result, companies receiving high volumes of paper mail can make significant cost reductions every year. Workflow Workflow applications enable electronic documents and information to circulate inside the company. They might have to manage very complex processes related to multiple locations due to the globalization of companies. The increasing importance of security is another vital challenge. One of the key developments in workflow technologies is around making company processes and workflow processes more consistent in order to avoid organizational changes when implementing these tools. Although company organizations tend to become increasingly complex, these software solutions are becoming simpler in terms of implementation and interfaces. Archiving Due to the high volume of documents that need to be archived by organizations today, it is critical that the documents can be stored rapidly, securely managed and quickly retrieved through a common user interface. Documents can usually be archived on a variety of electronic storage media and easily retrieved through a Web interface (thin client). There are many archiving solutions on the market today, some as a component of an ECM or document management solution and some as a stand-alone system specifically designed for the purpose of high volume, high speed archiving. Document and content management A document can be an image, a file stored and compressed in a tiff, gif or jpg electronic exchange format, or an MS Office file or a PDF file (Acrobat exchange format). Content generally includes all the above combined with any data/information as well as other electronic files such as e-mails and web pages. Content Management solutions need back-end repositories or databases (e.g. Oracle or MS SQL Server) to store the files and retrieval data. During the last decade, these software solutions have benefited from the universal XML standard used to index, store and access files to and from any repository. Relative to the other systems, content management systems manage more complex administrative, access and workflow rules in relation to the number of files and file formats it needs to support. Evolution of hardware The range of hardware available to turn paper documents into digital images has increased considerably in the last 10 years. 
Although desktop scanners and multi-function devices (MFDs) are now very affordable and well suited to small-office or departmental scanning requirements, the need for high-speed, high-volume document scanners is still evident. The speed, reliability and increased functionality of these high-end scanners can save considerable time and money in the long term. Today, it is possible to scan documents of different dimensions and formats in the same flow, scan color documents, sort them physically and read data from them using OCR and barcode technologies during the scanning process. Processing speed has also significantly increased. This evolution, together with the existence of machines able to completely automate mail processing – opening envelopes, removing staples, scanning, sorting – plays a significant role in the development of large-volume paper processing such as mail processing. The next step for companies is to rationalize their mail processing to be as consistent as possible with their organizational structure, e.g. choosing between the implementation of a centralized digital mailroom, the implementation of decentralized mail scanning facilities, or a combination of the two. Adoption According to a survey conducted by AIIM in 2009, centralized in-house scanning and mailroom scanning are set for considerable growth in take-up compared to outsourced scanning and capture. Of the survey respondents, 48% have a centralized, in-house scanning service, citing better indexing and closer integration with the process as the main benefits. References Implementing a Digital Mailroom – Datafinity, July 2012. AIIM Industry Watch – Document Scanning and Capture Survey, Q4 2009. External links AIIM Europe – ECM industry association GRM Mail Scanning Services SecureScan Digital Mailroom Automation Business process Mail delivery agents Postal systems
Digital mailroom
Technology
2,210
18,571,998
https://en.wikipedia.org/wiki/Osteolepiformes
Osteolepiformes, also known as Osteolepidida, is a group of prehistoric lobe-finned fishes which first appeared during the Devonian period. The order contains the families Canowindridae, Megalichthyidae, Osteolepididae and Tristichopteridae, in addition to several monotypic families. The order is generally considered to be paraphyletic because the characters that define it are mainly attributes of stem tetrapodomorphs. The following taxonomy is based on Borgen & Nakrem, 2016: Order Osteolepiformes Suborder Osteolepidoidei Family Osteolepididae Family Thursiidae Family Megalichthyidae Suborder Cyclolepidoidei Superfamily Eopodoidea Family Chrysolepididae Family Gyroptychiidae Family Panderichthyidae (incl. Elpistostegalia) Family Tristichopteridae Superfamily Parapodoidea Family Canowindridae Family Medoevididae Superfamily Rhizodontoidea The paraphyly of Osteolepiformes is illustrated by a cladogram compiled and modified from Ahlberg and Johanson (1998), in which Osteolepiformes is marked by a green bracket; see also Swartz (2012). References External links Tree of Life Tetrapodomorph orders Paraphyletic groups
Osteolepiformes
Biology
293
7,661,576
https://en.wikipedia.org/wiki/Disgregation
In the history of thermodynamics, disgregation is an early formulation of the concept of entropy. It was defined in 1862 by Rudolf Clausius as a measure of the degree to which the molecules of a body are separated from each other. Disgregation was the stepping stone for Clausius to create the mathematical expression for the second law of thermodynamics. Clausius modeled the concept on certain passages in French physicist Sadi Carnot's 1824 paper On the Motive Power of Fire which characterized the transformations of working substances (particles of a thermodynamic system) of an engine cycle, namely "mode of aggregation". The concept was later extended by Clausius in 1865 in the formulation of entropy, and in Ludwig Boltzmann's 1870s developments including the diversities of the motions of the microscopic constituents of matter, described in terms of order and disorder. In 1949, Edward Armand Guggenheim developed the concept of energy dispersal. The terms disgregation and dispersal are near in meaning. Historical context In 1824, French physicist Sadi Carnot assumed that heat, like a substance, cannot be diminished in quantity and that it cannot increase. Specifically, he states that in a complete engine cycle 'that when a body has experienced any changes, and when after a certain number of transformations it returns to precisely its original state, that is, to that state considered in respect to density, to temperature, to mode of aggregation, let us suppose, I say that this body is found to contain the same quantity of heat that it contained at first, or else that the quantities of heat absorbed or set free in these different transformations are exactly compensated.' Furthermore, he states that 'this fact has never been called into question' and 'to deny this would overthrow the whole theory of heat to which it serves as a basis.' This famous sentence, which Carnot spent fifteen years thinking about, marks the start of thermodynamics and signals the slow transition from the older caloric theory to the newer kinetic theory, in which heat is a type of energy in transit. In 1862, Clausius defined what is now known as entropy or the energetic effects related to irreversibility as the "equivalence-values of transformations" in a thermodynamic cycle. Clausius then signifies the difference between "reversible" (ideal) and "irreversible" (real) processes: Definition In 1862, Clausius labelled the quantity of disgregation with the letter Z, and defined its change as the sum of the changes in heat and enthalpy divided by the temperature of the system: dZ = (dQ + dH)/T. Clausius introduced disgregation in the following passage: Equivalence-values of transformations Clausius states what he calls the "theorem respecting the equivalence-values of the transformations" or what is now known as the second law of thermodynamics, as such: Quantitatively, Clausius states the mathematical expression for this theorem is as follows. Let dQ be an element of the heat given up by the body to any reservoir of heat during its own changes, heat which it may absorb from a reservoir being here reckoned as negative, and T the absolute temperature of the body at the moment of giving up this heat, then the equation: ∫ dQ/T = 0 must be true for every reversible cyclical process, and the relation: ∫ dQ/T ≥ 0 must hold good for every cyclical process which is in any way possible. 
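As a simple worked illustration of an equivalence-value (a modern calculation, not taken from Clausius's paper, and assuming the usual textbook latent heat of fusion of ice of about 334 J/g):

```latex
% Reversible melting of 1 g of ice at its normal melting point, T = 273.15 K:
\Delta S = \frac{Q}{T} = \frac{334\ \mathrm{J}}{273.15\ \mathrm{K}} \approx 1.22\ \mathrm{J\,K^{-1}}
```

In Clausius's language, this positive transformation value accompanies the increase in disgregation as the ordered arrangement of the solid is broken up.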
Verbal justification Clausius then points out the inherent difficulty in the mental comprehension of this law by stating: "although the necessity of this theorem admits of strict mathematical proof if we start from the fundamental proposition above quoted, it thereby nevertheless retains an abstract form, in which it is with difficulty embraced by the mind, and we feel compelled to seek for the precise physical cause, of which this theorem is a consequence." The justification for this law, according to Clausius, is based on the following argument: To elaborate on this, Clausius states that in all cases in which heat can perform mechanical work, these processes always admit to being reduced to the “alteration in some way or another of the arrangement of the constituent parts of the body.” To exemplify this, Clausius moves into a discussion of change of state of a body, i.e. solid, liquid, gas. For instance, he states, “when bodies are expanded by heat, their molecules being thus separated from each other: in this case the mutual attractions of the molecules on the one hand, and external opposing forces on the other, insofar as any such are in operation, have to be overcome. Again, the state of aggregation of bodies is altered by heat, solid bodies rendered liquid, and both solid and liquid bodies being rendered aeriform: here likewise internal forces, and in general external forces also, have to be overcome.” Ice melting Clausius discusses the example of the melting of ice, a classic example which is used in almost all chemistry books to this day, and explains a representation of the mechanical equivalent of work related to this energetic change mathematically: Measurement As it is difficult to obtain direct measures of the interior forces that the molecules of the body exert on each other, Clausius states that an indirect way to obtain quantitative measures of what is now called entropy is to calculate the work done in overcoming internal forces: In the case of the interior forces, it would accordingly be difficult—even if we did not want to measure them, but only to represent them mathematically—to find a fitting expression for them which would admit of a simple determination of the magnitude. This difficulty, however, disappears if we take into calculation, not the forces themselves, but the mechanical work which, in any change of arrangement, is required to overcome them. The expressions for the quantities of work are simpler than those for the corresponding forces; for the quantities of work can be all expressed, without further secondary statements, by the numbers which, having reference to the same unit, can be added together, or subtracted from one another, however various the forces may be to which they refer. It is therefore convenient to alter the form of the above law by introducing, instead of the forces themselves, the work done in overcoming them. In this form it reads as follows: See also Entropy (energy dispersal) References Thermodynamic entropy
Disgregation
Physics
1,280
44,983,054
https://en.wikipedia.org/wiki/Fanuankuwel
Fanuankuwel is a "place of a whale with two tails" location in Pacific and Polynesian mythology, recorded in the traditional celestial navigation techniques of the Caroline Islands. Part of the "trigger fishes tied together" mnemonic-navigational system, it is sometimes grouped with Kafeŕoor as a 'ghost island'. See also Celestial navigation Kafeŕoor Polynesian mythology Polynesian navigation Micronesian navigation Wa (watercraft) References Polynesian mythology Celestial navigation Mythological islands
Fanuankuwel
Astronomy
97
4,750,568
https://en.wikipedia.org/wiki/Necrobiosis
Necrobiosis is the physiological death of a cell, and can be caused by conditions such as basophilia, erythema, or a tumor. It is identified both with and without necrosis. Necrobiotic disorders are characterized by presence of necrobiotic granuloma on histopathology. Necrobiotic granuloma is described as aggregation of histiocytes around a central area of altered collagen and elastic fibers. Such a granuloma is typically arranged in a palisaded pattern. It is associated with necrobiosis lipoidica and granuloma annulare. Necrobiosis differs from apoptosis, which kills a damaged cell to protect the body from harm. References External links Cellular processes
Necrobiosis
Biology
154
12,439
https://en.wikipedia.org/wiki/Guanine
Guanine (symbol G or Gua) is one of the four main nucleotide bases found in the nucleic acids DNA and RNA, the others being adenine, cytosine, and thymine (uracil in RNA). In DNA, guanine is paired with cytosine. The guanine nucleoside is called guanosine. With the formula C5H5N5O, guanine is a derivative of purine, consisting of a fused pyrimidine-imidazole ring system with conjugated double bonds. This unsaturated arrangement means the bicyclic molecule is planar. Properties Guanine, along with adenine and cytosine, is present in both DNA and RNA, whereas thymine is usually seen only in DNA, and uracil only in RNA. Guanine has two tautomeric forms, the major keto form and the rare enol form. It binds to cytosine through three hydrogen bonds. In cytosine, the amino group acts as the hydrogen bond donor and the C-2 carbonyl and the N-3 amine as the hydrogen-bond acceptors. Guanine has the C-6 carbonyl group that acts as the hydrogen bond acceptor, while a group at N-1 and the amino group at C-2 act as the hydrogen bond donors. Guanine can be hydrolyzed with strong acid to glycine, ammonia, carbon dioxide, and carbon monoxide. First, guanine is deaminated to become xanthine. Guanine oxidizes more readily than adenine, the other purine-derivative base in DNA. Its high melting point of 350 °C reflects the intermolecular hydrogen bonding between the oxo and amino groups in the molecules in the crystal. Because of this intermolecular bonding, guanine is relatively insoluble in water, but it is soluble in dilute acids and bases. History The first isolation of guanine was reported in 1844 by the German chemist Julius Bodo Unger (1819–1885), who obtained it as a mineral formed from the excreta of sea birds, which is known as guano and which was used as a source of fertilizer; guanine was named in 1846. Between 1882 and 1906, Emil Fischer determined the structure and also showed that uric acid can be converted to guanine. Synthesis Trace amounts of guanine form by the polymerization of ammonium cyanide (NH4CN). Two experiments conducted by Levy et al. showed that heating 10 mol·L−1 NH4CN at 80 °C for 24 hours gave a yield of 0.0007%, while using 0.1 mol·L−1 NH4CN frozen at −20 °C for 25 years gave a 0.0035% yield. These results indicate guanine could arise in frozen regions of the primitive earth. In 1984, Yuasa reported a 0.00017% yield of guanine after the electrical discharge of NH3, CH4, C2H6, and 50 mL of water, followed by a subsequent acid hydrolysis. However, it is unknown whether the presence of guanine was not simply a resultant contaminant of the reaction. 10NH3 + 2CH4 + 4C2H6 + 2H2O → 2C5H8N5O (guanine) + 25H2 A Fischer–Tropsch synthesis can also be used to form guanine, along with adenine, uracil, and thymine. Heating an equimolar gas mixture of CO, H2, and NH3 to 700 °C for 15 to 24 minutes, followed by quick cooling and then sustained reheating to 100 to 200 °C for 16 to 44 hours with an alumina catalyst, yielded guanine and uracil: 10CO + H2 + 10NH3 → 2C5H8N5O (guanine) + 8H2O Another possible abiotic route was explored by quenching a high-temperature plasma of a 90% N2–10% CO–H2O gas mixture. Traube's synthesis involves heating 2,4,5-triamino-1,6-dihydro-6-oxypyrimidine (as the sulfate) with formic acid for several hours. Biosynthesis Guanine is not synthesized de novo. 
Instead, it is split from the more complex molecule guanosine by the enzyme guanosine phosphorylase: guanosine + phosphate ⇌ guanine + alpha-D-ribose 1-phosphate Guanine nucleotides, by contrast, can be synthesized de novo, with inosine monophosphate dehydrogenase as the rate-limiting enzyme. Other occurrences and biological uses The word guanine derives from the Spanish loanword guano ('bird/bat droppings'), which itself is from the Quechua word wanu, meaning 'dung'. As the Oxford English Dictionary notes, guanine is "A white amorphous substance obtained abundantly from guano, forming a constituent of the excrement of birds". In 1656 in Paris, a Mr. Jaquin extracted from the scales of the fish Alburnus alburnus so-called "pearl essence", which is crystalline guanine. In the cosmetics industry, crystalline guanine is used as an additive to various products (e.g., shampoos), where it provides a pearly iridescent effect. It is also used in metallic paints and simulated pearls and plastics. It provides shimmering luster to eye shadow and nail polish. Facial treatments using the droppings, or guano, from Japanese nightingales have been used in Japan and elsewhere, because the guanine in the droppings makes the skin look paler. Guanine crystals are rhombic platelets composed of multiple transparent layers, but they have a high index of refraction that partially reflects and transmits light from layer to layer, thus producing a pearly luster. It can be applied by spray, painting, or dipping. It may irritate the eyes. Its alternatives are mica, faux pearl (from ground shells), and aluminium and bronze particles. Guanine has a wide variety of biological uses, spanning functions of varying complexity and versatility. These include camouflage, display, and vision, among other purposes. Spiders, scorpions, and some amphibians convert ammonia, a product of protein metabolism in the cells, to guanine, as it can be excreted with minimal water loss. Guanine is also found in specialized skin cells of fish called iridocytes (e.g., the sturgeon), as well as being present in the reflective deposits of the eyes of deep-sea fish and some reptiles, such as crocodiles and chameleons. On 8 August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting building blocks of DNA and RNA (guanine, adenine and related organic molecules) may have been formed extra-terrestrially in outer space. See also Cytosine Guanine deaminase References External links Guanine MS Spectrum Guanine at chemicalland21.com Nucleobases Purines Cosmetics chemicals Organic minerals
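As a quick arithmetic check on the synthesis equations above, the short Python sketch below verifies that atoms balance in the Fischer–Tropsch-type route (the helper functions are our own illustration, not from any chemistry library):

```python
from collections import Counter
import re

def atoms(formula: str) -> Counter:
    """Count atoms in a simple formula such as 'C5H8N5O' (no parentheses)."""
    counts = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(number) if number else 1
    return counts

def side(terms) -> Counter:
    """Total atom counts over (coefficient, formula) pairs."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in atoms(formula).items():
            total[element] += coeff * n
    return total

# 10CO + H2 + 10NH3 -> 2C5H8N5O (guanine) + 8H2O
left = side([(10, "CO"), (1, "H2"), (10, "NH3")])
right = side([(2, "C5H8N5O"), (8, "H2O")])
assert left == right  # C, H, N and O all balance on both sides
```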
Guanine
Chemistry
1,520
10,694,451
https://en.wikipedia.org/wiki/DSS%20%28NMR%20standard%29
Sodium trimethylsilylpropanesulfonate (DSS) is the organosilicon compound with the formula (CH3)3SiCH2CH2CH2SO3−Na+. It is the sodium salt of trimethylsilylpropanesulfonic acid. A white, water-soluble solid, it is used as a chemical shift standard for proton NMR spectroscopy of aqueous solutions. The chemical shift, specifically the signal for the trimethylsilyl group, is relatively insensitive to pH. The proton spectrum of DSS also exhibits resonances at 2.91 ppm (m), 1.75 ppm (m), and 0.63 ppm (m) at an intensity of 22% of the reference resonance at 0 ppm. Alternatives Sodium trimethylsilyl propionate (TSP) is a related compound used as an NMR standard. It uses a carboxylic acid instead of the sulfonic acid found in DSS to confer water solubility. As a weak acid, TSP is more sensitive to changes in pH. 4,4-Dimethyl-4-silapentane-1-ammonium trifluoroacetate (DSA) has also been proposed as an alternative, to overcome certain drawbacks of DSS. References Sulfonic acids Trimethylsilyl compounds Organic sodium salts Nuclear magnetic resonance
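For reference, the chemical shift scale that DSS anchors is defined in the standard way (general NMR practice rather than anything specific to this compound), with the trimethylsilyl resonance of DSS assigned a shift of zero:

```latex
\delta_{\text{sample}} \;=\; \frac{\nu_{\text{sample}} - \nu_{\text{ref}}}{\nu_{\text{ref}}} \times 10^{6}\ \text{ppm},
\qquad \nu_{\text{ref}} \equiv \nu_{\text{DSS (trimethylsilyl)}} \;\;(\delta = 0)
```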
DSS (NMR standard)
Physics,Chemistry
298
3,500,739
https://en.wikipedia.org/wiki/Emulsifying%20wax
Emulsifying wax is a cosmetic emulsifying ingredient. The ingredient name is often followed by the initials NF, indicating that it conforms to the specifications of the National Formulary. Emulsifying wax is created when a wax material (either a vegetable wax of some kind or a petroleum-based wax) is treated with a detergent (typically sodium dodecyl sulfate or polysorbates) to cause it to make oil and water bind together into a smooth emulsion. It is a white waxy solid with a low fatty alcohol odor. According to the United States Pharmacopoeia - National Formulary (USP-NF), the ingredients for emulsifying wax NF are cetearyl alcohol and a polyoxyethylene derivative of a fatty acid ester of sorbitan (a polysorbate). In a cosmetic product, if the emulsifying wax used meets the standards for the National Formulary, it may be listed in the ingredient declaration by the term "emulsifying wax NF". Otherwise, the emulsifier is considered a blended ingredient and the individual components must be listed individually in the ingredient declaration, placed appropriately in descending order of predominance in the whole. Safety The Cosmetic Ingredient Review Expert Panel reviewed the safety and use of Emulsifying Wax NF in 1984. Their review of usage reported during the previous years found only 12 products using emulsifying wax; those all had usage rates under 10%. Over 35 animal and human studies were cited in the review; none showed more than minor irritation or reaction. The safety assessment found that Emulsifying Wax NF was safe to use as a cosmetic ingredient at the then-present practices and concentrations of use. The Cosmetic Ingredient Review Expert Panel revisited Emulsifying Wax NF in 2003. They found that it was used in 102 cosmetic products in 2002 at a maximum use concentration of 21% (in hair straighteners). Based on the data available in 2003, the CIR determined not to open a new safety assessment. References Waxes
Emulsifying wax
Physics
421
2,728,392
https://en.wikipedia.org/wiki/Alpha%20Delphini
Alpha Delphini (α Delphini, abbreviated Alpha Del, α Del) is a multiple star system in the constellation of Delphinus. It consists of a triple star, designated Alpha Delphini A, together with five faint, probably optical companions, designated Alpha Delphini B, C, D, E and F. A's two components are themselves designated Alpha Delphini Aa (officially named Sualocin, the historical name for the entire system) and Ab. Nomenclature α Delphini (Latinised to Alpha Delphini) is the system's Bayer designation. The designations of the six constituents as Alpha Delphini A to F, and those of A's components - Alpha Delphini Aa and Ab - derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). The primary star's components Aa, Ab1, and Ab2 are also sometimes referred to as A, Ba, and Bb respectively, given that the outer pair have been resolved. The system bore an historical name, Sualocin, which arose as follows: Niccolò Cacciatore was the assistant to Giuseppe Piazzi, and later his successor as Director of the Palermo Observatory. The name first appeared in Piazzi's Palermo Star Catalogue. When the Catalogue was published in 1814, the unfamiliar names Sualocin and Rotanev were attached to Alpha and Beta Delphini, respectively. Eventually the Reverend Thomas Webb, a British astronomer, puzzled out the explanation. Cacciatore's name, Nicholas Hunter in English translation, would be Latinized to Nicolaus Venator. Reversing the letters of this construction produces the two star names. They have endured, the result of Cacciatore's little practical joke of naming the two stars after himself. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Sualocin for the component Alpha Delphini Aa on 12 September 2016 and it is now so included in the List of IAU-approved Star Names. In Chinese, 瓠瓜 (Hù Guā), meaning Good Gourd, refers to an asterism consisting of Alpha Delphini, Gamma2 Delphini, Delta Delphini, Beta Delphini and Zeta Delphini. Consequently, the Chinese name for Alpha Delphini itself is 瓠瓜一 (Hù Guā yī, the First Star of Good Gourd). In Hindu astronomy, the star corresponded to one of the nakshatras named Dhanishta. Properties Alpha Delphini A is a spectroscopic binary star which has now been resolved using speckle interferometry. The components have a 17-year orbit. Alpha Delphini Aa has a spectral type of B9IV. It is a subgiant that has begun to evolve away from the main sequence, about 3.8 times as massive as the Sun and about twice as hot. The spectral type of the secondary star cannot be determined as it is too close and too faint compared to the primary, but it has been shown to itself be a binary star with an orbit of 30 days. Spectral lines showing 30-day radial velocity changes are likely to belong to the faintest component, expected from its mass to be an F-type star. The more massive star of the inner pair is then likely to be an A-type dwarf, possibly not detected in the spectrum because rapid rotation blurs its absorption lines. The five faint companions have visual magnitudes around 11th to 13th magnitude and separations of 35" to 72". They all show motion relative to Alpha Delphini A, and have much smaller parallaxes. 
References Delphini, Alpha Delphini, 09 Delphinus Binary stars 7 B-type subgiants Sualocin 101958 7906 BD+15 4222 196867
Alpha Delphini
Astronomy
826
10,479,653
https://en.wikipedia.org/wiki/Cross-resistance
Cross-resistance occurs when an organism develops resistance to several substances that share a similar mechanism of action. For example, if a certain type of bacteria develops resistance to one antibiotic, those bacteria will also be resistant to several other antibiotics that target the same protein or use the same route to get into the bacterium. A real example of cross-resistance occurred for nalidixic acid and ciprofloxacin, which are both quinolone antibiotics. When bacteria developed resistance to ciprofloxacin, they also developed resistance to nalidixic acid because both drugs inhibit topoisomerase, a key enzyme in DNA replication. Due to cross-resistance, antimicrobial treatments like phage therapy can quickly lose their efficacy against bacteria. This makes cross-resistance an important consideration in designing evolutionary therapies. Definition Cross-resistance is the idea that the development of resistance to one substance subsequently leads to resistance to one or more substances that can be resisted in a similar manner. It occurs when resistance is provided against multiple compounds through one single mechanism, like an efflux pump. This can keep concentrations of a toxic substance at low levels and can do so for multiple compounds. Increasing the activity of such a mechanism in response to one compound then also has a similar effect on the others. The precise definition of cross-resistance depends on the field of interest. Pest management In pest management, cross-resistance is defined as the development of resistance by pest populations to multiple pesticides within a chemical family. Similar to the case of microbes, this may occur due to shared binding target sites. For example, cadherin mutations may result in cross-resistance in H. armigera to Cry1Aa and Cry1Ab. There also exists multiple resistance, in which resistance to multiple pesticides occurs via different resistance mechanisms as opposed to the same mechanisms. Microorganisms In the context of viruses, cross-resistance is defined as the resistance of a virus to a new drug as a result of previous exposure to another drug. In the context of microbes, it is the resistance to multiple different antimicrobial agents as a result of a single molecular mechanism. Antibiotic resistance Cross-resistance is highly involved in the widespread issue of antibiotic resistance, an area of clinical relevance. There is a continued increase in the development of multidrug resistance in bacteria. This is partially due to the widespread use of antimicrobial compounds in diverse environments. But resistance to antibiotics can arise in multiple ways, not necessarily being the result of exposure to an antimicrobial compound. Structural similarity Cross-resistance can take place between compounds that are chemically similar, like antibiotics within similar and different classes. That said, structural similarity is a weak predictor of antibiotic resistance, and does not predict antibiotic resistance at all when aminoglycosides are disregarded in the comparison. Target similarity Cross-resistance will most commonly occur due to target similarity. This is possible when antimicrobial agents have the same target, initiate cell death in a similar manner or have a similar route of access. An example is cross-resistance between antibiotics and disinfectants. Exposure to certain disinfectants can lead to the increased expression of genes that encode efflux pumps that are able to maintain low levels of antibiotics. 
Thus, the same mechanism that is used to clear the disinfectant compound from the cell can also be used to clear antibiotics from the cell. Another example is cross-resistance between antibiotics and metals. As mentioned before, compounds do not have to be similar in structure in order to lead to cross-resistance. It can also occur when the same mechanism is used to remove the compound from the cell. In the bacterium Listeria monocytogenes, a multi-drug efflux transporter has been found that could export both metals and antibiotics. Experimental work has shown that exposure to zinc can lead to increased levels of bacterial resistance to antibiotics. Several other studies have reported cross-resistance to various types of metals and antibiotics. These worked through several mechanisms, like drug efflux systems and disulphide bond formation systems. The possible implication of this is that not only the presence of antibacterial compounds can lead to the development of resistance against antibiotics, but also environmental factors like exposure to heavy metals. Collateral sensitivity Collateral sensitivity occurs when developing multidrug resistance causes a bacterium to develop sensitivity to other drugs. Such developments can be exploited by researchers in an effort to combat the harms created by cross-resistance to commonly used antibiotics. Increased sensitivity to an antibiotic means that a lower concentration of antibiotic can be used to achieve adequate growth inhibition. Collateral sensitivity and antibiotic resistance exist as a trade-off, in which the benefits gained by antibiotic resistance are balanced by the risks introduced by collateral sensitivity. See also Drug resistance Pesticide resistance References Toxicology Pesticides Agricultural pests Evolutionary biology Antimicrobial resistance
Cross-resistance
Biology,Environmental_science
991
77,979,018
https://en.wikipedia.org/wiki/Pirepemat
Pirepemat (developmental code name IRL752 or IRL-752) is a drug which is under development for the prevention of falls in people with Parkinson's disease and Parkinson's disease dementia. It has been referred to as a "nootrope" (i.e., a nootropic or cognitive enhancer). Pharmacology Pirepemat shows affinity for several neurotransmitter receptors and transporters. These include the serotonin 5-HT7 receptor (Ki = 980nM), the sigma σ1 receptor (Ki = 1,200nM), the serotonin transporter (SERT) (Ki = 2,500nM), the α2C-adrenergic receptor (Ki = 3,800nM), the α2A-adrenergic receptor (Ki = 6,500nM), the serotonin 5-HT2C receptor (Ki = 6,600nM), the serotonin 5-HT2A receptor (Ki = 8,100nM), and the norepinephrine transporter (NET) (Ki = 8,100nM). It also shows affinity for the rat κ-opioid receptor (KOR) (Ki = 6,500nM) and has weak affinity for the α1-adrenergic receptor (Ki = 21,000nM). The drug was an antagonist or inhibitor at all assessed targets (which included some but not all of the preceding sites). Pirepemat has been described as a "cortical enhancer" and has been reported to region-specifically increase norepinephrine, dopamine, and acetylcholine levels in the cerebral cortex. Serotonin 5-HT7 receptor antagonism and α2-adrenergic receptor antagonism were hypothesized to underlie these effects. In animals, pirepemat has been found to reverse hypoactivity induced by the dopamine-depleting agent tetrabenazine while not increasing basal locomotor activity and having little or no effect on dextroamphetamine- and dizocilpine-induced locomotor hyperactivity. Clinical trials The drug was reported to improve motivation and reduce apathy in people with Parkinson's disease in a phase 2a clinical trial. As of September 2024, pirepemat is in phase 2 clinical trials for Parkinson's disease. A phase 3 trial is being planned. The drug was also under development for the treatment of "behavioral disorders" and attention deficit hyperactivity disorder (ADHD). However, no recent development for the former indication has been reported and development for ADHD was discontinued. In August 2020, pirepemat received an International Nonproprietary Name (INN) with a novel suffix reflecting its reputedly new and unique mechanism of action. Pirepemat is under development by Integrative Research Laboratories (IRLAB). See also Mesdopetam References External links Pirepemat (IRL752) - IRLAB 5-HT2A antagonists 5-HT2C antagonists 5-HT7 antagonists Alpha-2 blockers Enantiopure drugs Experimental drugs Fluoroarenes Methoxy compounds Nootropics Opioid modulators Phenethylamines Pro-motivational agents Pyrrolidines Serotonin–norepinephrine reuptake inhibitors Sigma receptor ligands
Pirepemat
Chemistry
738
78,594,739
https://en.wikipedia.org/wiki/Algebraic%20closure%20%28convex%20analysis%29
Algebraic closure of a subset A of a vector space X is the set of all points that are linearly accessible from A; it is commonly denoted by acl A. A point x ∈ X is said to be linearly accessible from a subset A ⊆ X if there exists some a ∈ A such that the half-open line segment [a, x) is contained in A. Necessarily, A ⊆ acl A ⊆ cl A (the last inclusion holds when X is equipped with any vector topology, Hausdorff or not). The set A is algebraically closed if A = acl A. The set acl A ∖ aint A, where aint A denotes the algebraic interior of A, is the algebraic boundary of A in X. Examples The set of rational numbers is algebraically closed but is not algebraically open. The algebraic closure need not itself be algebraically closed; that is, acl (acl A) may differ from acl A. However, acl (acl A) = acl A for every finite-dimensional convex set A. Moreover, a convex set is algebraically closed if and only if its complement is algebraically open. See also Algebraic interior References Bibliography Convex analysis Functional analysis Mathematical analysis Topology
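As an illustration (an added example, not part of the original text), take the open unit ball in Euclidean space: every point of the unit sphere is linearly accessible from the ball along a radius, so the algebraic closure coincides with the topological closure:

```latex
A = \{\, x \in \mathbb{R}^n : \|x\| < 1 \,\}, \qquad
[0, x) = \{\, t x : 0 \le t < 1 \,\} \subseteq A \ \text{for every } \|x\| = 1,
\quad\text{hence}\quad
\operatorname{acl} A = \{\, x \in \mathbb{R}^n : \|x\| \le 1 \,\} = \operatorname{cl} A .
```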
Algebraic closure (convex analysis)
Physics,Mathematics
177
758,833
https://en.wikipedia.org/wiki/Past
The past is the set of all events that occurred before a given point in time. The past is contrasted with and defined by the present and the future. The concept of the past is derived from the linear fashion in which human observers experience time, and is accessed through memory and recollection. In addition, human beings have recorded the past since the advent of written language. In English, the word past was one of the many variant forms and spellings of passed, the past participle of the Middle English verb passen (whence Modern English pass), among ypassed, ypassyd, i-passed, passyd, passid, pass'd, paste, etc. It developed into an adjective and preposition in the 14th century, and a noun (as in the past or a past, through ellipsis with the adjective past) in the 15th century. Grammar In English grammar, actions are classified according to one of the following twelve verb tenses: past (past, past continuous, past perfect, or past perfect continuous), present (present, present continuous, present perfect, or present perfect continuous), or future (future, future continuous, future perfect, or future perfect continuous). The past tense refers to actions that have already happened. For example, "she is walking" refers to a girl who is currently walking (present tense), while "she walked" refers to a girl who was walking before now (past tense). The past continuous tense refers to actions that continued for a period of time, as in the sentence "she was walking," which describes an action that was still happening in a prior window of time to which a speaker is presently referring. The past perfect tense is used to describe actions that were already completed by a specific point in the past. For example, "she had walked" describes an action that took place in the past and was also completed in the past. The past perfect continuous tense refers to an action that was happening up until a particular point in the past but was completed. It differs from the past perfect tense because the emphasis of past perfect continuous verbs is not on the action's completion, but rather on its having taken place actively over a period of time before another moment in the past. The verb tense used in the sentence "She had been walking in the park regularly before I met her" is past perfect continuous because it describes an action ("walking") that was actively happening before a time when something else in the past was happening (when "I met her"). Depending on its usage in a sentence, "past" can be described using a variety of terms. Synonyms for "past" as an adjective include "former," "bygone," "earlier," "preceding," and "previous." Synonyms for "past" as a noun include "history," "background," "life story," and "biography." Synonyms of "past" as a preposition include "in front of," "beyond," "by," and "in excess of." Other uses The word "past" can also be used to describe the offices of those who have previously served in an organization, group, or event, such as "past president" or "past champions." "Past" can also refer to something or someone being at or in a position that is further than a particular point. For instance, in the sentence, "I live on Fielding Road, just past the train station," the word "past" is used to describe a location (the speaker's residence) beyond a certain point (the train station). 
Alternatively, the sentence, "He ran past us at full speed," utilizes the concept of the past to describe the position of someone ("He") moving beyond the speaker. The "past" is also used to define a time that is a certain number of minutes before or after a particular hour, as in "We left the party at half-past twelve." People also use "past" to refer to being beyond a particular biological age or phase of being, as in, "The boy was past the age of needing a babysitter," or, "I'm past caring about that problem." The "past" is commonly used to refer to history, either generally or with regard to specific time periods or events, as in, "Past monarchs had absolute power to determine the law in contrast to many European Kings and Queens of today." Nineteenth-century British author Charles Dickens created one of the best-known fictional personifications of the "past" in his short book, "A Christmas Carol." In the story, the Ghost of Christmas Past is an apparition that shows the main character, a cold-hearted and tight-fisted man named Ebenezer Scrooge, vignettes from his childhood and early adult life to teach him that joy does not necessarily come from wealth. Fields of study The past is the object of study within such fields as time, life, history, nostalgia, archaeology, archaeoastronomy, chronology, geology, historical geology, historical linguistics, ontology, paleontology, paleobotany, paleoethnobotany, palaeogeography, paleoclimatology, etymology and physical cosmology. See also References Philosophy of time Time
Past
Physics,Mathematics
1,117
6,997,890
https://en.wikipedia.org/wiki/Polypyrimidine%20tract
The polypyrimidine tract is a region of pre-messenger RNA (mRNA) that promotes the assembly of the spliceosome, the protein complex specialized for carrying out RNA splicing during the process of post-transcriptional modification. The region is rich with pyrimidine nucleotides, especially uracil, and is usually 15–20 base pairs long, located about 5–40 base pairs before the 3' end of the intron to be spliced. A number of protein factors bind to or associate with the polypyrimidine tract, including the spliceosome component U2AF and the polypyrimidine tract-binding protein (PTB), which plays a regulatory role in alternative splicing. PTB's primary function is in exon silencing, by which a particular exon region normally spliced into the mature mRNA is instead left out, resulting in the expression of an isoform of the protein for which the mRNA codes. Because PTB is ubiquitously expressed in many higher eukaryotes, it is thought to suppress the inclusion of "weak" exons with poorly defined splice sites. However, PTB binding is not sufficient to suppress "robust" exons. The suppression or selection of exons is critical to the proper expression of tissue-specific isoforms. For example, smooth muscle and skeletal muscle express alternate isoforms distinguished by mutually exclusive exon selection in alpha-tropomyosin. References Gene expression Spliceosome
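To make the notion of a pyrimidine-rich region concrete, here is a toy Python sketch (our own illustration, not a validated splice-site predictor; the sequence is invented) that scans an RNA string for a short window with a high fraction of C and U:

```python
def pyrimidine_fraction(seq: str) -> float:
    """Fraction of pyrimidine bases (C and U) in an RNA string."""
    return sum(base in "CU" for base in seq) / len(seq)

def find_tract(pre_mrna: str, window: int = 15, threshold: float = 0.8):
    """Return (start, subsequence) of the first window whose pyrimidine
    content reaches the threshold, scanning 5' to 3'; None if absent."""
    for i in range(len(pre_mrna) - window + 1):
        segment = pre_mrna[i:i + window]
        if pyrimidine_fraction(segment) >= threshold:
            return i, segment
    return None

# Invented intronic fragment ending just before a 3' splice site (AG).
intron_3prime = "GAUCGGAAUCUUCUUUUCUCUCCUUUUCCACAG"
print(find_tract(intron_3prime))  # prints the first U/C-rich window found
```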
Polypyrimidine tract
Chemistry,Biology
314
47,074,767
https://en.wikipedia.org/wiki/Lifshitz%20theory%20of%20van%20der%20Waals%20force
In condensed matter physics and physical chemistry, the Lifshitz theory of van der Waals forces, sometimes called the macroscopic theory of van der Waals forces, is a method proposed by Evgeny Mikhailovich Lifshitz in 1954 for treating van der Waals forces between bodies which does not assume pairwise additivity of the individual intermolecular forces; that is to say, the theory takes into account the influence of neighboring molecules on the interaction between every pair of molecules located in the two bodies, rather than treating each pair independently. Need for a non-pairwise additive theory The van der Waals force between two molecules, in this context, is the sum of the attractive or repulsive forces between them; these forces are primarily electrostatic in nature, and in their simplest form might consist of a force between two charges, two dipoles, or between a charge and a dipole. Thus, the strength of the force may often depend on the net charge, electric dipole moment, or the electric polarizability α (see for example London force) of the molecules, with highly polarizable molecules contributing to stronger forces, and so on. The total force between two bodies, each consisting of many molecules, is in the van der Waals theory simply the sum of the intermolecular van der Waals forces, where pairwise additivity is assumed. That is to say, the forces are summed as though each pair of molecules interacts completely independently of their surroundings (See Van der Waals forces between Macroscopic Objects for an example of such a treatment). This assumption is usually correct for gases, but presents a problem for many condensed materials, as it is known that the molecular interactions may depend strongly on their environment and neighbors. For example, in a conductor, a point-like charge might be screened by the electrons in the conduction band, and the polarizability of a condensed material may be vastly different from that of an individual molecule. In order to correctly predict the van der Waals forces of condensed materials, a theory that takes into account their total electrostatic response is needed. General principle The problem of pairwise additivity is completely avoided in the Lifshitz theory, where the molecular structure is ignored and the bodies are treated as continuous media. The forces between the bodies are now derived in terms of their bulk properties, such as dielectric constant and refractive index, which already contain all the necessary information from the original molecular structure. The original Lifshitz 1955 paper proposed this method relying on quantum field theory principles, and is, in essence, a generalization of the Casimir effect, from two parallel, flat, ideally conducting surfaces, to two surfaces of any material. Later papers by Langbein, Ninham, Parsegian and Van Kampen showed that the essential equations could be derived using much simpler theoretical techniques, an example of which is presented here. Hamaker constant The Lifshitz theory can be expressed as an effective Hamaker constant in the van der Waals theory. Consider, for example, the interaction between an ion of charge Q and a nonpolar molecule with polarizability α₂ at distance r. In a medium with dielectric constant ε₃ (the subscript 3 conventionally denoting the intervening medium), the interaction energy between a charge and an electric dipole is given by w(r) = −Qu/(4πε₀ε₃r²), with the dipole moment of the polarizable molecule given by u = α₂E, where E is the strength of the electric field at distance r from the ion. 
According to Coulomb's law, $E = \frac{Q}{4\pi\varepsilon_0\varepsilon_3 r^2}$, so we may write the interaction energy as $w(r) = -\frac{Q^2\alpha}{2(4\pi\varepsilon_0\varepsilon_3)^2 r^4}$. Consider now how the interaction energy will change if the right-hand molecule is replaced with a medium of density $\rho_2$ of such molecules. According to the "classical" van der Waals theory, the total force will simply be the summation over individual molecules. Integrating over the volume of the medium, we might expect the total interaction energy with the charge, now at distance $D$ from the plane surface of the medium, to be $W(D) = -\frac{\pi\rho_2 Q^2\alpha}{2(4\pi\varepsilon_0\varepsilon_3)^2 D}$. But this result cannot be correct, since it is well known that a charge $Q$ in a medium of dielectric constant $\varepsilon_3$ at a distance $D$ from the plane surface of a second medium of dielectric constant $\varepsilon_2$ experiences a force as if there were an 'image' charge of strength $Q' = -Q\,\frac{\varepsilon_2-\varepsilon_3}{\varepsilon_2+\varepsilon_3}$ at distance $D$ on the other side of the boundary. The force between the real and image charges must then be $F(D) = -\frac{Q^2}{16\pi\varepsilon_0\varepsilon_3 D^2}\,\frac{\varepsilon_2-\varepsilon_3}{\varepsilon_2+\varepsilon_3}$, and the energy, therefore, $W(D) = -\frac{Q^2}{16\pi\varepsilon_0\varepsilon_3 D}\,\frac{\varepsilon_2-\varepsilon_3}{\varepsilon_2+\varepsilon_3}$. Equating the two expressions for the energy, we define a new effective polarizability that must obey $\alpha_2 = \frac{2\varepsilon_0\varepsilon_3}{\rho_2}\,\frac{\varepsilon_2-\varepsilon_3}{\varepsilon_2+\varepsilon_3}$. Similarly, replacing the real charge with a medium of density $\rho_1$ and polarizability $\alpha_1$ gives $\alpha_1 = \frac{2\varepsilon_0\varepsilon_3}{\rho_1}\,\frac{\varepsilon_1-\varepsilon_3}{\varepsilon_1+\varepsilon_3}$. Using these two relations, we may restate our theory in terms of an effective Hamaker constant. Specifically, using McLachlan's generalized theory of VDW forces, the Hamaker constant for an interaction potential of the form $w(r) = -C/r^6$ between two bodies at temperature $T$ is $A = \pi^2 C \rho_1\rho_2 = 6\pi^2 k T \rho_1\rho_2 {\sum_{n=0}^{\infty}}' \frac{\alpha_1(i\nu_n)\,\alpha_2(i\nu_n)}{\big(4\pi\varepsilon_0\,\varepsilon_3(i\nu_n)\big)^2}$ with $\nu_n = \frac{2\pi k T}{h}\,n$, where $k$ and $h$ are Boltzmann's and Planck's constants correspondingly, and the prime on the sum indicates that the $n=0$ term is taken with half weight. Inserting our relations for $\alpha_1(i\nu)$ and $\alpha_2(i\nu)$ and approximating the sum as an integral, ${\sum_{n\ge 1}} \to \frac{h}{2\pi k T}\int_{\nu_1}^{\infty} d\nu$, the effective Hamaker constant in the Lifshitz theory may be approximated as $A \approx \frac{3}{4} k T \left(\frac{\varepsilon_1-\varepsilon_3}{\varepsilon_1+\varepsilon_3}\right)\left(\frac{\varepsilon_2-\varepsilon_3}{\varepsilon_2+\varepsilon_3}\right) + \frac{3h}{4\pi}\int_{\nu_1}^{\infty}\left(\frac{\varepsilon_1(i\nu)-\varepsilon_3(i\nu)}{\varepsilon_1(i\nu)+\varepsilon_3(i\nu)}\right)\left(\frac{\varepsilon_2(i\nu)-\varepsilon_3(i\nu)}{\varepsilon_2(i\nu)+\varepsilon_3(i\nu)}\right) d\nu$. We note that the $\varepsilon(i\nu)$ are real functions of $\nu$, and are related to measurable properties of the medium; thus, the Hamaker constant in the Lifshitz theory can be expressed in terms of observable properties of the physical system. Experimental validation The macroscopic theory of van der Waals forces has many experimental validations. Among the most notable are Derjaguin (1960); Derjaguin, Abrikosova and Lifshitz (1956) and Israelachvili and Tabor (1973), who measured the balance of forces between macroscopic bodies of glass, or glass and mica; Haydon and Taylor (1968), who measured the forces across bilayers by measuring their contact angle; and lastly Shih and Parsegian (1975), who investigated van der Waals potentials between heavy alkali-metal atoms and gold surfaces using atomic-beam-deflection. References Physical chemistry Condensed matter physics
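As a concrete illustration of the approximate formula above (not part of the original papers), the Matsubara sum for the Hamaker constant can be evaluated numerically. The sketch below assumes a single-oscillator model for the dielectric response at imaginary frequencies and illustrative material constants for two hydrocarbon bodies interacting across water; all parameter values are assumptions chosen for illustration.

import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s

def eps_iv(nu, n_ref, nu_e):
    # Single-oscillator model for the (real-valued) dielectric
    # response at imaginary frequency i*nu.
    return 1.0 + (n_ref ** 2 - 1.0) / (1.0 + (nu / nu_e) ** 2)

def delta(ea, eb):
    return (ea - eb) / (ea + eb)

def hamaker(eps1_0, eps2_0, eps3_0, n1, n2, n3,
            T=300.0, nu_e=3.0e15, n_terms=2000):
    # A = (3/2) k T * sum' over Matsubara frequencies nu_n = 2*pi*n*k*T/h,
    # with the n = 0 term (static dielectric constants) taken at half weight.
    A = 0.75 * K_B * T * delta(eps1_0, eps3_0) * delta(eps2_0, eps3_0)
    for n in range(1, n_terms + 1):
        nu_n = 2.0 * math.pi * n * K_B * T / H
        e1 = eps_iv(nu_n, n1, nu_e)
        e2 = eps_iv(nu_n, n2, nu_e)
        e3 = eps_iv(nu_n, n3, nu_e)
        A += 1.5 * K_B * T * delta(e1, e3) * delta(e2, e3)
    return A

# Two hydrocarbon half-spaces across water (illustrative constants):
A = hamaker(eps1_0=2.0, eps2_0=2.0, eps3_0=80.0, n1=1.41, n2=1.41, n3=1.333)
print(f"A = {A:.2e} J")  # of order 1e-21 J, in the range reported for such systems

With these inputs the zero-frequency (entropic) term and the dispersion sum come out comparable in size, a known feature of interactions across water.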
Lifshitz theory of van der Waals force
Physics,Chemistry,Materials_science,Engineering
1,216
47,655,354
https://en.wikipedia.org/wiki/Double%20encoding
Double encoding is the act of encoding data twice in a row using the same encoding scheme. It is usually used as an attack technique to bypass authorization schemes or security filters that intercept user input. In double encoding attacks against security filters, characters of the payload that are treated as illegal by those filters are replaced with their double-encoded form. Double URI-encoding is a special type of double encoding in which data is URI-encoded twice in a row. It has been used to bypass authorization schemes and security filters against code injection, directory traversal, cross-site scripting (XSS) and SQL injection. Description In double encoding, data is encoded twice in a row using the same encoding scheme; that is, the double-encoded form of data X is Encode(Encode(X)), where Encode is an encoding function. Double encoding is usually used as an attack technique to bypass authorization schemes or security filters that intercept user input. In double encoding attacks against security filters, characters of the payload that are treated as illegal by those filters are replaced with their double-encoded form. Security filters might treat data X and its encoded form as illegal. However, it is still possible for Encode(Encode(X)), which is the double-encoded form of data X, not to be treated as illegal by security filters and hence pass through them; later on, however, the target system might use the double-decoded form of Encode(Encode(X)), which is X, something that the filters would have treated as illegal. Double URI-encoding Double URI-encoding, also referred to as double percent-encoding, is a special type of double encoding in which data is URI-encoded twice in a row. In other words, the double-URI-encoded form of data X is URI-encode(URI-encode(X)). For example, to calculate the double-URI-encoded form of <, first < is URI-encoded as %3C, which in turn is URI-encoded as %253C; that is, double-URI-encode(<) = URI-encode(URI-encode(<)) = URI-encode(%3C) = %253C. As another example, to calculate the double-URI-encoded form of ../, first ../ is URI-encoded as %2E%2E%2F, which in turn is URI-encoded as %252E%252E%252F; that is, double-URI-encode(../) = URI-encode(URI-encode(../)) = URI-encode(%2E%2E%2F) = %252E%252E%252F. Double URI-encoding is usually used as an attack technique against web applications and web browsers to bypass authorization schemes and security filters that intercept user input. For example, because . and its URI-encoded form %2E are used in some directory traversal attacks, they are usually treated as illegal by security filters. However, it is still possible for %252E, which is the double-URI-encoded form of ., not to be treated as illegal by security filters and hence pass through them; later on, when the target system is building the path related to the directory traversal attack, it might use the double-URI-decoded form of %252E, which is ., something that the filters would have treated as illegal. Double URI-encoding attacks have been used to bypass authorization schemes and security filters against code injection, directory traversal, XSS and SQL injection. Prevention Decoding some user input twice using the same decoding scheme, once before a security measure and once afterwards, may allow double encoding attacks to bypass that security measure. Thus, to prevent double encoding attacks, all decoding operations on user input should occur before authorization schemes and security filters that intercept user input.
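The transformations described above can be reproduced with a few lines of Python (a sketch, not from the original article; urllib.parse stands in for whatever codec the target system uses, and a custom helper encodes every byte on the first pass because standard URI encoders leave unreserved characters such as '.' alone):

from urllib.parse import quote, unquote

def pct_encode_all(s):
    # Percent-encode every byte of s, reproducing payloads such as
    # %2E%2E%2F for ../ in which even unreserved characters are encoded.
    return "".join(f"%{b:02X}" for b in s.encode("utf-8"))

payload = "../"
once = pct_encode_all(payload)  # '%2E%2E%2F'
twice = quote(once, safe="")    # '%252E%252E%252F' (only '%' needs re-encoding)

assert unquote(twice) == once     # one decode: still looks harmless to a filter
assert unquote(once) == payload   # a second decode exposes the raw payload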
Examples PHP In the PHP programming language, data items in $_GET and $_REQUEST are already URI-decoded, and thus programmers should avoid calling the urldecode function on them. Calling the urldecode function on data that has been read from $_GET or $_REQUEST causes the data to be URI-decoded once more than it should be and hence may open the possibility of double URI-encoding attacks. Directory traversal In the following PHP program, the value of $_GET["file"] is used to build the path of the file to be sent to the user. This opens the possibility for directory traversal attacks that incorporate their payload into the HTTP GET parameter file. As a security filter against directory traversal attacks, this program searches the value it reads from $_GET["file"] for directory traversal sequences and exits if it finds one. However, after this filter, the program URI-decodes the data that it has read from $_GET["file"], which makes it vulnerable to double URI-encoding attacks.

<?php
/* Note that $_GET is already URI-decoded */
$path = $_GET["file"];

/* Security filter */
/* Exit if user input contains a directory traversal sequence */
if (strstr($path, "../") or strstr($path, "..\\")) {
    exit("Directory traversal attempt detected.");
}

/* URI-decode user input once again */
$path = urldecode($path);

/* Build the file path from user input and send the file contents */
echo htmlentities(file_get_contents("uploads/" . $path));

This filter prevents payloads such as ../../../../etc/passwd and its URI-encoded form %2E%2E%2F%2E%2E%2F%2E%2E%2F%2E%2E%2Fetc%2Fpasswd. However, %252E%252E%252F%252E%252E%252F%252E%252E%252F%252E%252E%252Fetc%252Fpasswd, which is the double-URI-encoded form of ../../../../etc/passwd, will bypass this filter. When the double-URI-encoded payload %252E%252E%252F%252E%252E%252F%252E%252E%252F%252E%252E%252Fetc%252Fpasswd is used, the value of $_GET["file"] will be %2E%2E%2F%2E%2E%2F%2E%2E%2F%2E%2E%2Fetc%2Fpasswd, which doesn't contain any directory traversal sequence and thus passes through the filter, and will then be given to the urldecode function, which returns ../../../../etc/passwd, resulting in a successful attack. XSS In the following PHP program, the value of $_GET["name"] is used to build a message to be shown to the user. This opens the possibility for XSS attacks that incorporate their payload into the HTTP GET parameter name. As a security filter against XSS attacks, this program sanitizes the value it reads from $_GET["name"] via the htmlentities function. However, after this filter, the program URI-decodes the data that it has read from $_GET["name"], which makes it vulnerable to double URI-encoding attacks.

<?php
/* Note that $_GET is already URI-decoded */
$name = $_GET["name"];

/* Security filter */
/* Sanitize user input via htmlentities */
$name = htmlentities($name);

/* URI-decode user input once again */
$name = urldecode($name);

/* Build the message to be shown using user input */
echo "Hello " . $name;

This filter prevents payloads such as <script>alert(1)</script> and its URI-encoded form %3Cscript%3Ealert%281%29%3C%2Fscript%3E. However, %253Cscript%253Ealert%25281%2529%253C%252Fscript%253E, which is the double-URI-encoded form of <script>alert(1)</script>, will bypass this filter.
When the double-URI-encoded payload %253Cscript%253Ealert%25281%2529%253C%252Fscript%253E is used, the value of $_GET["name"] will be %3Cscript%3Ealert%281%29%3C%2Fscript%3E, which doesn't contain any illegal character and thus passes through the htmlentities function without any change, and will then be given to the urldecode function, which returns <script>alert(1)</script>, resulting in a successful attack. Sources References External links OWASP entry for double encoding attacks CAPEC entry for double encoding attacks CWE entry for the weakness exploited by double encoding attacks Web security exploits
Double encoding
Technology
1,994
1,158,040
https://en.wikipedia.org/wiki/Pingala
Acharya Pingala (c. 3rd–2nd century BCE) was an ancient Indian poet and mathematician, and the author of the Chandaḥśāstra, also called the Pingala-sutras, the earliest known treatise on Sanskrit prosody. The Chandaḥśāstra is a work of eight chapters in the late Sūtra style, not fully comprehensible without a commentary. It has been dated to the last few centuries BCE. In the 10th century CE, Halayudha wrote a commentary elaborating on the Chandaḥśāstra. According to some historians Maharshi Pingala was the brother of Pāṇini, the famous Sanskrit grammarian, considered the first descriptive linguist. Another school of thought identifies him as Patanjali, the 2nd century CE scholar who authored the Mahabhashya. Combinatorics The Chandaḥśāstra presents a formula to generate systematic enumerations of metres, of all possible combinations of light (laghu) and heavy (guru) syllables, for a word of n syllables, using a recursive formula that results in a partially ordered binary representation. Pingala is credited with being the first to express the combinatorics of Sanskrit metre, e.g. with the following procedure (rendered as a code sketch after this entry):
Create a syllable list x comprising one light (L) and one heavy (G) syllable.
Repeat till list x contains only words of the desired length n:
Replicate list x as lists a and b.
Append syllable L to each element of list a.
Append syllable G to each element of list b.
Append list b to list a and rename the result as list x.
Because of this, Pingala is sometimes also credited with the first use of zero, as he used the Sanskrit word śūnya to explicitly refer to the number. Pingala's binary representation increases towards the right, and not to the left as modern binary numbers usually do. In Pingala's system, the numbers start from number one, and not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values. Pingala's work also includes material related to the Fibonacci numbers, called mātrāmeru. Editions A. Weber, Indische Studien 8, Leipzig, 1863. Janakinath Kabyatittha & brothers, ChhandaSutra-Pingala, Calcutta, 1931. Nirnayasagar Press, Chand Shastra, Bombay, 1938 Notes See also Chandas Sanskrit prosody Indian mathematics Indian mathematicians History of the binomial theorem List of Indian mathematicians References Amulya Kumar Bag, 'Binomial theorem in ancient India', Indian J. Hist. Sci. 1 (1966), 68–74. George Gheverghese Joseph (2000). The Crest of the Peacock, p. 254, 355. Princeton University Press. Klaus Mylius, Geschichte der altindischen Literatur, Wiesbaden (1983). External links Math for Poets and Drummers, Rachel W. Hall, Saint Joseph's University, 2005. Mathematics of Poetry, Rachel W. Hall Fibonacci numbers Ancient Indian mathematicians Ancient Sanskrit grammarians Indian Sanskrit scholars 2nd-century BC mathematicians
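The doubling procedure above translates directly into a short Python sketch (an illustration, not from the original treatise; L and G stand for laghu and guru), which also shows the place-value rule by which each pattern's position is recovered:

def prastara(n):
    # Enumerate all n-syllable patterns of light (L) and heavy (G)
    # syllables by the doubling procedure described above.
    x = ["L", "G"]  # one light and one heavy syllable
    while len(x[0]) < n:
        a = [w + "L" for w in x]  # replicate, appending L
        b = [w + "G" for w in x]  # replicate, appending G
        x = a + b                 # append list b to list a
    return x

def row_value(pattern):
    # Read the pattern as a binary numeral that increases to the right
    # (L = 0, G = 1) and add one to the sum of place values.
    return 1 + sum(2 ** i for i, s in enumerate(pattern) if s == "G")

for p in prastara(3):
    print(p, row_value(p))  # LLL 1, GLL 2, LGL 3, ..., GGG 8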
Pingala
Mathematics
642
5,347,069
https://en.wikipedia.org/wiki/Posaconazole
Posaconazole, sold under the brand name Noxafil among others, is a triazole antifungal medication. It was approved for medical use in the European Union in October 2005, and in the United States in September 2006. It is available as a generic medication. Medical uses Posaconazole is used to treat invasive Aspergillus and Candida infections. It is also used for the treatment of oropharyngeal candidiasis (OPC), including OPC refractory to itraconazole and/or fluconazole therapy. It is also used to treat invasive infections by Candida, Mucor, and Aspergillus species in severely immunocompromised patients. Clinical evidence for its utility in treatment of invasive disease caused by Fusarium species (fusariosis) is limited. It appears to be helpful in a mouse model of naegleriasis. Pharmacology Pharmacodynamics Posaconazole works by disrupting the close packing of the acyl chains of phospholipids, impairing the functions of certain membrane-bound enzyme systems such as ATPase and enzymes of the electron transport system, thus inhibiting growth of the fungi. It does this by selectively blocking the synthesis of ergosterol through inhibition of the enzyme lanosterol 14α-demethylase, leading to accumulation of methylated sterol precursors. Posaconazole is significantly more potent at inhibiting 14-alpha demethylase than itraconazole. Microbiology Posaconazole is active against the following microorganisms: Candida spp. Aspergillus spp. Zygomycetes spp. Pharmacokinetics Posaconazole is absorbed within three to five hours. It is predominantly eliminated through the liver, and has a half-life of about 35 hours (see the decay sketch below). Oral administration of posaconazole taken with a high-fat meal exceeds 90% bioavailability and increases the concentration by four times compared to the fasting state. References External links 1,2,4-Triazol-3-ones 27-Hydroxylase inhibitors CYP3A4 inhibitors Fluoroarenes Lanosterol 14α-demethylase inhibitors Drugs developed by Merck & Co. Orphan drugs Piperazines Phenylethanolamine ethers Drugs developed by Schering-Plough Secondary alcohols Tetrahydrofurans Triazole antifungals Ureas
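Since elimination roughly follows the 35-hour half-life quoted above, the fraction of a dose remaining after a given time can be estimated with a one-line first-order decay model (an illustrative simplification, not a pharmacokinetic model of record and not dosing guidance):

def fraction_remaining(t_hours, half_life_hours=35.0):
    # First-order elimination: each half-life halves the amount present.
    return 0.5 ** (t_hours / half_life_hours)

print(round(fraction_remaining(24), 2))   # ~0.62 after one day
print(round(fraction_remaining(168), 2))  # ~0.04 after one week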
Posaconazole
Chemistry,Biology
526
21,398,025
https://en.wikipedia.org/wiki/Cloud%20testing
Cloud testing is a form of software testing in which web applications use cloud computing environments (a "cloud") to simulate real-world user traffic. Steps Companies simulate real-world Web users by using cloud testing services that are provided by cloud service vendors such as Advaltis, Compuware, HP, Keynote Systems, Neotys, RadView and SOASTA. Once user scenarios are developed and the test is designed, these service providers leverage cloud servers (provided by cloud platform vendors such as Amazon.com, Google, Rackspace, Microsoft, etc.) to generate web traffic that originates from around the world (a minimal single-machine sketch of this traffic-generation step is shown after this entry). Once the test is complete, the cloud service providers deliver results and analytics back to corporate IT professionals through real-time dashboards for a complete analysis of how their applications and the internet will perform during peak volumes. Applications Cloud testing is often seen as covering only performance or load tests; however, as discussed earlier, it covers many other types of testing. Cloud computing itself is often referred to as the marriage of software as a service (SaaS) and utility computing. In regard to test execution, the software offered as a service may be a transaction generator and the cloud provider's infrastructure software, or may just be the latter. Distributed systems and parallel systems mainly use this approach for testing, because of their inherently complex nature. D-Cloud is an example of such a software testing environment. Tools Leading cloud computing service providers include, among others, Amazon, Microsoft, Google, RadView, Skytap, HP and SOASTA. Benefits The difficulty and cost of simulating web traffic for software testing purposes have been an inhibitor to overall web reliability. The low cost and accessibility of the cloud's extremely large computing resources provide the ability to replicate real-world usage of these systems by geographically distributed users, executing wide varieties of user scenarios, at scales previously unattainable in traditional testing environments. Minimal start-up time along with quality assurance can be achieved by cloud testing. Following are some of the key benefits: Reduction in capital expenditure Highly scalable References Cloud computing Software testing
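At its core, the traffic generation that the vendors above sell amounts to many concurrent HTTP requests plus latency bookkeeping. The sketch below is a minimal single-machine illustration in Python; the target URL is a placeholder, and a real cloud test would fan such workers out across geographically distributed cloud servers:

import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET = "https://example.com/"  # placeholder system under test

def one_request(_):
    start = time.monotonic()
    try:
        with urlopen(TARGET, timeout=10) as resp:
            resp.read()
            return time.monotonic() - start, resp.status
    except Exception:
        return time.monotonic() - start, None

def load_test(concurrency=20, total=200):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(total)))
    latencies = sorted(t for t, _ in results)
    errors = sum(1 for _, status in results if status != 200)
    print(f"p50={latencies[len(latencies) // 2]:.3f}s "
          f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s "
          f"errors={errors}")

load_test()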
Cloud testing
Engineering
422
61,747,300
https://en.wikipedia.org/wiki/2I/Borisov
2I/Borisov, originally designated C/2019 Q4 (Borisov), is the first observed rogue comet and the second observed interstellar interloper after ʻOumuamua. It was discovered by the Crimean amateur astronomer and telescope maker Gennadiy Borisov on 29 August 2019 UTC (30 August local time). 2I/Borisov has a heliocentric orbital eccentricity of 3.36 and is not bound to the Sun. The comet passed through the ecliptic of the Solar System at the end of October 2019, and made its closest approach to the Sun, at just over 2 au, on 8 December 2019. The comet passed closest to Earth on 28 December 2019. In November 2019, astronomers from Yale University said that the comet's tail was 14 times the size of Earth, and stated, "It's humbling to realize how small Earth is next to this visitor from another solar system." Nomenclature The comet is formally called "2I/Borisov" by the International Astronomical Union (IAU), with "2I" or "2I/2019 Q4" being its designation and "Borisov" being its name, but is sometimes referred to as "Comet Borisov", especially in the popular press. As the second observed interstellar interloper after 1I/ʻOumuamua, it was given the "2I" designation, where "I" stands for interstellar. The name Borisov follows the tradition of naming comets after their discoverers. Before final designation as 2I/Borisov, the object was referred to by other names: Early orbit solutions suggested that the comet could be a near-Earth object and it was thus listed on the IAU Minor Planet Center's (MPC) Near-Earth Object Confirmation Page (NEOCP) as gb00234. Further refinements after thirteen days of observation made clear the object was a hyperbolic comet, and it was given the designation C/2019 Q4 (Borisov) by the Minor Planet Center on 11 September 2019. A number of other astronomers, including Davide Farnocchia, Bill Gray, and David Tholen, concluded that the comet was interstellar. On 24 September 2019 the IAU announced that the Working Group for Small Body Nomenclature kept the name Borisov, giving the comet the interstellar designation of 2I/Borisov and formally announcing that the comet was indeed interstellar. Characteristics Unlike ʻOumuamua, which had an asteroidal appearance, 2I/Borisov's nucleus was surrounded by a coma, a cloud of dust and gas. Size and shape Early estimates of the diameter of 2I/Borisov's nucleus have ranged widely. 2I/Borisov, unlike Solar System comets, noticeably shrank during its Solar System flyby, losing at least 0.4% of its mass before perihelion. Also, the amplitude of its non-gravitational acceleration places an upper limit of 0.4 km on the nucleus size, consistent with a previous Hubble Space Telescope upper limit of 0.5 km. The comet did not come much closer to Earth than 300 million km, which prevented using radar to directly determine its size and shape. This could be done using the occultation of a star by 2I/Borisov, but an occultation would be difficult to predict, requiring a precise determination of its orbit, and the detection would necessitate a network of small telescopes. Rotation A study using observations from Hubble could not find a variation in the light curve. According to this study, the rotational period must be larger than 10 hours. A study with CSA's NEOSSat found a period of 13.2 ± 0.2 days, which is unlikely to be the nuclear spin. Monte Carlo simulations based on the available orbit determinations suggest that the equatorial obliquity of 2I/Borisov could be about 59 degrees or 90 degrees, with the latter favored for the latest orbit determination.
Chemical makeup and nucleus structure David Jewitt and Jane Luu estimate from the size of its coma that the comet is producing 2 kg/s of dust and is losing 60 kg/s of water. They extrapolate that it became active in June 2019, when it was between 4 and 5 au from the Sun. A search of image archives found precovery observations of 2I/Borisov as early as 13 December 2018, but not on 21 November 2018, indicating it became active between these dates. 2I/Borisov's composition appears uncommon yet not unseen in Solar System comets, being relatively depleted in water and diatomic carbon (C2), but enriched in carbon monoxide and amines (R-NH2). The molar ratio of carbon monoxide to water in 2I/Borisov's tail is 35–105%, resembling the unusual blue-tailed comet C/2016 R2 (PANSTARRS), in contrast to the average ratio of 4% for Solar System comets. 2I/Borisov has also produced a minor amount of neutral nickel emission attributed to an unknown volatile compound of nickel. The nickel to iron abundance ratio is similar to Solar System comets. Trajectory As seen from Earth, the comet was in the northern sky from September until mid-November. It crossed the ecliptic plane on 26 October near the star Regulus, and the celestial equator on 13 November 2019, entering the southern sky. On 8 December 2019, the comet reached perihelion (closest approach to the Sun) and was near the inner edge of the asteroid belt. In late December, it made its closest approach to Earth, 1.9 au, and had a solar elongation of about 80°. Due to its 44° orbital inclination, 2I/Borisov did not make any notable close approaches to the planets. 2I/Borisov entered the Solar System from the direction of Cassiopeia near the border with Perseus. This direction indicates that it originates from the galactic plane, rather than from the galactic halo. It will leave the Solar System in the direction of Telescopium. In interstellar space, 2I/Borisov takes roughly 9,000 years to travel a light-year relative to the Sun. 2I/Borisov's trajectory is extremely hyperbolic, having an orbital eccentricity of 3.36. This is much higher than the 300+ known weakly hyperbolic comets, with heliocentric eccentricities just over 1, and even ʻOumuamua with an eccentricity of 1.2. 2I/Borisov also has a hyperbolic excess velocity (v∞) of about 32 km/s, much higher than what could be explained by perturbations, which could produce velocities, when approaching an infinite distance from the Sun, of less than a few km/s. These two parameters are important indicators of 2I/Borisov's interstellar origin. For comparison, the Voyager 1 spacecraft, which is leaving the Solar System, is traveling at about 17 km/s. 2I/Borisov has a much larger eccentricity than ʻOumuamua due to its higher excess velocity and its significantly higher perihelion distance. At this larger distance, the Sun's gravity is less able to alter its path as it passes through the Solar System. Observation Discovery The comet was discovered on 30 August 2019 by amateur astronomer Gennadiy Borisov at his personal observatory MARGO in Nauchnyy, Crimea, using a 0.65-meter telescope he designed and built himself. The discovery has been compared to the discovery of Pluto by Clyde Tombaugh. Tombaugh was also an amateur astronomer who was building his own telescopes, although he discovered Pluto using Lowell Observatory's astrograph. At discovery, the comet was inbound toward the Sun and had a solar elongation of 38°. 2I/Borisov's interstellar origin required a couple of weeks to confirm.
Early orbital solutions based on initial observations included the possibility that the comet could be a near-Earth object 1.4 AU from the Sun in an elliptical orbit with an orbital period of less than 1 year. Later, using 151 observations over 12 days, NASA Jet Propulsion Laboratory's Scout gave an eccentricity range of 2.9–4.5. But with an observation arc of only 12 days, there was still some doubt that it was interstellar, because the observations were at a low solar elongation, which could introduce biases in the data such as differential refraction. Using large non-gravitational forces on the highly eccentric orbit, a solution could be generated with an eccentricity of about 1, a correspondingly reduced Earth minimum orbit intersection distance (MOID), and a perihelion at 0.90 AU around 30 December 2019. However, based on available observations, the orbit could only be parabolic if non-gravitational forces (thrust due to outgassing) affected its orbit more than for any previous comet. Eventually, with more observations, the orbit converged to the hyperbolic solution that indicated an interstellar origin, and non-gravitational forces could not explain the motion. Observation The last observations were in July 2020, seven months after perihelion. Observation of 2I/Borisov was aided by the fact that the comet was detected while inbound towards the Solar System. ʻOumuamua had been discovered as it was leaving the system, and thus could only be observed for 80 days before it was out of range. Because its closest approach occurred near traditional year-end holidays, and because of the capability for extended observations, some astronomers have called 2I/Borisov a "Christmas comet". Observations using the Hubble Space Telescope began on 12 October, when the comet moved far enough from the Sun to be safely observed by the telescope. Hubble is less affected by the confounding effects of the coma than ground-based telescopes, which allows it to study the rotational light curve of 2I/Borisov's nucleus. This should facilitate an estimate of its size and shape. Comet chemistry A preliminary (low-resolution) visible spectrum of 2I/Borisov was similar to typical Oort Cloud comets. Its color indexes also resemble the Solar System's long-period comets. Emissions indicated the presence of cyanide (formula CN), which is typically the first species detected in Solar System comets, including comet Halley. This was the first detection of gas emissions from an interstellar object. The non-detection of diatomic carbon had also been reported in October 2019, with the ratio of C2 to CN being less than either 0.095 or 0.3. Diatomic carbon was positively detected in November 2019. This resembles a carbon-chain depleted group of comets, which are either Jupiter-family comets or rare blue-colored carbon monoxide comets exemplified by C/2016 R2. By the end of November 2019, C2 production had dramatically increased, and the C2 to CN ratio reached 0.61, along with the appearance of bright amine (NH2) bands. Atomic oxygen has also been detected; from this, observers estimated an outgassing of water at a rate similar to Solar System comets. Initially, neither water nor OH lines were directly detected in September 2019. The first unambiguous detection of OH lines was made on 1 November 2019, and OH production peaked in early December 2019. Suspected nucleus fragmentation The comet did come within about 2 AU of the Sun, a distance at which many small comets have been found to disintegrate.
The probability that a comet disintegrates strongly depends on the size of its nucleus; Guzik et al. estimated a probability of 10% that this would happen to 2I/Borisov. Jewitt and Luu compared 2I/Borisov to C/2019 J2 (Palomar), another comet of similar size that disintegrated in May 2019 at a distance of 1.9 AU from the Sun. In the event that the nucleus disintegrated, as is sometimes seen with small comets, Hubble could be used to study the evolution of the disintegration process. A severe outburst in February–March 2020 led to suspected "ongoing nucleus fragmentation" of the comet by 12 March. Indeed, images from the Hubble Space Telescope taken on 30 March 2020 show a non-stellar core, indicating that comet 2I/Borisov had ejected sunward a large fragment. The ejection is estimated to have begun around 7 March, and may have occurred during one of the outbursts near that time. The ejected fragment appeared to have vanished by 6 April 2020. A follow-up study, reported on 6 April 2020, observed only a single object and noted that the fragment component had disappeared. Later analysis of the event showed the ejected dust and fragments had a combined mass of about 0.1% of the total mass of the nucleus, making the event a large outburst rather than a fragmentation. Exploration The high hyperbolic excess velocity of 2I/Borisov, about 32 km/s, makes it hard for a spacecraft to reach the comet with existing technology: according to a team of the Initiative for Interstellar Studies, a 202 kg (445 lb) spacecraft could theoretically have been sent in July 2018 to intercept 2I/Borisov using a Falcon Heavy-class launcher, or 765 kg (1,687 lb) on a Space Launch System (SLS)-class booster, but only if the object had been discovered much earlier than it was, so as to meet the optimal launch date. Launches after the actual discovery date would eliminate the possibility of using Falcon Heavy-class rockets, requiring Oberth maneuvers near Jupiter and near the Sun and a larger launch vehicle. Even an SLS-class launcher would only have been able to deliver a payload (such as a CubeSat) into a trajectory that would intercept 2I/Borisov in 2045, at a very high relative speed. According to congressional testimony, NASA may need at least five years of preparation to launch such an intercepting mission. See also ʻOumuamua – the first interstellar interloper discovered Blue (carbon monoxide rich) comets Comet Morehouse Comet Humason C/2016 R2 C/1980 E1 (Bowell) – the most eccentric comet known in the Solar System, with an eccentricity of 1.057 List of Solar System objects by greatest aphelion Notes References External links Image of 2I/Borisov – comet is "14 times the size of Earth" (Yale University; November 2019) Image of 2I/Borisov from the Gemini Observatory, Hawaii Pictures of 2I/Borisov from Paris Observatory (LESIA) Discovery animation FAQ at ProjectPluto (Bill Gray) Extrasolar Planetary Encyclopedia – 2I/Borisov Hyperbolic orbit simulation – 2I/Borisov (Tony Dunn) Minor Planet Center MPEC 2019-T116 : COMET 2I/Borisov Magnitude plot by Seiichi Yoshida @ aerith.net (with predicted brightness) Interactive 3D gravity simulation of Borisov's Solar System flyby Discoveries by amateur astronomers Discoveries by the Crimean Astrophysical Observatory Hyperbolic comets Interstellar objects Milky Way
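The quoted excess velocity can be checked from the orbital elements given in the Trajectory section: for a hyperbolic orbit, v∞ = sqrt(GM☉/|a|) with a = q/(1 − e). The short Python sketch below assumes a rounded perihelion distance of 2 au ("just over 2 au" in the text above):

import math

GM_SUN = 1.32712440018e20  # heliocentric gravitational parameter, m^3/s^2
AU = 1.495978707e11        # astronomical unit, m

e = 3.36      # heliocentric eccentricity quoted above
q = 2.0 * AU  # assumed (rounded) perihelion distance

a = q / (1.0 - e)               # semi-major axis; negative for a hyperbola
v_inf = math.sqrt(GM_SUN / -a)  # hyperbolic excess velocity
print(f"v_inf = {v_inf / 1000:.1f} km/s")  # ~32 km/s, matching the text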
2I/Borisov
Astronomy
3,106
264,752
https://en.wikipedia.org/wiki/Baryonic%20dark%20matter
In astronomy and cosmology, baryonic dark matter is hypothetical dark matter composed of baryons. Only a small proportion of the dark matter in the universe is likely to be baryonic. Characteristics As "dark matter", baryonic dark matter is undetectable by its emitted radiation, but its presence can be inferred from gravitational effects on visible matter. This form of dark matter is composed of "baryons", heavy subatomic particles such as protons and neutrons and combinations of these, including non-emitting ordinary atoms. Presence Baryonic dark matter may occur in non-luminous gas or in Massive Astrophysical Compact Halo Objects (MACHOs) – condensed objects such as black holes, neutron stars, white dwarfs, very faint stars, or non-luminous objects like planets and brown dwarfs. Estimates of quantity The total amount of baryonic dark matter can be inferred from models of Big Bang nucleosynthesis and from observations of the cosmic microwave background. Both indicate that the amount of baryonic dark matter is much smaller than the total amount of dark matter. Big Bang nucleosynthesis From the perspective of Big Bang nucleosynthesis, a larger amount of ordinary (baryonic) matter implies a denser early universe, more efficient conversion of matter to helium-4, and less unburned deuterium remaining. If all of the dark matter in the universe were baryonic, then there would be much less deuterium in the universe than is observed. This could be resolved if more deuterium were somehow generated, but large efforts in the 1970s failed to identify plausible mechanisms for this to occur. For instance, MACHOs, which include, for example, brown dwarfs (bodies of hydrogen and helium with masses less than about 0.08 solar masses), never begin nuclear fusion of hydrogen, but they do burn deuterium. Other possibilities that were examined include "Jupiters", which are similar to brown dwarfs but have masses too low even to burn deuterium and so do not burn anything, and white dwarfs. See also Particle chauvinism References Dark matter Baryons
Baryonic dark matter
Physics,Astronomy
430
254,062
https://en.wikipedia.org/wiki/Mast%20cell
A mast cell (also known as a mastocyte or a labrocyte) is a resident cell of connective tissue that contains many granules rich in histamine and heparin. Specifically, it is a type of granulocyte derived from the myeloid stem cell that is a part of the immune and neuroimmune systems. Mast cells were discovered by Friedrich von Recklinghausen and later rediscovered by Paul Ehrlich in 1877. Although best known for their role in allergy and anaphylaxis, mast cells play an important protective role as well, being intimately involved in wound healing, angiogenesis, immune tolerance, defense against pathogens, and vascular permeability in brain tumors. The mast cell is very similar in both appearance and function to the basophil, another type of white blood cell. Although mast cells were once thought to be tissue-resident basophils, it has been shown that the two cells develop from different hematopoietic lineages and thus cannot be the same cells. Structure Mast cells are very similar to basophil granulocytes (a class of white blood cells) in blood, in the sense that both are granulated cells that contain histamine and heparin, an anticoagulant. Their nuclei differ in that the basophil nucleus is lobated while the mast cell nucleus is round. The Fc region of immunoglobulin E (IgE) becomes bound to mast cells and basophils, and when IgE's paratopes bind to an antigen, it causes the cells to release histamine and other inflammatory mediators. These similarities have led many to speculate that mast cells are basophils that have "homed in" on tissues. Furthermore, they share a common precursor in bone marrow expressing the CD34 molecule. Basophils leave the bone marrow already mature, whereas the mast cell circulates in an immature form, only maturing once in a tissue site. The site an immature mast cell settles in probably determines its precise characteristics. The first in vitro differentiation and growth of a pure population of mouse mast cells was carried out using conditioned medium derived from concanavalin A-stimulated splenocytes. Later, it was discovered that T cell-derived interleukin 3 was the component present in the conditioned media that was required for mast cell differentiation and growth. Mast cells in rodents are classically divided into two subtypes: connective tissue-type mast cells and mucosal mast cells. The activities of the latter are dependent on T-cells. Mast cells are present in most tissues characteristically surrounding blood vessels, nerves and lymphatic vessels, and are especially prominent near the boundaries between the outside world and the internal milieu, such as the skin, mucosa of the lungs, and digestive tract, as well as the mouth, conjunctiva, and nose. Function Mast cells play a key role in the inflammatory process. When activated, a mast cell can either selectively release (piecemeal degranulation) or rapidly release (anaphylactic degranulation) "mediators", or compounds that induce inflammation, from storage granules into the local microenvironment. Mast cells can be stimulated to degranulate by allergens through cross-linking with immunoglobulin E receptors (e.g., FcεRI), physical injury through pattern recognition receptors for damage-associated molecular patterns (DAMPs), microbial pathogens through pattern recognition receptors for pathogen-associated molecular patterns (PAMPs), and various compounds through their associated G-protein coupled receptors (e.g., morphine through opioid receptors) or ligand-gated ion channels. 
Complement proteins can activate membrane receptors on mast cells to exert various functions as well. Mast cells express a high-affinity receptor (FcεRI) for the Fc region of IgE, the least abundant of the antibody classes. This receptor is of such high affinity that binding of IgE molecules is in essence irreversible. As a result, mast cells are coated with IgE, which is produced by plasma cells (the antibody-producing cells of the immune system). IgE antibodies are typically specific to one particular antigen. In allergic reactions, mast cells remain inactive until an allergen binds to IgE already coated upon the cell. Other membrane activation events can either prime mast cells for subsequent degranulation or act in synergy with FcεRI signal transduction. In general, allergens are proteins or polysaccharides. The allergen binds to the antigen-binding sites, which are situated on the variable regions of the IgE molecules bound to the mast cell surface. It appears that binding of two or more IgE molecules (cross-linking) is required to activate the mast cell. The clustering of the intracellular domains of the cell-bound Fc receptors, which are associated with the cross-linked IgE molecules, causes a complex sequence of reactions inside the mast cell that lead to its activation. Although this reaction is most well understood in terms of allergy, it appears to have evolved as a defense system against parasites and bacteria. Mast cells (MCs) have been shown to release their nuclear DNA and subsequently form mast cell extracellular traps (MCETs), comparable to neutrophil extracellular traps, which are able to entrap and kill various microbes. Mast cell mediators A unique, stimulus-specific set of mast cell mediators is released through degranulation following the activation of cell surface receptors on mast cells. Examples of mediators that are released into the extracellular environment during mast cell degranulation include:
serine proteases, such as tryptase and chymase
histamine (2–5 picograms per mast cell)
serotonin
proteoglycans, mainly heparin (active as anticoagulant) and some chondroitin sulfate proteoglycans
adenosine triphosphate (ATP)
lysosomal enzymes: β-hexosaminidase, β-glucuronidase, arylsulfatases
newly formed lipid mediators (eicosanoids): thromboxane, prostaglandin D2, leukotriene C4, platelet-activating factor
cytokines: TNF-α, basic fibroblast growth factor, interleukin-4, stem cell factor
chemokines, such as eosinophil chemotactic factor
reactive oxygen species
Histamine dilates post-capillary venules, activates the endothelium, and increases blood vessel permeability. This leads to local edema (swelling), warmth, redness, and the attraction of other inflammatory cells to the site of release. It also depolarizes nerve endings (leading to itching or pain). Cutaneous signs of histamine release are the "flare and wheal" reaction. The bump and redness immediately following a mosquito bite are a good example of this reaction, which occurs seconds after challenge of the mast cell by an allergen. The other physiologic activities of mast cells are much less understood.
Several lines of evidence suggest that mast cells may have a fairly fundamental role in innate immunity: they are capable of elaborating a vast array of important cytokines and other inflammatory mediators such as TNF-α; they express multiple "pattern recognition receptors" thought to be involved in recognizing broad classes of pathogens; and mice without mast cells seem to be much more susceptible to a variety of infections. Mast cell granules carry a variety of bioactive chemicals. These granules have been found to be transferred to adjacent cells of the immune system and neurons in a process of transgranulation via mast cell pseudopodia. In the nervous system Unlike other hematopoietic cells of the immune system, mast cells naturally occur in the human brain, where they interact with the neuroimmune system. In the brain, mast cells are located in a number of structures that mediate visceral sensory (e.g. pain) or neuroendocrine functions or that are located along the blood–cerebrospinal fluid barrier, including the pituitary stalk, pineal gland, thalamus, and hypothalamus, area postrema, choroid plexus, and in the dural layer of the meninges near meningeal nociceptors. Mast cells serve the same general functions in the body and central nervous system, such as effecting or regulating allergic responses, innate and adaptive immunity, autoimmunity, and inflammation. Across systems, mast cells serve as the main effector cell through which pathogens can affect the gut–brain axis. In the gut In the gastrointestinal tract, mucosal mast cells are located in close proximity to sensory nerve fibres, which communicate bidirectionally. When these mast cells initially degranulate, they release mediators (e.g., histamine, tryptase, and serotonin) which activate, sensitize, and upregulate membrane expression of nociceptors (i.e., TRPV1) on visceral afferent neurons via their receptors (respectively, HRH1, HRH2, HRH3, PAR2, 5-HT3); in turn, neurogenic inflammation, visceral hypersensitivity, and intestinal dysmotility (i.e., impaired peristalsis) result. Neuronal activation induces neuropeptide (substance P and calcitonin gene-related peptide) signaling to mast cells, where these neuropeptides bind to their associated receptors and trigger degranulation of a distinct set of mediators (β-hexosaminidase, cytokines, chemokines, PGD2, leukotrienes, and eoxins). Physiology Structure of the high-affinity IgE receptor, FcεRI FcεRI is a high-affinity IgE receptor that is expressed on the surface of the mast cell. FcεRI is a tetramer made of one alpha (α) chain, one beta (β) chain, and two identical, disulfide-linked gamma (γ) chains. The binding site for IgE is formed by the extracellular portion of the α chain, which contains two domains that are similar to Ig; the α chain also has a transmembrane domain containing an aspartic acid residue and a short cytoplasmic tail. The β chain contains a single immunoreceptor tyrosine-based activation motif (ITAM) in the cytoplasmic region. Each γ chain has one ITAM in the cytoplasmic region. The signaling cascade from the receptor is initiated when the ITAMs of the β and γ chains are phosphorylated by a tyrosine kinase. This signal is required for the activation of mast cells. Type 2 helper T cells (Th2) and many other cell types lack the β chain, so signaling is mediated only by the γ chain. This is due to the α chain containing endoplasmic reticulum retention signals that cause the α chains to be retained and degraded in the ER.
The assembly of the α chain with the co-transfected β and γ chains masks the ER retention signal and allows the αβγ complex to be exported through the Golgi apparatus to the plasma membrane in rats. In humans, only the γ complex is needed to counterbalance the α chain's ER retention. Allergen process Allergen-mediated FcεRI cross-linking signals are very similar to the signaling events resulting from antigen binding to lymphocytes. The Lyn tyrosine kinase is associated with the cytoplasmic end of the FcεRI β chain. The antigen cross-links the FcεRI molecules, and Lyn tyrosine kinase phosphorylates the ITAMs in the FcεRI β and γ chains in the cytoplasm. Upon phosphorylation, the Syk tyrosine kinase is recruited to the ITAMs located on the γ chains, which activates Syk and causes it to phosphorylate its own targets. Syk functions as a signal-amplifying kinase because it targets multiple proteins and causes their activation. This antigen-stimulated phosphorylation causes the activation of other proteins in the FcεRI-mediated signaling cascade. Degranulation and fusion An important adaptor protein activated by the Syk phosphorylation step is the linker for activation of T cells (LAT). LAT can be modified by phosphorylation to create novel binding sites. Phospholipase C gamma (PLCγ) becomes phosphorylated once bound to LAT, and is then used to catalyze phosphatidylinositol bisphosphate breakdown to yield inositol trisphosphate (IP3) and diacylglycerol (DAG). IP3 elevates calcium levels, and DAG activates protein kinase C (PKC). This is not the only way that PKC is activated: the tyrosine kinase FYN phosphorylates Grb2-associated-binding protein 2 (Gab2), which binds to phosphoinositide 3-kinase, which in turn activates PKC. PKC leads to phosphorylation of the myosin light chain, which disassembles the actin–myosin complexes to allow granules to come into contact with the plasma membrane. The mast cell granule can now fuse with the plasma membrane. The soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) complex mediates this process. Different SNARE proteins interact to form different complexes that catalyze fusion. Rab3 guanosine triphosphatases and Rab-associated kinases and phosphatases regulate granule membrane fusion in resting mast cells. MRGPRX2 mast cell receptor The human mast-cell-specific G-protein-coupled receptor MRGPRX2 plays a key role in the recognition of pathogen-associated molecular patterns (PAMPs) and in initiating an antibacterial response. MRGPRX2 is able to bind competence-stimulating peptide (CSP) 1, a quorum sensing molecule (QSM) produced by Gram-positive bacteria. This leads to signal transduction to a G protein and activation of the mast cell. Mast cell activation induces the release of antibacterial mediators including ROS, TNF-α and PGD2, which initiate the recruitment of other immune cells to inhibit bacterial growth and biofilm formation. The MRGPRX2 receptor is a possible therapeutic target and can be pharmacologically activated using the agonist compound 48/80 to control bacterial infection. It is also hypothesised that other QSMs, and even Gram-negative bacterial signals, can activate this receptor. It has further been hypothesized that this occurs during chronic Bartonella infections, in which patients have been reported to present with a mast cell activation syndrome attributed to a not yet defined quorum sensing molecule, possibly basal histamine itself.
These patients are reported to be prone to food intolerance driven by a pathway less specific than the IgE receptor pathway, presumably the MRGPRX2 route, and to show cyclical skin pathergy and dermographism whenever the bacteria exit their hidden intracellular location. Enzymes Clinical significance Parasitic infections Mast cells are activated in response to infection by pathogenic parasites, such as certain helminths and protozoa, through IgE signaling. Various species known to be affected include T. spiralis, S. ratti, and S. venezuelensis. This is accomplished via Type 2 cell-mediated effector immunity, which is characterized by signaling from IL-4, IL-5, and IL-13. It is the same immune response that is responsible for allergic inflammation more generally, and includes effectors beyond mast cells. In this response, mast cells are known to release significant quantities of IL-4 and IL-13, along with mast cell chymase 1 (CMA1), which is considered to help expel some worms by increasing vascular permeability. Mast cell activation disorders Mast cell activation disorders (MCAD) are a spectrum of immune disorders that are unrelated to pathogenic infection and involve similar symptoms that arise from secreted mast cell intermediates, but differ slightly in their pathophysiology, treatment approach, and distinguishing symptoms. The classification of mast cell activation disorders was laid out in 2010. Allergic disease Allergies are mediated through IgE signaling, which triggers mast cell degranulation. Recently, IgE-independent "pseudo-allergic" reactions are thought to also be mediated via MRGPRX2 receptor activation of mast cells (e.g. by drugs such as muscle relaxants, opioids, icatibant and fluoroquinolones). Many forms of cutaneous and mucosal allergy are mediated in large part by mast cells; they play a central role in asthma, eczema, itch (from various causes), allergic rhinitis and allergic conjunctivitis. Antihistamine drugs act by blocking histamine action on nerve endings. Cromoglicate-based drugs (sodium cromoglicate, nedocromil) block a calcium channel essential for mast cell degranulation, stabilizing the cell and preventing release of histamine and related mediators. Leukotriene antagonists (such as montelukast and zafirlukast) block the action of leukotriene mediators and are being used increasingly in allergic diseases. Calcium triggers the secretion of histamine from mast cells after previous exposure to sodium fluoride. The secretory process can be divided into a fluoride-activation step and a calcium-induced secretory step. It was observed that the fluoride-activation step is accompanied by an elevation of cyclic adenosine monophosphate (cAMP) levels within the cells. The attained high levels of cAMP persist during histamine release. It was further found that catecholamines do not markedly alter the fluoride-induced histamine release. It was also confirmed that the second, but not the first, step in sodium fluoride-induced histamine secretion is inhibited by theophylline. Vasodilation and increased permeability of capillaries are a result of both H1 and H2 receptor types. Stimulation of histamine activates a histamine (H2)-sensitive adenylate cyclase of oxyntic cells, and there is a rapid increase in cellular [cAMP] that is involved in activation of H+ transport and other associated changes of oxyntic cells.
Anaphylaxis In anaphylaxis (a severe systemic reaction to allergens, such as nuts, bee stings, or drugs), the body-wide degranulation of mast cells leads to vasodilation and, if severe, symptoms of life-threatening shock. Products released from these granules include histamine, serotonin, heparin, chondroitin sulphate, tryptase, chymase, carboxypeptidase, and TNF-α. These can vary in their quantities and proportions between individuals, which may explain some of the differences in symptoms seen across patients. Histamine is a vasodilatory substance released during anaphylaxis. Autoimmunity Mast cells may be implicated in the pathology associated with autoimmune, inflammatory disorders of the joints. They have been shown to be involved in the recruitment of inflammatory cells to the joints (e.g., rheumatoid arthritis) and skin (e.g., bullous pemphigoid), and this activity is dependent on antibodies and complement components. Mastocytosis and clonal disorders Mastocytosis is a rare clonal mast cell disorder involving the presence of too many mast cells (mastocytes) and CD34+ mast cell precursors. Mutations in c-Kit are associated with mastocytosis. More specifically, the majority (>80%) of patients with mastocytosis have a mutation at codon 816 in the kinase domain of KIT, known as the KIT D816V mutation. This mutation, as well as expression of either CD2 or CD25 (confirmed by immunostaining or flow cytometry), is characteristic of primary clonal/monoclonal mast cell activation syndrome (CMCAS/MMAS). The most commonly affected organs in mastocytosis are the skin and bone marrow. Monoclonal disorders Neoplastic disorders Mastocytomas, or mast cell tumors, can secrete excessive quantities of degranulation products. They are often seen in dogs and cats. Other neoplastic disorders associated with mast cells include mast cell sarcoma and mast cell leukemia. Mast cell activation syndrome Mast cell activation syndrome (MCAS) is an idiopathic immune disorder that involves recurrent and excessive mast cell degranulation and which produces symptoms that are similar to other mast cell activation disorders. The syndrome is diagnosed based upon four sets of criteria involving treatment response, symptoms, a differential diagnosis, and biomarkers of mast cell degranulation. History Mast cells were first described by Paul Ehrlich in his 1878 doctoral thesis on the basis of their unique staining characteristics and large granules. These granules also led him to the incorrect belief that they existed to nourish the surrounding tissue, so he named them Mastzellen (from the German Mast, "fattening", as of animals). They are now considered to be part of the immune system. Research Autism Research into an immunological contribution to autism suggests that children with autism spectrum disorder (ASD) may present with "allergic-like" problems in the absence of elevated serum IgE and chronic urticaria, suggesting non-allergic mast cell activation in response to environmental and stress triggers. This mast cell activation could contribute to brain inflammation and neurodevelopmental problems. Histological staining Toluidine blue: one of the most common stains for acid mucopolysaccharides and glycosaminoglycans, components of mast cell granules. Bismarck brown: stains mast cell granules brown. Surface markers: cell surface markers of mast cells were discussed in detail by Heneberg, claiming that mast cells may be inadvertently included in stem or progenitor cell isolates, since part of them is positive for the CD34 antigen.
The classical mast cell markers include the high-affinity IgE receptor, CD117 (c-Kit), and CD203c (for most of the mast cell populations). Expression of some molecules may change in the course of mast cell activation. Heterogeneity Mast cell heterogeneity significantly impacts the efficacy of the mast cell stabilizing drugs disodium cromoglycate and ketotifen in preventing mediator release. In experiments, ketotifen inhibits mast cells from lung and tonsillar tissues when stimulated via an IgE-dependent histamine release mechanism, while disodium cromoglycate is less effective but still inhibits these mast cells. However, both agents fail to inhibit mediator release from skin mast cells, indicating that these cells are unresponsive to these stabilizers. Such differences in mast cell activation suggest the existence of different mast cell types across various tissues, a topic of ongoing research. Other organisms Mast cells and enterochromaffin cells are the source of most serotonin in the stomach in rodents. See also Allergy Diamine oxidase Food intolerance Granulocyte Histamine intolerance Histamine N-methyltransferase or HNMT Histamine List of distinct cell types in the adult human body Mast cell activation syndrome References External links Cell biology Connective tissue cells Granulocytes Human cells
Mast cell
Biology
5,101
76,632,501
https://en.wikipedia.org/wiki/Getis%E2%80%93Ord%20statistics
Getis–Ord statistics, also known as Gi*, are used in spatial analysis to measure local and global spatial autocorrelation. Developed by statisticians Arthur Getis and J. Keith Ord, they are commonly used for Hot Spot Analysis to identify where features with high or low values are spatially clustered in a statistically significant way. Getis–Ord statistics are available in a number of software libraries such as CrimeStat, GeoDa, ArcGIS, PySAL and R. Local statistics There are two different versions of the statistic, depending on whether the data point at the target location is included or not: $G_i = \frac{\sum_{j \neq i} w_{ij} x_j}{\sum_{j \neq i} x_j}, \qquad G_i^* = \frac{\sum_{j} w_{ij} x_j}{\sum_{j} x_j}.$ Here $x_j$ is the value observed at the spatial site $j$, and $w_{ij}$ is the spatial weight matrix, which constrains which sites are connected to one another. For $G_i^*$ the denominator is constant across all observations. A value larger (or smaller) than the mean suggests a hot (or cold) spot corresponding to a high-high (or low-low) cluster. Statistical significance can be estimated using analytical approximations, as in the original work; however, in practice permutation testing is used to obtain more reliable estimates of significance for statistical inference (see the sketch after this entry). Global statistics The Getis–Ord statistics of overall spatial association are $G = \frac{\sum_i \sum_{j \neq i} w_{ij} x_i x_j}{\sum_i \sum_{j \neq i} x_i x_j}, \qquad G^* = \frac{\sum_i \sum_j w_{ij} x_i x_j}{\sum_i \sum_j x_i x_j}.$ The local and global statistics are related through the weighted average $G^* = \frac{\sum_i x_i G_i^*}{\sum_i x_i}.$ The relationship of the $G$ and $G_i$ statistics is more complicated due to the dependence of the denominator of $G_i$ on $i$. Relation to Moran's I Moran's I is another commonly used measure of spatial association, defined by $I = \frac{n}{S_0}\,\frac{\sum_i \sum_j w_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_i (x_i-\bar{x})^2},$ where $n$ is the number of spatial sites and $S_0 = \sum_i \sum_j w_{ij}$ is the sum of the entries in the spatial weight matrix. Getis and Ord show that $I$ and $G$ are related through an identity involving the row and column sums of the weight matrix and the sample moments of $x$; they coincide only under special conditions, but not in general. Ord and Getis also show that Moran's I can be written in terms of the local $G_i^*$ statistics, with coefficients involving the sample standard deviation of $x$ and an estimate of its variance. See also Spatial analysis Indicators of spatial association Tobler's first law of geography Moran's I Geary's C Covariance and correlation References
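A minimal implementation and permutation test follow (a sketch in Python/NumPy, not any particular library's API; the permutation scheme shuffles all values, a simplification of the conditional permutation that production packages such as PySAL use):

import numpy as np

def local_g(x, w, star=True):
    # Local Getis-Ord statistic for values x and spatial weight matrix w.
    # star=True includes the value at the target site (Gi*); star=False
    # excludes it (Gi), matching the two versions defined above.
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    if star:
        return (w @ x) / x.sum()  # denominator constant across sites
    w0 = w.copy()
    np.fill_diagonal(w0, 0.0)     # drop the self-neighbour
    return (w0 @ x) / (x.sum() - x)  # denominator varies with the site

def permutation_pvalues(x, w, n_perm=999, seed=0):
    # Two-sided pseudo p-values for Gi* by random permutation of the data.
    rng = np.random.default_rng(seed)
    obs = local_g(x, w)
    x = np.asarray(x, dtype=float)
    extreme = np.zeros(len(x), dtype=int)
    for _ in range(n_perm):
        sim = local_g(rng.permutation(x), w)
        extreme += np.abs(sim - sim.mean()) >= np.abs(obs - obs.mean())
    return (extreme + 1) / (n_perm + 1)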
Getis–Ord statistics
Physics
412
20,187,427
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20cyclooxygenase%202%20inhibitors
Cyclooxygenases are enzymes that take part in a complex biosynthetic cascade that results in the conversion of polyunsaturated fatty acids to prostaglandins and thromboxane(s). Their main role is to catalyze the transformation of arachidonic acid into the intermediate prostaglandin H2, which is the precursor of a variety of prostanoids with diverse and potent biological actions. Cyclooxygenases have two main isoforms, called COX-1 and COX-2 (as well as a COX-3). COX-1 is responsible for the synthesis of prostaglandin and thromboxane in many types of cells, including the gastro-intestinal tract and blood platelets. COX-2 plays a major role in prostaglandin biosynthesis in inflammatory cells and in the central nervous system. Prostaglandin synthesis in these sites is a key factor in the development of inflammation and hyperalgesia. COX-2 inhibitors have analgesic and anti-inflammatory activity because they selectively block the transformation of arachidonic acid into prostaglandin H2.
The rise of selective COX-2 inhibitors
The impetus for the development of selective COX-2 inhibitors was the adverse gastrointestinal side-effects of NSAIDs. Soon after the discovery of the mechanism of action of NSAIDs, strong indications emerged for alternative forms of COX, but little supporting evidence was found. The COX enzyme proved difficult to purify and was not sequenced until 1988. In 1991, the existence of the COX-2 enzyme was confirmed when it was cloned by Dr. Dan Simmons at Brigham Young University. Before the confirmation of COX-2's existence, the DuPont company had developed a compound, DuP-697, that was potent in many anti-inflammatory assays but did not have the ulcerogenic effects of NSAIDs. Once the COX-2 enzyme was identified, DuP-697 became the building block for the synthesis of COX-2 inhibitors. Celecoxib and rofecoxib, the first COX-2 inhibitors to reach the market, were based on DuP-697. It took less than eight years to develop and market the first COX-2 inhibitor, with Celebrex (celecoxib) launched in December 1998 and Vioxx (rofecoxib) launched in May 1999. Celecoxib and other COX-2 selective inhibitors, valdecoxib, parecoxib, and mavacoxib, were discovered by a team at the Searle division of Monsanto led by John Talley.
Development of COX-2 inhibitors
Early studies showed that, when inflammation is induced, the affected organ unexpectedly develops an enormous capacity to generate prostaglandins. It was demonstrated that the increase is due to de novo synthesis of fresh enzyme. In 1991, during the investigation of the expression of early-response genes in fibroblasts transformed with Rous sarcoma virus, a novel mRNA transcript that was similar, but not identical, to the seminal COX enzyme was identified. It was suggested that an isoenzyme of COX had been discovered. Another group discovered a novel cDNA species encoding a protein with a similar structure to COX-1 while studying phorbol-ester-induced genes in Swiss 3T3 cells. The same laboratory showed that this gene truly expressed a novel COX enzyme. The two enzymes were renamed COX-1, referring to the original enzyme, and COX-2. Building on those results, scientists started focusing on selective COX-2 inhibitors. Enormous effort had been spent on the development of NSAIDs between the 1960s and 1980s, so there were numerous pharmacophores to test when COX-2 was discovered. Early efforts focused on modification of two lead compounds, DuP-697 and NS-398. These compounds differ greatly from NSAIDs, which are arylalkanoic acid analogs.
Encouraged by the "concept testing" experiments with selective inhibitors, and armed with several solid leads and a clear idea of the nature of the binding site, development of this field was rapid. In vitro recombinant enzyme assays provided a powerful means for assessing COX selectivity and potency and led to the discovery and clinical development of the first rationally designed COX-2 selective inhibitor, celecoxib. Efforts have been made to convert NSAIDs such as indometacin into selective COX-2 inhibitors, for example by lengthening the alkylcarboxylic acid side-chain, but none have been marketed.
Structure–activity relationship (SAR)
DuP-697 was a building block for the synthesis of COX-2 inhibitors and served as the basic chemical model for the coxibs, which are the only selective COX-2 inhibitors on the market today. DuP-697 is a diaryl heterocycle with a cis-stilbene moiety. Structure–activity relationship (SAR) studies of diaryl heterocyclic compounds have indicated that a cis-stilbene moiety and changes in the para-position of one of the aryl rings play an important role in COX-2 selectivity. Celecoxib and parecoxib have a sulfonamide substituent (SO2NH2) in the para-position on one of the aryl rings, while etoricoxib and rofecoxib have a methylsulfone (SO2CH3). The oxidation state of the sulfur is important for selectivity; sulfones and sulfonamides are selective for COX-2, but sulfoxides and sulfides are not. The ring system that is fused in this stilbene system has been extensively manipulated to include every imaginable heterocyclic and carbocyclic skeleton of varying ring sizes. It is known that a SO2NHCOCH3 moiety, as in parecoxib (a prodrug for valdecoxib), is a 10⁵–10⁶ times more reactive acetylating agent of enzyme serine hydroxyl groups than simple amides. Because varying kinetic mechanisms affect potency for COX-1 versus COX-2 differently, potency and selectivity measured in human whole blood is used by many groups and has been accepted as a standard assessment of COX-2 potency and selectivity.
The relationship between the amino acid profile of the COX-2 enzyme and the inhibition mechanism
One of the keys to developing COX-2 selective drugs is the larger active site of COX-2, which makes it possible to make molecules too large to fit into the COX-1 active site but still able to fit into COX-2. The larger active site of COX-2 is partly due to a polar hydrophilic side-pocket that forms because of the substitution of Ile523, His513, and Ile434 in COX-1 by Val523, Arg513, and Val434 in COX-2. Val523 is less bulky than Ile523, which increases the volume of the active site. Substitution of Ile434 for Val434 allows the side-chain of Phe518 to move back and make some extra space. This side-pocket allows for interactions with Arg513, which replaces His513 of COX-1. Arg513 is thought to be a key residue for diaryl heterocycle inhibitors such as the coxibs. The side-chain of Leu384, at the top of the receptor channel, is oriented into the active site of COX-1, but, in COX-2, it is oriented away from the active site and makes more space at the apex of the binding site. The bulky sulfonamide (or sulfone) group in COX-2 inhibitors such as celecoxib and rofecoxib prevents the molecule from entering the COX-1 channel. For optimal activity and selectivity of the coxibs, a 4-methylsulfonylphenyl attached to an unsaturated (usually five-membered) ring with a vicinal lipophilic group is required (rofecoxib).
The SO2CH3 can be replaced by SO2NH2, wherein the lipophilic pocket is occupied by an optionally substituted phenyl ring or a bulky alkoxy substituent (celecoxib). Within the hydrophilic side-pocket of COX-2, the oxygen of the sulfonamide (or sulfone) group interacts with His90, Arg513, and Gln192, forming hydrogen bonds. The substituted phenyl group at the top of the channel interacts with the side-chains of amino acid residues through hydrophobic and electrostatic interactions. Tyr385 imposes some steric restrictions on this side of the binding site, so a small substituent on the phenyl group makes for better binding. Degrees of freedom are also important for binding. The central ring of the coxibs determines the orientation of the aromatic rings and, therefore, the binding to the COX enzyme, even though it often has no electrostatic interactions with any of the amino acid residues. The high lipophilicity of the active site requires low polarity of the central scaffold of the coxibs.
Mechanism of binding
Studies on the binding mechanism of selective COX-2 inhibitors show that they have two reversible steps with both COX-1 and COX-2, but the selectivity for COX-2 is due to another step that is slow and irreversible and is seen only in the inhibition of COX-2, not COX-1. The irreversible step has been attributed to the presence of the sulfonamide (or sulfone) that fits into the side-pocket of COX-2. This has been studied using SC-58125 (an analogue of celecoxib) and mutated COX-2, wherein the valine 523 residue was replaced by isoleucine 523. The irreversible inhibition did not happen, but reversible inhibition was noticed. A model has been made to explain this three-step mechanism behind the inhibitory effects of selective COX-2 inhibitors. The first step accounts for the contact of the inhibitor with the gate of the hydrophobic channel (called the lobby region). The second step could account for the movement of the inhibitor from the lobby region to the active site of the COX enzyme. The last step probably represents repositioning of the inhibitor at the active site, which leads to strong interactions of the phenylsulfonamide or phenylsulfone group of the inhibitor with the amino acids of the side-pocket.
Pharmacokinetics of coxibs
The coxibs are widely distributed throughout the body. All of the coxibs achieve sufficient brain concentrations to have a central analgesic effect, and all reduce prostaglandin formation in inflamed joints. All are well absorbed, but peak concentrations may differ between the coxibs. The coxibs are highly protein-bound, and published estimates of half-lives vary between the coxibs.
Celecoxib
Celecoxib was the first specific inhibitor of COX-2 approved to treat patients with rheumatoid arthritis and osteoarthritis. A study showed that the absorption rate, when given orally, is moderate, and peak plasma concentration occurs after about 2–4 hours. However, the extent of absorption is not well known. Celecoxib binds extensively to plasma proteins, especially plasma albumin. It has an apparent volume of distribution (VD) of 455 ± 166 L in humans, and the area under the plasma concentration-time curve (AUC) increases proportionally with increased oral doses between 100 and 800 mg. Celecoxib is metabolized primarily by the CYP2C9 isoenzyme to a carboxylic acid and also by non-CYP-dependent glucuronidation to glucuronide metabolites.
The metabolites are excreted in urine and feces, with a small proportion of unchanged drug (2%) in the urine. Its elimination half-life is about 11 hours (6–12 hours) in healthy individuals, but racial differences in drug disposition and pharmacokinetic changes in the elderly have been reported. People with chronic kidney disease appear to have a 43% lower plasma concentration compared to healthy individuals, with a 47% increase in apparent clearance, and it can be expected that patients with mild to moderate hepatic impairment have an increased steady-state AUC.
Parecoxib and valdecoxib
Parecoxib sodium is a water-soluble, inactive amide prodrug of valdecoxib, a novel second-generation COX-2-specific inhibitor and the first such agent to be developed for injectable use. It is rapidly converted by hepatic enzymatic hydrolysis to the active form, valdecoxib. The compound then undergoes another conversion, which involves both cytochrome P450-mediated pathways (CYP2C9, CYP3A4) and a non-cytochrome P450-mediated pathway, to a hydroxylated metabolite and a glucuronide metabolite. The hydroxylated metabolite, which also has weak COX-2-specific inhibitory properties, is then further metabolized by a non-cytochrome P450 pathway to a glucuronide metabolite. These metabolites are excreted in the urine. After intramuscular administration of parecoxib sodium, peak plasma concentration is reached within 15 minutes. The plasma concentration decreases rapidly after administration because of a rather short serum half-life, about 15–52 minutes. This can be explained by the rapid formation of valdecoxib. In contrast to the rapid clearance of parecoxib, the plasma concentration of valdecoxib declines slowly because of its longer half-life. When valdecoxib is taken orally, it is absorbed rapidly (1–2 hours), but the presence of food can delay peak serum concentration. It then undergoes the same metabolism described above. It is extensively protein-bound (98%), and its plasma half-life is about 7–8 hours. Note that the half-life can be significantly prolonged in the elderly or those with hepatic impairment, which can lead to drug accumulation. The hydroxyl metabolite reaches its highest mean plasma concentration within 3 to 4 hours of administration, but this is considerably lower than that of valdecoxib, about one-tenth of its plasma levels.
Etoricoxib
Etoricoxib, which is used for patients with chronic arthropathies and musculoskeletal and dental pain, is absorbed moderately when given orally. A study of its pharmacokinetics showed that the plasma peak concentration of etoricoxib occurs after approximately 1 hour. It has been shown to be extensively bound to plasma albumin (about 90%) and has an apparent volume of distribution (VD) of 120 L in humans. The area under the plasma concentration-time curve (AUC) increases in proportion to increased dosage (5–120 mg). The elimination half-life is about 20 hours in healthy individuals, and such a long half-life enables once-daily dosing. Etoricoxib, like the other coxibs, is excreted in urine and feces and is metabolized in a similar manner. CYP3A4 is mostly responsible for the biotransformation of etoricoxib to a carboxylic acid metabolite, but a non-CYP450 metabolic pathway to a glucuronide metabolite also exists. A very small portion of etoricoxib (<1%) is eliminated unchanged in the urine. Patients with chronic kidney disease do not appear to have a different plasma concentration-time curve (AUC) compared to healthy individuals.
However, it has been reported that patients with moderate hepatic impairment have a plasma concentration-time curve (AUC) increased by approximately 40%. It has been stated that further study is necessary to describe precisely the relevance of pharmacokinetic properties in terms of the clinical benefits and risks of etoricoxib compared to other clinical options.
Lumiracoxib
Lumiracoxib is unique amongst the coxibs in being a weak acid. It was developed for the treatment of osteoarthritis, rheumatoid arthritis and acute pain. The acidic nature of lumiracoxib allows it to penetrate well into areas of inflammation. It has been shown to be rapidly and well absorbed, with peak plasma concentration occurring at about 1–3 hours. A study showed that when a subject was given a 400 mg dose, the amount of unchanged drug in the plasma 2.5 hours postdose suggested a modest first-pass effect. The terminal half-life in plasma ranged from 5.4 to 8.6 hours (mean = 6.5 hours). The half-life in synovial fluid is considerably longer than in plasma, and the concentration in synovial fluid 24 hours after administration would be expected to result in substantial COX-2 inhibition. This fact may explain why some users suffice with once-daily dosing despite a short plasma half-life. The major plasma metabolites are the 5-carboxy, 4′-hydroxy, and 4′-hydroxy-5-carboxy derivatives. Lumiracoxib is extensively metabolized before it is excreted, and the excretion routes are the urine and feces. Peak plasma concentrations exceed those necessary to maximally inhibit COX-2, which is consistent with a longer pharmacodynamic half-life. In vitro, lumiracoxib has demonstrated greater COX-2 selectivity than any of the other coxibs.
Rofecoxib
Rofecoxib was the second selective COX-2 inhibitor to be marketed and the first one to be taken off the market. When its pharmacokinetics were studied in healthy human subjects, the peak concentration was achieved in 9 hours, with an effective half-life of approximately 17 hours. A secondary peak has been observed, which might suggest that the absorption of rofecoxib varies with intestinal motility, leading to high variability in the time until peak concentration is reached. Seventy-one and a half percent of the dose was recovered in urine (less than 1% unmetabolised) and 14.2% was recovered in feces (approximately 1.8% in the bile). Among the metabolites were rofecoxib-3′,4′-dihydrodiol, 4′-hydroxyrofecoxib-O-β-D-glucuronide, 5-hydroxyrofecoxib-O-β-D-glucuronide, 5-hydroxyrofecoxib, rofecoxib-erythro-3,4-dihydrohydroxy acid, rofecoxib-threo-3,4-dihydrohydroxy acid, cis-3,4-dihydrorofecoxib and trans-3,4-dihydrorofecoxib.
Cardiovascular events associated with selective COX-2 inhibitors
Even before the first selective COX-2 inhibitor was marketed, specialists began to suspect that there might be a cardiovascular risk associated with this class of medicines. In the VIGOR study (Vioxx Gastrointestinal Outcomes Research), rofecoxib (Vioxx) was compared to naproxen. After a short time, it became evident that there was a fivefold higher risk of myocardial infarction in the rofecoxib group compared to the group that received naproxen. The authors suggested that the difference was due to the cardioprotective effects of naproxen. The APPROVe (Adenomatous Polyp Prevention on Vioxx) study was a multicentre, randomized, placebo-controlled, double-blind trial aimed at assessing the effect of three-year treatment with rofecoxib on the recurrence of neoplastic polyps in individuals with a history of colorectal adenomas.
In 2000 and 2001, 2587 patients with a history of colorectal adenomas were recruited and followed. The trial was stopped early (2 months before expected completion) on the recommendation of its data and safety monitoring board because of concerns about cardiovascular toxicity. The results showed a statistically significant increase in cardiovascular risk when taking rofecoxib compared to placebo, beginning after 18 months of treatment. Then, on 30 September 2004, Merck issued a news release announcing its voluntary worldwide withdrawal of Vioxx. Some studies of other coxibs have also shown an increase in the risk of cardiovascular events, while others have not. For instance, the Adenoma Prevention with Celecoxib study (APC) showed a dose-related increase in the risk of cardiovascular death, myocardial infarction, stroke, or heart failure when taking celecoxib compared to placebo; and the Successive Celecoxib Efficacy and Safety Study I (SUCCESS-I) showed an increased risk of myocardial infarction when taking celecoxib 100 mg twice a day compared to diclofenac and naproxen, whereas taking 200 mg twice a day gave a lower incidence of myocardial infarction compared to diclofenac and naproxen. Nussmeier et al. (2005) showed in a study an increased incidence of cardiovascular events when taking parecoxib and valdecoxib (compared to placebo) after coronary artery bypass surgery.
Possible mechanisms
It has been proposed that COX-2 selectivity could cause an imbalance of prostaglandins in the vasculature. If this were the explanation for the increased cardiovascular risk, then low-dose aspirin should negate the effect, which was not the case in the APPROVe trial. Also, non-selective COX inhibitors have been associated with an increase in cardiovascular events. Another possible explanation was studied by Li H. et al. (2008). They showed that in spontaneously hypertensive rats (SHR), non-selective NSAIDs and the coxibs produce oxidative stress, indicated by enhanced vascular superoxide (O2−) content and elevated peroxide in plasma, consistent with enhanced expression of NADPH oxidase, which was noticed with the use of diclofenac and naproxen and, to a lesser degree, rofecoxib and celecoxib. Nitrite in plasma was also decreased, suggesting diminished synthesis of vascular nitric oxide (NO). This decrease in NO synthesis did not result from decreased expression of endothelial nitric oxide synthase (eNOS), because expression of eNOS mRNA was not reduced, and was even upregulated for some products. The decrease in NO synthesis could, rather, be explained by loss of eNOS function. For eNOS to function normally, it needs to form a dimer and to have its cofactor BH4, one of the most potent naturally occurring reducing agents. BH4 is sensitive to oxidation by peroxynitrite (ONOO−), which is produced when NO reacts with O2−, so it has been hypothesized that depletion of BH4 can occur under excessive oxidative stress (which can be caused by NSAIDs) and, hence, be the cause of eNOS dysfunction. This dysfunction, referred to as eNOS uncoupling, causes the production of O2− by eNOS itself, thereby leading to more oxidative stress. In the study, both the selective COX-2 inhibitors and the non-selective NSAIDs produced oxidative stress, with greater effects seen with non-selective NSAID use. This could fit with the hypothesis concerning the prostacyclin/thromboxane imbalance.
That is, although the non-selective NSAIDs produce more oxidative stress, they prevent platelet aggregation, whereas the COX-2 inhibitors reduce prostacyclin production; hence, the cardiovascular risk for the non-selective NSAIDs is not higher than for the coxibs. Among other hypotheses are increased blood pressure, decreased production of epi-lipoxins (which have anti-inflammatory effects), and inhibition of vascular remodeling when using selective COX-2 inhibitors.
See also
Arachidonic acid
Cyclooxygenase
Cyclooxygenase 1
Cyclooxygenase 2
NSAID
COX-2 selective inhibitor
References
COX-2 inhibitors
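The once-daily dosing argument in the pharmacokinetics section above follows from standard first-order elimination arithmetic. The sketch below is a simplified one-compartment illustration, not dosing guidance: the half-lives are the figures quoted in the section, while the accumulation-ratio formula and the 24-hour interval are standard textbook assumptions added here.

    # Sketch: first-order elimination arithmetic for the half-lives above.
    # One-compartment assumptions; an illustration, not dosing guidance.
    import math

    def elimination_rate(half_life_h: float) -> float:
        # k = ln(2) / t_half
        return math.log(2) / half_life_h

    def accumulation_ratio(half_life_h: float, interval_h: float = 24.0) -> float:
        # Steady-state accumulation for repeated dosing: R = 1 / (1 - e^(-k*tau))
        k = elimination_rate(half_life_h)
        return 1.0 / (1.0 - math.exp(-k * interval_h))

    for drug, t_half in [("etoricoxib", 20.0), ("celecoxib", 11.0), ("valdecoxib", 7.5)]:
        print(f"{drug}: R = {accumulation_ratio(t_half):.2f}")
    # etoricoxib: R = 1.77; celecoxib: R = 1.28; valdecoxib: R = 1.12.
    # The ~20 h half-life of etoricoxib gives modest accumulation and
    # sustained exposure across a 24 h interval, consistent with the
    # once-daily dosing described above.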
Discovery and development of cyclooxygenase 2 inhibitors
Chemistry,Biology
5,243
38,701,470
https://en.wikipedia.org/wiki/Direct%20development
Direct development is a concept in biology. It refers to forms of growth to adulthood that do not involve metamorphosis. An animal undergoes direct development if the immature organism resembles a small adult rather than having a distinct larval form. A frog that hatches out of its egg as a small frog undergoes direct development. A frog that hatches out of its egg as a tadpole does not. Direct development is the opposite of complete metamorphosis. An animal undergoes complete metamorphosis if it passes through a non-moving stage, for example a pupa in a cocoon, between its larval and adult stages.
Examples
Most frogs in the genus Callulina hatch out of their eggs as froglets. Springtails, which are ametabolous, undergo direct development.
References
Developmental biology
Animal anatomy
Direct development
Biology
169
22,665,481
https://en.wikipedia.org/wiki/Clifford%20semigroup
A Clifford semigroup (sometimes also called "inverse Clifford semigroup") is a completely regular inverse semigroup. It is an inverse semigroup with $xx^{-1} = x^{-1}x$. Examples of Clifford semigroups are groups and commutative inverse semigroups. In a Clifford semigroup, $xx^{-1} = x^{-1}x$ holds for every element $x$, and the idempotents are central.
References
Algebraic structures
Semigroup theory
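As a minimal worked example beyond the two classes named above, consider a group with a zero element adjoined; checking the defining identity directly shows it is a Clifford semigroup. The notation below is an illustration, not taken from the original stub:

    % Worked example: a group with zero adjoined is a Clifford semigroup.
    % G^0 = G \cup {0}, with g.0 = 0.g = 0 and 0^{-1} = 0.
    \[
      G^0 = G \sqcup \{0\}, \qquad
      x x^{-1} = x^{-1} x =
      \begin{cases}
        1_G, & x \in G,\\
        0,   & x = 0.
      \end{cases}
    \]
    % G^0 is the union of the two groups G and {0}, indexed by the
    % two-element semilattice {1_G, 0}; every element commutes with its
    % inverse, and both idempotents are central, so G^0 is a completely
    % regular inverse semigroup, i.e. a Clifford semigroup.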
Clifford semigroup
Mathematics
63
78,434,850
https://en.wikipedia.org/wiki/Epaminurad
Epaminurad is an investigational new drug being developed by JW Pharmaceutical for the treatment of gout and hyperuricemia. It is a urate-lowering agent that selectively inhibits the human uric acid transporter 1 (hURAT1), promoting urate excretion. As of 2024, epaminurad is undergoing Phase 3 clinical trials to evaluate its efficacy and safety compared to febuxostat in gout patients across multiple Asian countries.
References
Benzamides
Bromobenzene derivatives
Oxazines
Phenols
Pyridines
Epaminurad
Chemistry
120
5,176,764
https://en.wikipedia.org/wiki/Hardness%20comparison
A variety of hardness-testing methods are available, including the Vickers, Brinell, Rockwell, Meyer and Leeb tests. Although it is impossible in many cases to give an exact conversion, it is possible to give an approximate material-specific comparison table for steels.
Hardness comparison table
References
Further reading
ISO 18265: "Metallic materials — Conversion of hardness values" (2013)
ASTM E140-12B(2019)e1: "Standard Hardness Conversion Tables for Metals Relationship Among Brinell Hardness, Vickers Hardness, Rockwell Hardness, Superficial Hardness, Knoop Hardness, Scleroscope Hardness, and Leeb Hardness" (2019)
External links
Hardness Conversion Table – Brinell, Rockwell, Vickers – Various steels (archived November 11, 2011)
Rockwell to Brinell conversion chart (Brinell, Rockwell A, B, C)
Struers hardness conversion table (Vickers, Brinell, Rockwell B, C, D)
Scientific comparisons
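Because the comparison tables described above are piecewise tabulations rather than closed-form formulas, software typically converts between scales by interpolating within a material-specific table. The sketch below shows the mechanics only; the paired values are placeholders invented for illustration, not figures from ISO 18265 or ASTM E140, and must be replaced with the tabulated pairs for the relevant steel family before any real use.

    # Sketch: scale conversion by linear interpolation in a comparison table.
    # The (Vickers, Brinell) pairs below are illustrative PLACEHOLDERS, not
    # values from ISO 18265 / ASTM E140; substitute the real table for the
    # specific steel family.
    import numpy as np

    table_hv = np.array([200.0, 250.0, 300.0, 350.0, 400.0])  # Vickers HV
    table_hb = np.array([190.0, 238.0, 285.0, 333.0, 380.0])  # Brinell HB

    def vickers_to_brinell(hv: float) -> float:
        # Piecewise-linear interpolation; refuse to extrapolate, since the
        # standards only define conversions inside the tabulated range.
        if not table_hv[0] <= hv <= table_hv[-1]:
            raise ValueError("outside tabulated range; do not extrapolate")
        return float(np.interp(hv, table_hv, table_hb))

    print(vickers_to_brinell(275.0))  # 261.5 with the placeholder table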
Hardness comparison
Materials_science
203
407,814
https://en.wikipedia.org/wiki/Hygiene%20hypothesis
In medicine, the hygiene hypothesis states that early childhood exposure to particular microorganisms (such as the gut flora and helminth parasites) protects against allergies by properly tuning the immune system. In particular, a lack of such exposure is thought to lead to poor immune tolerance. The time period for exposure begins before birth and ends at school age. While early versions of the hypothesis referred to microorganism exposure in general, later versions apply to a specific set of microbes that have co-evolved with humans. The updates have been given various names, including the microbiome depletion hypothesis, the microflora hypothesis, and the "old friends" hypothesis. There is a significant amount of evidence supporting the idea that lack of exposure to these microbes is linked to allergies or other conditions, although it is still rejected by many scientists. The term "hygiene hypothesis" has been described as a misnomer because people incorrectly interpret it as referring to their own cleanliness. Having worse personal hygiene, such as not washing hands before eating, only increases the risk of infection without affecting the risk of allergies or immune disorders. Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance, and combating emerging infectious diseases such as Ebola. The hygiene hypothesis does not suggest that having more infections during childhood would be an overall benefit. Overview The idea of a link between parasite infection and immune disorders was first suggested in 1968 before the advent of large scale DNA sequencing techniques. The original formulation of the hygiene hypothesis dates from 1989, when David Strachan proposed that lower incidence of infection in early childhood could be an explanation for the rise in allergic diseases such as asthma and hay fever during the 20th century. The hygiene hypothesis has also been expanded beyond allergies, and is also studied in the context of a broader range of conditions affected by the immune system, particularly inflammatory diseases. These include type 1 diabetes, multiple sclerosis, and also some types of depression and cancer. For example, the global distribution of multiple sclerosis is negatively correlated with that of the helminth Trichuris trichiura and its incidence is negatively correlated with Helicobacter pylori infection. Strachan's original hypothesis could not explain how various allergic conditions spiked or increased in prevalence at different times, such as why respiratory allergies began to increase much earlier than food allergies, which did not become more common until near the end of the 20th century. In 2003, Graham Rook proposed the "old friends" hypothesis which has been described as a more rational explanation for the link between microbial exposure and inflammatory disorders. The hypothesis states that the vital microbial exposures are not colds, influenza, measles and other common childhood infections which have evolved relatively recently over the last 10,000 years, but rather the microbes already present during mammalian and human evolution, that could persist in small hunter-gatherer groups as microbiota, tolerated latent infections, or carrier states. He proposed that coevolution with these species has resulted in their gaining a role in immune system development. 
Strachan's original formulation of the hygiene hypothesis also centred around the idea that smaller families provided insufficient microbial exposure, partly because of less person-to-person spread of infections, but also because of "improved household amenities and higher standards of personal cleanliness". It seems likely that this was the reason he named it the "hygiene hypothesis". Although the "hygiene revolution" of the nineteenth and twentieth centuries may have been a major factor, it now seems more likely that, while public health measures such as sanitation, potable water and garbage collection were instrumental in reducing our exposure to cholera, typhoid and so on, they also deprived people of their exposure to the "old friends" that occupy the same environmental habitats. The rise of autoimmune diseases and acute lymphoblastic leukemia in young people in the developed world has been linked to the hygiene hypothesis. Autism may be associated with changes in the gut microbiome and early infections. The risk of chronic inflammatory diseases also depends on factors such as diet, pollution, physical activity, obesity, socio-economic factors, and stress. Genetic predisposition is also a factor.
History
Since allergies and other chronic inflammatory diseases are largely diseases of the last 100 years or so, the "hygiene revolution" of the last 200 years came under scrutiny as a possible cause. During the 1800s, radical improvements to sanitation and water quality occurred in Europe and North America. The introduction of toilets and sewer systems, the cleanup of city streets, and cleaner food were part of this program. This in turn led to a rapid decline in infectious diseases, particularly during the period 1900–1950, through reduced exposure to infectious agents. Although the idea that exposure to certain infections may decrease the risk of allergy is not new, Strachan was one of the first to formally propose it, in an article published in the British Medical Journal in 1989. This article proposed to explain the observation that hay fever and eczema, both allergic diseases, were less common in children from larger families, which were presumably exposed to more infectious agents through their siblings, than in children from families with only one child. The increased occurrence of allergies had previously been thought to be a result of increasing pollution. The hypothesis was extensively investigated by immunologists and epidemiologists and has become an important theoretical framework for the study of chronic inflammatory disorders. The "old friends hypothesis" proposed in 2003 may offer a better explanation for the link between microbial exposure and inflammatory diseases. This hypothesis argues that the vital exposures are not the common cold and other recently evolved infections, which are no older than 10,000 years, but rather microbes already present in hunter-gatherer times, when the human immune system was evolving. Conventional childhood infections are mostly "crowd infections" that kill or immunise and thus cannot persist in isolated hunter-gatherer groups. Crowd infections started to appear after the Neolithic agricultural revolution, when human populations increased in size and proximity. The microbes that co-evolved with mammalian immune systems are much more ancient. According to this hypothesis, humans became so dependent on them that their immune systems can neither develop nor function properly without them.
Rook proposed that these microbes most likely include:
Ambient species that exist in the same environments as humans
Species that inhabit human skin, gut and respiratory tract, and those of the animals we live with
Organisms such as viruses and helminths (worms) that establish chronic infections or carrier states that humans can tolerate, and so could co-evolve a specific immunoregulatory relationship with the immune system
The modified hypothesis later expanded to include exposure to symbiotic bacteria and parasites. "Evolution turns the inevitable into a necessity." This means that the majority of mammalian evolution took place in mud and rotting vegetation, and more than 90 percent of human evolution took place in isolated hunter-gatherer and farming communities. Therefore, the human immune system has evolved to anticipate certain types of microbial input, making the inevitable exposure into a necessity. The organisms implicated in the hygiene hypothesis are not proven to cause the disease prevalence; however, there are sufficient data on lactobacilli, saprophytic environmental mycobacteria, and helminths and their associations. These bacteria and parasites have commonly been found in vegetation, mud, and water throughout evolution. Multiple possible mechanisms have been proposed for how the 'Old Friends' microorganisms prevent autoimmune diseases and asthma. They include:
Reciprocal inhibition between immune responses directed against distinct antigens of the Old Friends microbes, which elicit stronger immune responses than the weaker autoantigens and allergens of autoimmune disease and allergy, respectively
Competition for cytokines, MHC receptors and growth factors needed by the immune system to mount an immune response
Immunoregulatory interactions with host TLRs
The "microbial diversity" hypothesis, proposed by Paolo Matricardi and developed by von Hertzen, holds that diversity of microbes in the gut and other sites is a key factor for priming the immune system, rather than stable colonization with a particular species. Exposure to diverse organisms in early development builds a "database" that allows the immune system to identify harmful agents and normalize once the danger is eliminated. For allergic disease, the most important times for exposure are: early in development; later during pregnancy; and the first few days or months of infancy. Exposure needs to be maintained over a significant period. This fits with evidence that delivery by Caesarean section may be associated with increased allergies, whilst breastfeeding can be protective.
Evolution of the adaptive immune system
Humans and the microbes they harbor have co-evolved for thousands of centuries; however, it is thought that the human species has gone through numerous phases in history characterized by different pathogen exposures. For instance, in very early human societies, limited interaction between members exerted particular selection for a relatively limited group of pathogens with high transmission rates. The human immune system is likely subjected to selective pressure from pathogens, which are thought to be responsible for down-regulating certain alleles, and therefore phenotypes, in humans. The thalassemia genes, shaped by the selection pressure exerted by Plasmodium species, might be a model for this theory, but this has not been shown in vivo.
Recent comparative genomic studies have shown that immune response genes (protein-coding and non-coding regulatory genes) have less evolutionary constraint and are rather more frequently targeted by positive selection from pathogens that coevolve with humans. Of all the various types of pathogens known to cause disease in humans, helminths warrant special attention, because of their ability to modify the prevalence or severity of certain immune-related responses in human and mouse models. In fact, recent research has shown that parasitic worms have served as a stronger selective pressure on select human genes encoding interleukins and interleukin receptors than viral and bacterial pathogens. Helminths are thought to be as old as the adaptive immune system, suggesting that they may have co-evolved, and implying that our immune system has been strongly focused on fighting off helminthic infections, to the point of potentially interacting with them early in infancy. The host-pathogen interaction is a very important relationship that serves to shape the development of the immune system early in life.
Biological basis
The primary proposed mechanism of the hygiene hypothesis is an imbalance between the TH1 and TH2 subtypes of T helper cells. Insufficient activation of the TH1 arm, which stimulates the cell-mediated defenses of the immune system, would lead to an overactive TH2 arm, which stimulates antibody-mediated immunity and, in turn, leads to allergic disease. However, this explanation cannot account for the rise in incidence (similar to the rise of allergic diseases) of several TH1-mediated autoimmune diseases, including inflammatory bowel disease, multiple sclerosis and type I diabetes. On the other hand, the north-south gradient seen in the prevalence of multiple sclerosis has been found to be inversely related to the global distribution of parasitic infection. Additionally, research has shown that multiple sclerosis patients infected with parasites displayed TH2-type immune responses, as opposed to the proinflammatory TH1 immune phenotype seen in non-infected multiple sclerosis patients. [Fleming] Parasite infection has also been shown to improve inflammatory bowel disease and may act in a similar fashion as it does in multiple sclerosis. [Lee] Allergic conditions are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response. TH2 cells produce interleukin 4, interleukin 5, interleukin 6 and interleukin 13, and predominantly stimulate immunoglobulin E production. Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. TH1 immune responses are characterized by the secretion of pro-inflammatory cytokines such as interleukin 2, IFNγ, and TNFα. Factors that favor a predominantly TH1 phenotype include: older siblings, large family size, early day care attendance, infection (TB, measles, or hepatitis), rural living, or contact with animals. A TH2-dominated phenotype is associated with high antibiotic use, a western lifestyle, an urban environment, diet, and sensitivity to dust mites and cockroaches. TH1 and TH2 responses are reciprocally inhibitory, so when one is active, the other is suppressed. An alternative explanation is that the developing immune system must receive stimuli (from infectious agents, symbiotic bacteria, or parasites) to adequately develop regulatory T cells.
Without those stimuli, it becomes more susceptible to autoimmune diseases and allergic diseases, because of insufficiently repressed TH1 and TH2 responses, respectively. For example, all chronic inflammatory disorders show evidence of failed immunoregulation. Secondly, helminths, non-pathogenic ambient pseudocommensal bacteria, certain gut commensals, and probiotics drive immunoregulation. They block or treat models of all chronic inflammatory conditions.
Evidence
There is a significant amount of evidence supporting the idea that microbial exposure is linked to allergies or other conditions, although scientific disagreement still exists. Since hygiene is difficult to define or measure directly, surrogate markers such as socioeconomic status, income, and diet are used. Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world, and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. This is true for asthma and other chronic inflammatory disorders. The increase in allergy rates is primarily attributed to diet and reduced microbiome diversity, although the mechanistic reasons are unclear. The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases, and increased asthma rates are also associated with birth by Caesarean section. However, at least one study suggests that personal hygienic practices may be unrelated to the incidence of asthma. Antibiotic usage reduces the diversity of the gut microbiota. Although several studies have shown associations between antibiotic use and the later development of asthma or allergy, other studies suggest that the effect is due to more frequent antibiotic use in asthmatic children. Trends in vaccine use may also be relevant, but epidemiological studies provide no consistent support for a detrimental effect of vaccination/immunization on atopy rates. In support of the old friends hypothesis, the intestinal microbiome was found to differ between allergic and non-allergic Estonian and Swedish children (although this finding was not replicated in a larger cohort), and the biodiversity of the intestinal flora in patients with Crohn's disease was diminished.
Limitations
The hygiene hypothesis does not apply to all populations. For example, in the case of inflammatory bowel disease, it is primarily relevant when a person's level of affluence increases, either due to changes in society or by moving to a more affluent country, but not when affluence remains constant at a high level. The hygiene hypothesis has difficulty explaining why allergic diseases also occur in less affluent regions. Additionally, exposure to some microbial species actually increases future susceptibility to disease instead, as in the case of infection with rhinovirus (the main source of the common cold), which increases the risk of asthma.
Treatment
Current research suggests that manipulating the intestinal microbiota may be able to treat or prevent allergies and other immune-related conditions. Various approaches are under investigation. Probiotics (drinks or foods) have never been shown to reintroduce microbes to the gut. As yet, therapeutically relevant microbes have not been specifically identified. However, probiotic bacteria have been found to reduce allergic symptoms in some studies.
Other approaches being researched include prebiotics, which promote the growth of gut flora, and synbiotics, the use of prebiotics and probiotics at the same time. Should these therapies become accepted, public policy implications include providing green spaces in urban areas or even providing access to agricultural environments for children. Helminthic therapy is the treatment of autoimmune diseases and immune disorders by means of deliberate infestation with helminth larvae or ova. Helminthic therapy emerged from the search for reasons why the incidence of immunological disorders and autoimmune diseases correlates with the level of industrial development. The exact relationship between helminths and allergies is unclear, in part because studies tend to use different definitions and outcomes, and because of the wide variety among both helminth species and the populations they infect. The infections induce a type 2 immune response, which likely evolved in mammals as a result of such infections; chronic helminth infection has been linked with reduced sensitivity in peripheral T cells, and several studies have found deworming to lead to an increase in allergic sensitivity. However, in some cases helminths and other parasites are themselves a cause of allergies. In addition, such infections are not themselves a treatment, as they are a major disease burden; in fact, they are among the most important neglected diseases. The development of drugs that mimic the effects without causing disease is in progress.
Public health
The reduction of public confidence in hygiene has significant possible consequences for public health. Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance, and combating emerging infectious diseases such as SARS and Ebola. The misunderstanding of the term "hygiene hypothesis" has resulted in unwarranted opposition to vaccination as well as other important public health measures. It has been suggested that public awareness of the initial form of the hygiene hypothesis has led to an increased disregard for hygiene in the home. The effective communication of science to the public has been hindered by the presentation of the hygiene hypothesis and other health-related information in the media.
Cleanliness
No evidence supports the idea that reducing modern practices of cleanliness and hygiene would have any impact on rates of chronic inflammatory and allergic disorders, but a significant amount of evidence indicates that reducing hygiene would increase the risks of infectious diseases. The phrase "targeted hygiene" has been used to recognize the importance of hygiene in avoiding pathogens. If home and personal cleanliness contributes to reduced exposure to vital microbes, its role is likely to be small. The idea that homes can be made "sterile" through excessive cleanliness is implausible, and the evidence shows that, after cleaning, microbes are quickly replaced by dust and air from outdoors, by shedding from the body and other living things, as well as from food. The key point may be that the microbial content of urban housing has altered, not because of home and personal hygiene habits, but because homes themselves are part of urban environments. Diet and lifestyle changes also affect the gut, skin and respiratory microbiota.
At the same time that concerns about allergies and other chronic inflammatory diseases have been increasing, so also have concerns about infectious disease. Infectious diseases continue to exert a heavy health toll. Preventing pandemics and reducing antibiotic resistance are global priorities, and hygiene is a cornerstone of containing these threats.
Infection risk management
The International Scientific Forum on Home Hygiene has developed a risk management approach to reducing home infection risks. This approach uses microbiological and epidemiological evidence to identify the key routes of infection transmission in the home. These data indicate that the critical routes involve the hands, hand- and food-contact surfaces, and cleaning utensils. Clothing and household linens involve somewhat lower risks. Surfaces that contact the body, such as baths and hand basins, can act as infection vehicles, as can surfaces associated with toilets. Airborne transmission can be important for some pathogens. A key aspect of this approach is that it maximises protection against pathogens and infection, but is more relaxed about visible cleanliness in order to sustain normal exposure to other human, animal and environmental microbes.
See also
Antibacterial soap
Antifragility
Diseases of affluence
Germ theory of disease
Helminthic therapy
Hookworm
Human microbiome
Microbiomes of the built environment
Vaginal seeding
References
Further reading
Allergology
Epidemiology
Biological hypotheses
Immunology theories
Hygiene hypothesis
Biology,Environmental_science
4,222
57,268,137
https://en.wikipedia.org/wiki/Women%20In%20Astronomy%20Nepal
Women In Astronomy Nepal (WIAN) was established on November 1, 2015, to provide a common platform for all women interested in astronomy in Nepal. It is a subunit of the Nepal Astronomical Society (NASO). WIAN is primarily concerned with young women pursuing careers in science, technology, engineering and mathematics (STEM).
Programs
Women in Outreach
Women in Science Award (WiSA)
Publication
References
External links
Women in Astronomy Nepal's Facebook page
Scientific organisations based in Nepal
Women's organisations based in Nepal
2015 in outer space
Astronomy in Nepal
Science advocacy organizations
Astronomy societies
2015 establishments in Nepal
Women In Astronomy Nepal
Astronomy
128
490,054
https://en.wikipedia.org/wiki/List%20of%20exponential%20topics
This is a list of exponential topics, by Wikipedia page. See also list of logarithm topics.
Accelerating change
Approximating natural exponents (log base e)
Artin–Hasse exponential
Bacterial growth
Baker–Campbell–Hausdorff formula
Cell growth
Barometric formula
Beer–Lambert law
Characterizations of the exponential function
Catenary
Compound interest
De Moivre's formula
Derivative of the exponential map
Doléans-Dade exponential
Doubling time
e-folding
Elimination half-life
Error exponent
Euler's formula
Euler's identity
e (mathematical constant)
Exponent
Exponent bias
Exponential (disambiguation)
Exponential backoff
Exponential decay
Exponential dichotomy
Exponential discounting
Exponential diophantine equation
Exponential dispersion model
Exponential distribution
Exponential error
Exponential factorial
Exponential family
Exponential field
Exponential formula
Exponential function
Exponential generating function
Exponential-Golomb coding
Exponential growth
Exponential hierarchy
Exponential integral
Exponential integrator
Exponential map (Lie theory)
Exponential map (Riemannian geometry)
Exponential map (discrete dynamical systems)
Exponential notation
Exponential object (category theory)
Exponential polynomials – see also Touchard polynomials (combinatorics)
Exponential response formula
Exponential sheaf sequence
Exponential smoothing
Exponential stability
Exponential sum
Exponential time
Sub-exponential time
Exponential tree
Exponential type
Exponentially equivalent measures
Exponentiating by squaring
Exponentiation
Fermat's Last Theorem
Forgetting curve
Gaussian function
Gudermannian function
Half-exponential function
Half-life
Hyperbolic function
Inflation, inflation rate
Interest
Lambert W function
Lifetime (physics)
Limiting factor
Lindemann–Weierstrass theorem
List of integrals of exponential functions
List of integrals of hyperbolic functions
Lyapunov exponent
Malthusian catastrophe
Malthusian growth model
Marshall–Olkin exponential distribution
Matrix exponential
Moore's law
Nachbin's theorem
Piano key frequencies
p-adic exponential function
Power law
Proof that e is irrational
Proof that e is transcendental
Q-exponential
Radioactive decay
Rule of 70, Rule of 72
Scientific notation
Six exponentials theorem
Spontaneous emission
Super-exponentiation
Tetration
Versor
Weber–Fechner law
Wilkie's theorem
Zenzizenzizenzic
Exponentials
Exponential
List of exponential topics
Mathematics
439