https://en.wikipedia.org/wiki/Ethyl%20heptanoate
Ethyl heptanoate is the ester resulting from the condensation of heptanoic acid and ethanol. It is used in the flavor industry because its odor is similar to that of grapes.
https://en.wikipedia.org/wiki/HAF%20604
HAF 604 is the certification for components for nuclear power plants in China that have been manufactured outside of China. Exporters of these safety-relevant components need this certification for importing the components into China. The responsible authority for the certification of these components for nuclear power plants is the National Nuclear Safety Administration. Regulations The regulations for the HAF 604 certification are the “Supervision and Management Regulations for Imported Civilian Nuclear Equipment” and determine safety requirements for these safety-relevant components. The regulation also governs the administration of certification applications from non-Chinese manufacturers regarding the construction, manufacture and installation of safety-relevant components for nuclear power plants, and it defines the obligatory safety-test requirements for the certification process. HAF 604 should be distinguished from HAF 601, which deals with the regulations for domestic (Chinese) Civil Nuclear Safety Equipment. By May 2013, 191 Chinese companies had already received HAF601 certification, whereas 217 foreign firms had received HAF604 certification. According to a 2013 publication by the US Department of Commerce - Commercial Service and consulting firm Nicobar Group, nearly two-thirds of the foreign firms receiving certifications by that date were American, French, or German firms. See also HAF601 NNSA National Nuclear Safety Administration
https://en.wikipedia.org/wiki/Cram%20%28game%29
Cram is a mathematical game played on a sheet of graph paper. It is the impartial version of Domineering and the only difference in the rules is that players may place their dominoes in either orientation, but it results in a very different game. It has been called by many names, including "plugg" by Geoffrey Mott-Smith, and "dots-and-pairs". Cram was popularized by Martin Gardner in Scientific American. Rules The game is played on a sheet of graph paper, with any set of designs traced out. It is most commonly played on a rectangular board such as a 6×6 square or a checkerboard, but it can also be played on an entirely irregular polygon or a cylindrical board. Two players have a collection of dominoes which they place on the grid in turn. A player can place a domino either horizontally or vertically. Unlike the related game of Domineering, the possible moves are the same for the two players, so Cram is an impartial game. As for all impartial games, there are two possible conventions for victory: in the normal game, the first player who cannot move loses, whereas in the misère version, the first player who cannot move wins. Symmetry play The winning strategy for normal Cram is simple for even-by-even boards and even-by-odd boards. In the even-by-even case, the second player wins by symmetry play. This means that for any move by Player 1, Player 2 has a corresponding symmetric move across the horizontal and vertical axes. In a sense, Player 2 'mimics' the moves made by Player 1. If Player 2 follows this strategy, Player 2 will always make the last move, and thus win the game. In the even-by-odd case, the first player wins by similar symmetry play. Player 1 places their first domino in the center two squares on the grid. Player 2 then makes their move, but Player 1 can play symmetrically thereafter, thus ensuring a win for Player 1. Symmetry play is a useless strategy in the misère version, because in that case it would only ensure the pla
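The mirroring strategy described above is mechanical enough to sketch in code. A minimal Python sketch, assuming dominoes are represented as pairs of 0-based (row, column) cells (this representation is an illustrative assumption, not from the source):

```python
def mirror_move(move, rows, cols):
    """Return the 180-degree-rotated counterpart of a domino placement.

    `move` is a pair of adjacent (row, col) cells on a rows x cols board,
    0-indexed.  Rotating through the board's center maps each cell
    (r, c) to (rows - 1 - r, cols - 1 - c).
    """
    (r1, c1), (r2, c2) = move
    rot = lambda r, c: (rows - 1 - r, cols - 1 - c)
    return (rot(r1, c1), rot(r2, c2))


# On a 6x6 board, the mirror of a domino on the top-left corner
# lands on the bottom-right corner.
print(mirror_move(((0, 0), (0, 1)), 6, 6))
```

On an even-by-even board Player 2 simply applies this map to every move Player 1 makes; on an even-by-odd board Player 1 first occupies the two central squares and then mirrors in the same way.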
https://en.wikipedia.org/wiki/Chitika
Chitika, Inc. (pronounced CHIH-tih-ka) was a search-targeted advertising company. It was located in Westborough, Massachusetts, United States. The name Chitika means "in a snap" in the Telugu language. On April 17, 2019, Chitika announced that they were shutting down their business. History In 2003, co-founders Venkat Kolluri and Alden DoRosario started Chitika after leaving their jobs at the search engine company Lycos. Since launching its online advertising service in 2004, Chitika added a Mobile Advertising Division as well as a Real-Time Bidding Division. In 2015, Chitika's founders announced Cidewalk, their local mobile ad platform, would spin off into a separate business unit with $4 million in seed funding and new office space in Southborough, MA. On April 17, 2019, Chitika wrote to publishers advising them of their immediate cessation of business. The email advised publishers to remove all Chitika code from their website. The email further advised publishers that they would not be paid for impressions or clicks occurring since March 1, 2019. Partnerships In 2009, Chitika began a partnership with the b5media Network. In 2010, Yahoo! closed their AdSense competitor Yahoo Publisher Network Online (YPNO) and recommended publishers migrate to Chitika as a replacement. In 2013, Chitika announced a multi-year extension of its partnership with Yahoo!. The agreement includes off–network search syndication, monetization of Yahoo! owned and operated properties, and mobile ad serving and monetization. Awards and recognition 2007 Inc. top free services for generating revenue on your website: Best For Promoting Ancillary Products 2008 AlwaysOn: Top 100 fastest-growing companies in the Northeast Red Herring: Leading private technology companies in North America MITX 2008 Technology Awards: Finalist, Marketing/Customer Relationship Technologies Category 2009 Red Herring: Top 100 Global 2008 Winner 2010 DPAC Award Finalist for Best Mobile Advertising N
https://en.wikipedia.org/wiki/Fusion%20mechanism
A fusion mechanism is any mechanism by which cell fusion or virus–cell fusion takes place, as well as the machinery that facilitates these processes. Cell fusion is the formation of a hybrid cell from two separate cells. There are three major actions taken in both virus–cell fusion and cell–cell fusion: the dehydration of polar head groups, the promotion of a hemifusion stalk, and the opening and expansion of pores between fusing cells. Virus–cell fusions occur during infections of several viruses that are health concerns relevant today. Some of these include HIV, Ebola, and influenza. For example, HIV infects by fusing with the membranes of immune system cells. In order for HIV to fuse with a cell, it must be able to bind to the receptors CD4, CCR5, and CXCR4. Cell fusion also occurs in a multitude of mammalian cells including gametes and myoblasts. Viral mechanisms Fusogens Proteins that allow viral or cell membranes to overcome barriers to fusion are called fusogens. Fusogens involved in virus-to-cell fusion mechanisms were the first of these proteins to be discovered. Viral fusion proteins are necessary for membrane fusion to take place. There is evidence that ancestral species of mammals may have incorporated these same proteins into their own cells as a result of infection. For this reason, similar mechanisms and machinery are utilized in cell–cell fusion. In response to certain stimuli, such as low pH or binding to cellular receptors, these fusogens will change conformation. The conformation change allows the exposure of hydrophobic regions of the fusogens that would normally be hidden internally due to energetically unfavorable interactions with the cytosol or extracellular fluid. These hydrophobic regions are known as fusion peptides or fusion loops, and they are responsible for causing localized membrane instability and fusion. Scientists have found the following four classes of fusogens to be involved with virus–cell or cell–cell fusions. Class I
https://en.wikipedia.org/wiki/Microoxygenation
Micro-oxygenation is a process used in winemaking to introduce oxygen into wine in a controlled manner. Developed in 1991 by Patrick DuCournau, working with the exceptionally tannic grape Tannat in Madiran, the process gained usage in modern winemaking following the 1996 authorization by the European Commission. Today, the technique is widely employed in Bordeaux, as well as in at least 11 other countries, including the United States and Chile. Process The process of micro-oxygenation involves a large two-chamber device with valves interconnected to a tank of oxygen. In the first chamber, the oxygen is calibrated to match the volume of the wine. In the second chamber, the oxygen is injected into the wine through a porous ceramic stone located at the bottom of the chamber. The dosage is controlled and can range anywhere from 0.75 to 3 cubic centimetres per liter of wine. The process normally occurs in multiple treatments that can last anywhere from one or two treatments during the early stages of fermentation (to help avoid stuck fermentation) to a more prolonged treatment during the maturation period that can last four to eight months. Micro-oxygenation affects colour, aromatic bouquet, mouth-feel and phenolic content. Carboxypyranoanthocyanidins can be considered markers of micro-oxygenation techniques. Benefits Exposure to oxygen during production may improve wine, but the exposure must be limited: too much oxygen can lead to oxidation while too little can lead to reduction, either one leading to its associated wine faults. In barrel aging, the natural properties of the wood allow for gentle aeration of the wine to occur over a prolonged period. This aids the polymerization of tannins into larger molecules, which can fall out of solution; because these larger polymers no longer promote protein precipitation in the mouth, the wine's astringency is softened. The process of micro-oxygenation aims to mimic the effects of slow barrel maturation in a shorter period or for lower cost. It also enable
https://en.wikipedia.org/wiki/Camassa%E2%80%93Holm%20equation
In fluid dynamics, the Camassa–Holm equation is the integrable, dimensionless and non-linear partial differential equation u_t + 2κ u_x − u_{xxt} + 3 u u_x = 2 u_x u_{xx} + u u_{xxx}. The equation was introduced by Roberto Camassa and Darryl Holm as a bi-Hamiltonian model for waves in shallow water, and in this context the parameter κ is positive and the solitary wave solutions are smooth solitons. In the special case that κ is equal to zero, the Camassa–Holm equation has peakon solutions: solitons with a sharp peak, so with a discontinuity at the peak in the wave slope. Relation to waves in shallow water The Camassa–Holm equation can also be written as a system of equations involving the (dimensionless) pressure or surface elevation p. This shows that the Camassa–Holm equation is a model for shallow water waves with non-hydrostatic pressure and a water layer on a horizontal bed. The linear dispersion characteristics of the Camassa–Holm equation are ω = 2κk / (1 + k²), with ω the angular frequency and k the wavenumber. Not surprisingly, this is similar in form to that of the Korteweg–de Vries equation, provided κ is non-zero. For κ equal to zero, the Camassa–Holm equation has no frequency dispersion — moreover, the linear phase speed is zero for this case. As a result, 2κ is the phase speed in the long-wave limit of k approaching zero, and the Camassa–Holm equation is (if κ is non-zero) a model for one-directional wave propagation like the Korteweg–de Vries equation. Hamiltonian structure Introducing the momentum m, two compatible Hamiltonian descriptions of the Camassa–Holm equation can be given. Integrability The Camassa–Holm equation is an integrable system. Integrability means that there is a change of variables (action-angle variables) such that the evolution equation in the new variables is equivalent to a linear flow at constant speed. This change of variables is achieved by studying an associated isospectral/scattering problem, and is reminiscent of the fact that integrable classical Hamiltonian systems are equivalent to linear f
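The linear dispersion characteristics mentioned above can be recovered by linearizing about u = 0 and substituting a plane wave. A sketch, assuming the standard form of the equation with parameter κ (the normalization is an assumption):

```latex
% Linearized Camassa–Holm equation: u_t + 2\kappa u_x - u_{xxt} = 0.
% Substituting a plane-wave ansatz:
\begin{aligned}
u(x,t) &= \varepsilon\, e^{i(kx - \omega t)} \\
\Rightarrow\quad
-i\omega + 2i\kappa k - i\omega k^{2} &= 0
\quad\Longrightarrow\quad
\omega(k) = \frac{2\kappa k}{1 + k^{2}} .
\end{aligned}
```

The phase speed ω/k = 2κ/(1 + k²) vanishes identically when κ = 0, consistent with the absence of frequency dispersion noted in that case.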
https://en.wikipedia.org/wiki/Mosaiculture
Mosaiculture is the horticultural art of creating giant topiary-like sculptures using thousands of annual bedding plants to carpet steel armature forms. It is different from classical topiary. Mosaïcultures Internationales® is the name of an international competition governed by the International Mosaiculture Committee, which was formed in 2000, the first year the event was staged. Mosaïcultures Internationales® is an internationally protected name and patent. In 2013 an international competition in Mosaicultures was held in Montreal, Canada. As part of Canada's 150th anniversary celebrations in 2017, a large exhibition of Mosaiculture was held at Jacques Cartier Park in Gatineau, Quebec. MOSAICANADA150 featured sculptures representing Canada's 10 provinces and 3 territories, and indigenous peoples. In 2018, many of the sculptures will be moved to their home province to be displayed. Founder Lise Cormier, head of the City of Montréal's Parks, Gardens and Green Spaces Department and the Botanical Garden, first got the idea to launch an international mosaiculture competition in 1998. History of exhibits 2000 – World premiere of Mosaïcultures Internationales® in Montréal Theme: The Planet is a Mosaic Participants: 35 cities and organizations from 14 countries Visitors: 730,000 (110 days) 2003 – Mosaïcultures Internationales Montréal 2003 Theme: Myths and Legends of the World Participants: 51 cities and organizations from 32 countries Visitors: 755,000 (110 days) 2006 – Mosaïcultures Internationales Shanghai 2006 Theme: The Earth, Our Village Participants: 55 cities and organizations from 15 countries Visitors: Over 1,000,000 (76 days) 2009 – Mosaïcultures Internationales Hamamatsu 2009 Under the honorary presidency of His Imperial Highness Prince Akishino Theme: The Symphony of People and Nature Participants: 97 cities and organizations from 25 countries Visitors: 865,000 (66 days) 2013 – Mosaïcultures Internationales de Montréal 2013 Theme: Lan
https://en.wikipedia.org/wiki/Mac%20operating%20systems
Two major families of Mac operating systems were developed by Apple Inc. In 1984, Apple debuted the operating system that is now known as the "Classic" Mac OS with its release of the original Macintosh System Software. The system, rebranded "Mac OS" in 1997, was pre-installed on every Macintosh until 2002 and offered on Macintosh clones for a short time in the 1990s. Noted for its ease of use, it was also criticized for its lack of modern technologies compared to its competitors. The current Mac operating system is macOS, originally named "Mac OS X" until 2012 and then "OS X" until 2016. Developed between 1997 and 2001 after Apple's purchase of NeXT, Mac OS X brought an entirely new architecture based on NeXTSTEP, a Unix system, that eliminated many of the technical challenges that the classic Mac OS faced. The current macOS is pre-installed with every Mac and receives a major update annually. It is the basis of Apple's current system software for its other devices – iOS, iPadOS, watchOS, and tvOS. Prior to the introduction of Mac OS X, Apple experimented with several other concepts, releasing different products designed to bring the Macintosh interface or applications to Unix-like systems or vice versa: A/UX, MAE, and MkLinux. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek, Taligent, and Copland. Although they have different architectures, Mac operating systems share a common set of GUI principles, including a menu bar across the top of the screen; the Finder shell, featuring a desktop metaphor that represents files and applications using icons and relates concepts like directories and file deletion to real-world objects like folders and a trash can; and overlapping windows for multitasking. Classic Mac OS The "classic" Mac OS is the original Macintosh operating system that was introduced in 1984 alongside the first Macintosh and remained in primary use on Macs
https://en.wikipedia.org/wiki/House%20call
A house call is a medical consultation performed by a doctor or other healthcare professionals visiting the home of a patient or client, instead of the patient visiting the doctor's clinic or hospital. In some locations, families used to pay dues to a particular practice to underwrite house calls. History In the early 1930s, house calls by doctors were 40% of doctor-patient meetings. By 1980, the figure was only 0.6%. Reasons include increased specialization and technology. In the 1990s, team home care, including physician visits, was a small but growing field in health care, for frail older people with chronic illnesses. The reasons for fewer house calls include concerns about providing low-overhead care in the home, time inefficiency, and inconvenience. Yet a growing number of doctors are attracted by the idea of practicing without office overhead. House calls can also provide safe access to care for people who are ill. Today, house calls may be making a revival among the wealthy through concierge telemedicine and mobile apps. Canada In 2012, as part of its Action Plan for Healthcare, the province of Ontario actively expanded funding for access to house calls, with a primary focus on seniors and those whose physical limitations make travel outside the home difficult. Residents of Ontario with valid Ontario Health Insurance Plan cards are able to take advantage of the house call system, and arrange for appointments with physicians at their home. Currently, this service is only available in Toronto. United States In the United States, leaders such as George Washington were known to receive house calls; President Washington received a house call on his deathbed in 1799. Presently, the United States retains a Physician to the President on staff. Midwifery The US rate of out-of-hospital birth has remained steady at 1% of all births since 1989, with data from 2007 showing that 27.3% of the home births since 1989 took place in a free-standing birt
https://en.wikipedia.org/wiki/Reid%20vapor%20pressure
Reid vapor pressure (RVP) is a common measure of the volatility of gasoline and other petroleum products. It is defined as the absolute vapor pressure exerted by the vapor of the liquid and any dissolved gases/moisture at 37.8 °C (100 °F) as determined by the test method ASTM-D-323, which was first developed in 1930 and has been revised several times (the latest version is ASTM D323-15a). The test method measures the vapor pressure of gasoline, volatile crude oil, jet fuels, naphtha, and other volatile petroleum products but is not applicable for liquefied petroleum gases. ASTM D323-15a requires that the sample be chilled to 0 to 1 degrees Celsius and then poured into the apparatus; for any material that solidifies at this temperature, this step cannot be performed. RVP is commonly reported in kilopascals (kPa) or pounds per square inch (psi) and represents volatilization at atmospheric pressure because ASTM-D-323 measures the gauge pressure of the sample in a non-evacuated chamber. Vapor pressure is important to the function and operation of gasoline-powered vehicles, especially carbureted ones, as well as for many other reasons. High levels of vaporization are desirable for winter starting and operation and lower levels are desirable in avoiding vapor lock during summer heat. Fuel cannot be pumped when there is vapor in the fuel line (summer) and winter starting will be more difficult when liquid gasoline in the combustion chambers has not vaporized. Thus, oil refineries manipulate the Reid vapor pressure seasonally to maintain gasoline engine reliability. The Reid vapor pressure (RVP) can differ substantially from the true vapor pressure (TVP) of a liquid mixture, since (1) RVP is the vapor pressure measured at 37.8 °C (100 °F) and the TVP is a function of the temperature; (2) RVP is defined as being measured at a vapor-to-liquid ratio of 4:1, whereas the TVP of mixtures can depend on the actual vapor-to-liquid r
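Because RVP figures circulate in both kPa and psi, a small conversion helper is often useful. A minimal Python sketch (the function names and the example value are illustrative assumptions, not from the source):

```python
# Conversion factor: kilopascals per pound-force per square inch.
PSI_TO_KPA = 6.894757

def psi_to_kpa(rvp_psi: float) -> float:
    """Convert a Reid vapor pressure reading from psi to kPa."""
    return rvp_psi * PSI_TO_KPA

def kpa_to_psi(rvp_kpa: float) -> float:
    """Convert a Reid vapor pressure reading from kPa to psi."""
    return rvp_kpa / PSI_TO_KPA

# Example: a 9.0 psi reading expressed in kPa.
print(round(psi_to_kpa(9.0), 2))
```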
https://en.wikipedia.org/wiki/Crackpot%20index
The Crackpot Index is a number that rates scientific claims or the individuals who make them, in conjunction with a method for computing that number. It was proposed by John C. Baez in 1992, and updated in 1998. While the index was created for its humorous value, the general concepts can be applied in other fields like risk management. Baez's crackpot index The method was initially proposed semi-seriously by mathematical physicist John C. Baez in 1992, and then revised in 1998. The index used responses to a list of 37 questions, each positive response contributing a point value ranging from 1 to 50; the computation is initialized with a value of −5. An earlier version only had 17 questions with point values for each ranging from 1 to 40. The New Scientist published a claim in 1992 that the creation of the index was "prompted by an especially striking outburst from a retired mathematician insisting that TIME has INERTIA". Baez later confirmed in a 1993 letter to New Scientist that he created the index. The index was later published in Skeptic magazine, with an editor's note saying "we know that outsiders to a field can make important contributions and even lead revolutions. But the chances of that happening are rather slim, especially when they meet many of the [Crackpot index] criteria". Though the index was not proposed as a serious method, it nevertheless has become popular in Internet discussions of whether a claim or an individual is cranky, particularly in physics (e.g., at the Usenet newsgroup sci.physics), or in mathematics. Chris Caldwell's Prime Pages has a version adapted to prime number research, a field with many famous unsolved problems that are easy for amateur mathematicians to understand. Gruenberger's measure for crackpots An earlier crackpot index is Fred J. Gruenberger's "A Measure for Crackpots" published in December 1962 by the RAND Corporation. See also Crank (person) List of topics characterized as pseudoscience Pseudophy
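The scoring scheme described above (start at −5, add the point value of every criterion a claim matches) is simple enough to express in code. A minimal Python sketch; the function name is an assumption, and the actual texts of the 37 criteria are omitted:

```python
def crackpot_index(matched_point_values):
    """Score a claim in the style of Baez's 1998 index.

    `matched_point_values` lists the point value (1 to 50) of each
    criterion the claim matches; the running total starts at -5.
    """
    for points in matched_point_values:
        if not 1 <= points <= 50:
            raise ValueError("criterion point values range from 1 to 50")
    return -5 + sum(matched_point_values)


# A claim matching no criteria scores -5.
print(crackpot_index([]))
```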
https://en.wikipedia.org/wiki/Presentation%20%28medical%29
In medicine, a presentation is the appearance in a patient of illness or disease—or signs or symptoms thereof—before a medical professional. In practice, one usually speaks of a patient as presenting with this or that. Examples include: "...Many depressed patients present with medical rather than psychiatric complaints, and those who present with medical complaints are twice as likely to be misdiagnosed as those who present with psychiatric complaints." "...In contrast, poisonings from heavy metal can be subtle and present with a slowly progressive course." "...Some patients present with small unobstructed kidneys, when the diagnosis is easy to miss." "...A total of 7,870,266 patients presented to a public hospital ED from 1 July 2017 to 30 June 2018." See also Presentation (obstetrics)
https://en.wikipedia.org/wiki/Birla%20Industrial%20%26%20Technological%20Museum
Birla Industrial & Technological Museum (BITM) is a science museum in Kolkata, West Bengal, India. It is a unit under National Council of Science Museums (NCSM), Ministry of Culture, Government of India. BITM, which initially functioned under the Council of Scientific & Industrial Research (CSIR), is commonly recognized as the precursor of India's science museum movement. History The site of the Birla Industrial & Technological Museum, at 19A Gurusaday Road, was known as 18 Ballygunge Store Road until 1919. The Tagores purchased it from Mirza Abdul Karim in 1898. Meera Devi, the fourth of Rabindranath Tagore's five children, spent much of her childhood in this residence. G.D. Birla purchased the land from Surendranath Tagore in 1919, and it became known as Birla Park. Dr. Bidhan Chandra Roy, the former Chief Minister of West Bengal, was inspired to establish a similar institution in India to engage citizens with science and technology after touring the Deutsches Museum in Munich. India's prime minister, Pandit Jawaharlal Nehru, and the entrepreneur Shri Ghanshyam Das Birla supported and encouraged his efforts in this area. Birla Park, Birla's mansion and the surrounding block of land in Calcutta's affluent Ballygunge neighborhood, was donated to the CSIR to establish an industrial and technological museum; Pandit Nehru accepted the donation from Shri G. D. Birla in 1956. The journey from the government of India taking over Birla Park in 1956 to the inauguration of the Museum in 1959 was demanding. The development of India's first science museum under the aegis of the central government was the product of meticulous research and diligent work by the Museum's steering group, which was led by Dr. B. C. Roy himself and included several notable scientists, educators, and entrepreneurs. Prof. Humayun Kabir, the then union minister for scientific
https://en.wikipedia.org/wiki/Water%20integrator
The Water Integrator (Gidravlicheskiy integrator) was an early analog computer built in the Soviet Union in 1936 by Vladimir Sergeevich Lukyanov. It functioned by careful manipulation of water through a room full of interconnected pipes and pumps. The water level in various chambers (with precision to fractions of a millimeter) represented stored numbers, and the rate of flow between them represented mathematical operations. This machine was capable of solving inhomogeneous differential equations. The first versions of Lukyanov's integrators were rather experimental, made of tin and glass tubes, and each integrator could be used to solve only one problem. In the 1930s it was the only computer in the Soviet Union for solving partial differential equations. In 1941, Lukyanov created a hydraulic integrator of modular design, which made it possible to assemble a machine for solving various problems. Two-dimensional and three-dimensional hydraulic integrators were designed. In 1949–1955, an integrator in the form of standard unified units was developed at the NIISCHETMASH Institute. In 1955, the Ryazan plant of calculating and analytical machines began the serial production of integrators with the factory brand name “IGL” (Russian: Интегратор Гидравлический Лукьянова, "integrator of the Lukyanov hydraulic system"). Integrators were widely distributed, delivered to Czechoslovakia, Poland, Bulgaria and China. A water integrator was used in the design of the Karakum Canal in the 1940s, and the construction of the Baikal–Amur Mainline in the 1970s. Water analog computers were used in the Soviet Union until the 1980s for large-scale modelling. They were used in geology, mine construction, metallurgy, rocket production and other fields. Currently, two hydraulic integrators are kept in the Polytechnic Museum in Moscow. See also History of computing hardware MONIAC Computer Fluidics
https://en.wikipedia.org/wiki/Hut%203
Hut 3 was a section of the Government Code and Cypher School (GC&CS) at Bletchley Park during World War II. It retained the name for its functions when it moved into Block D. It produced military intelligence codenamed ULTRA from the decrypts of Enigma, Tunny and multiple other sources. Hut 3 thus became an intelligence agency in its own right, providing information of great strategic value, but rarely of operational use. Group Captain Eric Malcolm Jones led this activity from 1943 and after the war became deputy director, and in 1952 director of GCHQ. In July 1945, General Dwight D. Eisenhower, Supreme Commander of Allied forces, wrote to Sir Stewart Menzies, Chief of the British Secret Intelligence Service (MI6), saying inter alia: Development The "German Army and Air Force Enigma Reporting Section" was set up in January 1940. That name, however, was soon dropped in favour of "Hut 3" as a description both of the location and of the functions, and this was retained when, in February 1943, it moved into Block D. Its functions became very much more than just the translation, interpretation and distribution of German Army and Air Force Enigma messages deciphered by Hut 6. By the time of D-Day in June 1944 Hut 3 was synthesising a torrent of signals intelligence ("SIGINT") data from multiple sources and producing an outgoing flood of useful intelligence. David Kenyon, Research Historian at Bletchley Park, has been able to access a number of unpublished sources, in particular "The History of Hut Three", a GCHQ document in The National Archives (HW3/119), for his 2019 book "Bletchley Park and D-Day: The Untold Story of How the Battle for Normandy Was Won". Initially, there were serious personal frictions between the four main people. They were the original leader, Lieutenant-Commander Malcolm Saunders, Squadron Leader Robert Humphreys (senior liaison officer with the Air Force), Captain Curtis (senior liaison officer with the War Office, who knew no German), and Cambridge academic F.
https://en.wikipedia.org/wiki/Secondary%20sensory%20endings
Within the sensory nervous system, secondary sensory endings of the muscle spindle are composed of type II sensory fibers that terminate on nuclear chain fibers and static nuclear bag fibers, but not dynamic nuclear bag fibers. Whereas primary endings respond mostly to the rate of change of stretch, secondary endings respond mostly to the amount of stretch.
https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle. History The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall S
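The algebra-to-ring correspondence described above can be checked directly on the two-element Boolean algebra, taking ring multiplication as conjunction (meet) and ring addition as exclusive disjunction. A small illustrative sketch, not from the source:

```python
from itertools import product

def ring_add(x: bool, y: bool) -> bool:
    """Ring addition: exclusive disjunction (symmetric difference)."""
    return x != y

def ring_mul(x: bool, y: bool) -> bool:
    """Ring multiplication: conjunction (meet)."""
    return x and y

# Every element is multiplicatively idempotent (x*x = x), and the
# lattice join is recoverable from the ring operations:
# x or y == x + y + x*y.
for x, y in product([False, True], repeat=2):
    assert ring_mul(x, x) == x
    assert (x or y) == ring_add(ring_add(x, y), ring_mul(x, y))

print("Boolean ring identities hold on {False, True}")
```

The last identity shows why the ring and the algebra carry the same information: disjunction, though not a ring operation itself, is definable from symmetric difference and meet.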
https://en.wikipedia.org/wiki/European%20List%20of%20Notified%20Chemical%20Substances
The European List of Notified Chemical Substances (ELINCS) provides an EINECS number. The system was used by the European Union to identify commercially available chemical substances. Since 1 June 2007 EU Members, Liechtenstein, Iceland and Norway apply the REACH protocol (Registration, Evaluation, Authorisation of Chemicals).
https://en.wikipedia.org/wiki/Burns%20Archive
The Burns Archive is the world’s largest private collection of early medical photography and historic photographs, housing over one million photographs. While it primarily contains images related to medical practices, it is also famous for photographs depicting 'the darker side of life'. Other themes prevalent throughout the collection involve death, crime, racism, and war. About The Archive is known as one of the world’s most important repositories of early medical history; images of “the darker side of life” make up the collection: anatomical and medical oddities, memorial and post-mortem photography, and original historic photographs depicting death, disease, disaster, crime, racism, revolution, riots, and war. The collection traces the history of photography, from its beginnings in 1839 to the 1950s, and includes hundreds of thousands of daguerreotypes, ambrotypes, tintypes, cartes de visite, and hand-colored photographs. The Burns Archive actively acquires, donates, researches, lectures, exhibits, consults, and shares its rare and unusual photographs and expertise worldwide. The Archive’s medical collection houses photographs in the categories of pioneers and innovators, operative scenes, therapy and treatments, disease and pathology, medical specialties, interesting cases and medical curiosities, hospitals and wards, nursing, alternative practitioners, anatomy and education, laboratories and doctors’ offices, medicine and war, and more. Many of these collected pictures allowed the medical community of the era to share knowledge and define pathology. The Archive's historical collection ranges from categories of death and memorial, war and conflict, and crime and punishment, to occupations and industry, social and cultural history, photographic history, Judaica, Egyptology, ethnology, folk, and African American history. The collection has been featured in over 100 exhibitions at museums and galleries worldwide, including New York’s Metropolitan Museum of Art and Paris' Mus
https://en.wikipedia.org/wiki/Atmospheric%20model
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthens the time steps used and increases computational speed. Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues. Types The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated usi
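As a toy illustration of the finite-difference methods mentioned above (all grid and wind parameters here are hypothetical, and this is vastly simpler than any real atmospheric model), one can advect a one-dimensional temperature anomaly with a first-order upwind scheme:

```python
import numpy as np

# Hypothetical 1-D grid with a constant wind; a real model solves the full
# primitive equations in three dimensions with parameterized physics.
nx, dx = 100, 1000.0        # grid points and spacing (m)
u, dt = 10.0, 50.0          # wind speed (m/s) and time step (s)
c = u * dt / dx             # Courant number = 0.5 (stable for upwind)

T = np.zeros(nx)
T[40:60] = 1.0              # initial temperature anomaly

for _ in range(30):
    # first-order upwind difference for u > 0: dT/dt = -u (T_i - T_{i-1}) / dx
    T[1:] = T[1:] - c * (T[1:] - T[:-1])

# The anomaly's centre of mass drifts downstream by c cells per step
# (30 * 0.5 = 15 cells), while numerical diffusion smears the sharp edges.
com = (T * np.arange(nx)).sum() / T.sum()
```

The Courant number illustrates the trade-off noted in the text: schemes and assumptions that permit longer time steps directly increase computational speed, at the price of accuracy or stability constraints.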
https://en.wikipedia.org/wiki/Maximum%20entropy%20spectral%20estimation
Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in both statistical mechanics and information theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model. In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy. Method description In the periodogram approach to calculating the power spectra, the sample autocorrelation function is multiplied by some window function and then Fourier transformed. The window is applied to provide statistical stability as well as to avoid leakage from other parts of the spectrum. However, the window limits the spectral resolution. Maximum entropy method attempts to improve the spectral resolution by extrapolating the correlation function beyond the maximum lag in such a way that the entropy of the correspond
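A classic route to the maximum entropy spectrum is Burg's method, which fits an autoregressive model whose spectrum matches the measured autocorrelations while maximizing entropy beyond the last known lag. The following NumPy sketch of the standard Burg recursion is illustrative (the test signal and model order are hypothetical; production implementations differ in details):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's recursion: fit an AR prediction-error filter by minimising
    the combined forward and backward prediction error at each order."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])                     # filter coefficients, leading 1
    E = x.dot(x) / len(x)                   # prediction-error power
    ef, eb = x[1:].copy(), x[:-1].copy()    # forward / backward errors
    for _ in range(order):
        k = -2.0 * ef.dot(eb) / (ef.dot(ef) + eb.dot(eb))  # reflection coeff.
        ap = np.concatenate([a, [0.0]])
        a = ap + k * ap[::-1]               # Levinson-style coefficient update
        E *= 1.0 - k * k
        ef, eb = ef[1:] + k * eb[1:], eb[:-1] + k * ef[:-1]
    return a, E

def me_spectrum(a, E, freqs):
    """Maximum entropy PSD: P(f) = E / |A(f)|^2, A(f) = sum_j a_j e^(-2*pi*i*f*j)."""
    idx = np.arange(len(a))
    A = np.exp(-2j * np.pi * np.outer(freqs, idx)) @ a
    return E / np.abs(A) ** 2

# Hypothetical demo: a sinusoid at 0.2 cycles/sample in weak noise.
rng = np.random.default_rng(0)
n = np.arange(400)
x = np.sin(2 * np.pi * 0.2 * n) + 0.01 * rng.standard_normal(n.size)
a, E = burg_ar(x, order=4)
freqs = np.linspace(0.0, 0.5, 1001)
peak = freqs[np.argmax(me_spectrum(a, E, freqs))]   # sharp peak near 0.2
```

Unlike the windowed periodogram, the resulting spectrum is not resolution-limited by the record length: a low-order AR fit already produces a sharp spectral line at the sinusoid's frequency.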
https://en.wikipedia.org/wiki/Weyl%E2%80%93von%20Neumann%20theorem
In mathematics, the Weyl–von Neumann theorem is a result in operator theory due to Hermann Weyl and John von Neumann. It states that, after the addition of a compact operator or Hilbert–Schmidt operator of arbitrarily small norm, a bounded self-adjoint operator or unitary operator on a Hilbert space is conjugate by a unitary operator to a diagonal operator. The results are subsumed in later generalizations for bounded normal operators due to David Berg (1971, compact perturbation) and Dan-Virgil Voiculescu (1979, Hilbert–Schmidt perturbation). The theorem and its generalizations were one of the starting points of operator K-homology, developed first by Lawrence G. Brown, Ronald Douglas and Peter Fillmore and, in greater generality, by Gennadi Kasparov. In 1958 Kuroda showed that the Weyl–von Neumann theorem is also true if the Hilbert–Schmidt class is replaced by any Schatten class Sp with p ≠ 1. For S1, the trace-class operators, the situation is quite different. The Kato–Rosenblum theorem, proved in 1957 using scattering theory, states that if two bounded self-adjoint operators differ by a trace-class operator, then their absolutely continuous parts are unitarily equivalent. In particular if a self-adjoint operator has absolutely continuous spectrum, no perturbation of it by a trace-class operator can be unitarily equivalent to a diagonal operator.
https://en.wikipedia.org/wiki/Sufficient%20statistic
In statistics, a statistic is sufficient with respect to a statistical model and its associated unknown parameter if "no other statistic that can be calculated from the same sample provides any additional information as to the value of the parameter". In particular, a statistic is sufficient for a family of probability distributions if the sample from which it is calculated gives no more information than the statistic does, as to which of those probability distributions is the sampling distribution. A related concept is that of linear sufficiency, which is weaker than sufficiency but can be applied in some cases where there is no sufficient statistic, although it is restricted to linear estimators. The Kolmogorov structure function deals with individual finite data; the related notion there is the algorithmic sufficient statistic. The concept is due to Sir Ronald Fisher in 1920. Stephen Stigler noted in 1973 that the concept of sufficiency had fallen out of favor in descriptive statistics because of the strong dependence on an assumption of the distributional form (see Pitman–Koopman–Darmois theorem below), but remained very important in theoretical work. Background Roughly, given a set of independent identically distributed data conditioned on an unknown parameter θ, a sufficient statistic is a function T(X) whose value contains all the information needed to compute any estimate of the parameter (e.g. a maximum likelihood estimate). Due to the factorization theorem (see below), for a sufficient statistic T(X), the probability density can be written as f_θ(x) = h(x) g_θ(T(x)). From this factorization, it can easily be seen that the maximum likelihood estimate of θ will interact with the data X only through T(X). Typically, the sufficient statistic is a simple function of the data, e.g. the sum of all the data points. More generally, the "unknown parameter" may represent a vector of unknown quantities or may represent everything about the model that is unknown or not fully specified. In such a case, the
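As an illustration (not from the article), the sum of i.i.d. Bernoulli observations is sufficient for the success probability: two samples with the same sum yield identical likelihood functions, so the maximum likelihood estimate depends on the data only through that sum.

```python
import numpy as np

def bernoulli_likelihood(sample, theta):
    """Likelihood of i.i.d. Bernoulli(theta) data; it depends on the sample
    only through n and the sufficient statistic T = sum of the sample."""
    t, n = sum(sample), len(sample)
    return theta**t * (1 - theta)**(n - t)

x = [1, 0, 1, 1, 0]          # T(x) = 3
y = [0, 1, 1, 0, 1]          # different ordering, same T(y) = 3
thetas = np.linspace(0.01, 0.99, 99)

# Same sufficient statistic => identical likelihood functions of theta:
assert np.allclose(bernoulli_likelihood(x, thetas),
                   bernoulli_likelihood(y, thetas))

# The MLE depends on the data only through T: the argmax sits at T/n = 0.6.
mle = thetas[np.argmax(bernoulli_likelihood(x, thetas))]
```

This is the factorization f_θ(x) = h(x) g_θ(T(x)) in miniature: here h(x) = 1 and g_θ depends only on T(x) = Σx_i.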
https://en.wikipedia.org/wiki/Institute%20of%20Zoology
The Institute of Zoology (IoZ) is the research division of the Zoological Society of London (ZSL) in England. It is a government-funded research institute specialising in scientific issues relevant to the conservation of animal species and their habitats. The Institute is based alongside London Zoo at ZSL's Regent's Park site in the City of Westminster. The Institute has around 25 full-time research staff, plus postdoctoral research assistants, technicians and Ph.D. students. The Institute is supported by the Higher Education Funding Council for England in partnership with the University of Cambridge, and receives additional research funding from UK research councils (NERC, BBSRC, ESRC) and research charities (the Wellcome Trust and the Leverhulme Trust). Research covers many fundamental aspects of biological sciences which have relevance to the conservation of animal species and their habitats. The Institute offers research training through Ph.D. studentships, and hosts Undergraduate and Masters level research projects conducted as part of its own M.Sc. courses and courses at other institutions. Undergraduate projects are available at London Zoo and Whipsnade Zoo. See also Living Planet Index Red List Index Regional Red List EDGE of Existence Programme EDGE Species
https://en.wikipedia.org/wiki/Thermally%20activated%20delayed%20fluorescence
Thermally activated delayed fluorescence (TADF) is a process through which a molecular species in a non-emitting excited state can incorporate surrounding thermal energy to change states and only then undergo light emission. The TADF process involves an excited molecular species in a triplet state, which commonly has a forbidden transition to the ground state termed phosphorescence. By absorbing nearby thermal energy the triplet state can undergo reverse intersystem crossing (RISC) converting it to a singlet state, which can then de-excite to the ground state and emit light in a process termed fluorescence. Along with fluorescent and phosphorescent compounds, TADF compounds are one of the three main light-emitting materials used in organic light-emitting diodes (OLEDs). Another type of TADF process has been shown to originate from conformational trapping to a dark state. Thermal energy allows the repopulation of the emissive state resulting in a delayed fluorescence. History The first evidence of thermally activated delayed fluorescence in a fully organic molecule was discovered in 1961 using the compound eosin. The emission that was detected was termed "E-type" delayed fluorescence and the mechanism was not completely understood. In 1986, the TADF mechanism was further investigated and described in detail using aromatic thiones, but it was not until much later that a practical application was identified. From 2009 to 2012 Adachi and coworkers published a series of papers reporting effective TADF molecular design strategies and competitive external quantum efficiencies (EQE) for green, orange, and blue OLEDs. These publications spiked interest in the topic and TADF compounds were soon considered a possible higher efficiency alternative to traditional fluorescent and phosphorescent compounds used in lighting and displays. TADF materials are being considered the third generation of OLEDs following fluorescent and phosphorescent based devices. Mechanism The steps
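The thermal activation step is often summarized by an Arrhenius-type Boltzmann factor exp(−ΔE_ST / k_B·T) scaling the RISC rate, where ΔE_ST is the singlet–triplet energy gap. The sketch below (gap values are illustrative, not from the text) shows why TADF emitter design targets very small gaps:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def thermal_activation_factor(delta_e_st_ev, temperature_k=300.0):
    """Arrhenius-type Boltzmann factor exp(-dE_ST / kB*T) that scales the
    reverse intersystem crossing (RISC) rate at a given temperature."""
    return math.exp(-delta_e_st_ev / (K_B * temperature_k))

# A small singlet-triplet gap (~0.05 eV, a common TADF design target)
# versus a conventional fluorophore's large gap (~0.5 eV) at room temp:
small_gap = thermal_activation_factor(0.05)   # appreciable RISC
large_gap = thermal_activation_factor(0.5)    # RISC effectively frozen out
```

At 300 K, k_B·T ≈ 0.026 eV, so shrinking the gap from 0.5 eV to 0.05 eV boosts the activation factor by many orders of magnitude, which is why RISC (and hence delayed fluorescence) becomes practical only for small-gap molecules.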
https://en.wikipedia.org/wiki/Sun%20outage
A Sun outage, Sun transit, or Sun fade is an interruption in or distortion of geostationary satellite signals caused by interference (background noise) of the Sun when it falls directly behind a satellite which an Earth station is trying to receive data from or transmit data to. It usually occurs briefly to such satellites twice per year and such Earth stations install temporary or permanent guards to their receiving systems to prevent equipment damage. Sun outages occur before the March equinox (in February and March) and after the September equinox (in September and October) for the Northern Hemisphere, and occur after the March equinox and before the September equinox for the Southern Hemisphere. At these times, the apparent path of the Sun across the sky takes it directly behind the line of sight between an Earth station and a satellite. The Sun radiates strongly across the entire spectrum, including the microwave frequencies used to communicate with satellites (C band, Ku band, and Ka band), so the Sun swamps the signal from the satellite. The effects of a Sun outage range from partial degradation (increase in the error rate) to the total destruction of the signal. The effect sweeps from north to south from approximately 20 February to 20 April, and from south to north from approximately 20 August to 20 October, affecting any specific location for less than 12 minutes a day for a few consecutive days. Effect on Indian stock exchanges In India, the BSE (Bombay Stock Exchange) and NSE (National Stock Exchange) use VSATs (Very Small Aperture Terminals) for members (e.g. stockbrokers) to connect to their trading systems. VSATs depend upon satellites for connectivity between the terminals/systems. Hence, these exchanges are, with considerable predictability, affected by the annual Sun outages. Both typically close from 11:45 to 12:30 during "Sun outages" — times vary depending on the Earth's orbit and satellites' exact locations. The interference to satellites' s
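A back-of-envelope estimate (not from the article) of the daily outage duration follows from the Sun's apparent drift of about 0.25° per minute past the antenna boresight; the dish beamwidth values below are hypothetical:

```python
SUN_DRIFT_DEG_PER_MIN = 360.0 / (24 * 60)   # apparent solar motion, 0.25 deg/min
SUN_DIAMETER_DEG = 0.53                      # apparent angular diameter of the Sun

def outage_minutes(beamwidth_deg):
    """Rough daily outage duration: the time the Sun spends within the
    dish's beamwidth (plus the Sun's own apparent width) of the satellite."""
    return (beamwidth_deg + SUN_DIAMETER_DEG) / SUN_DRIFT_DEG_PER_MIN

# A hypothetical 1-2 degree receive beamwidth gives roughly 6-10 minutes,
# consistent with the "less than 12 minutes a day" figure above.
short_outage = outage_minutes(1.0)
long_outage = outage_minutes(2.0)
```

Narrower (higher-gain) dishes thus see shorter but more intense outages, since the Sun crosses a smaller beam more quickly.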
https://en.wikipedia.org/wiki/Urysohn%27s%20lemma
In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function. Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem. The lemma is named after the mathematician Pavel Samuilovich Urysohn. Discussion Two subsets A and B of a topological space X are said to be separated by neighbourhoods if there are neighbourhoods U of A and V of B that are disjoint. In particular A and B are necessarily disjoint. Two plain subsets A and B are said to be separated by a continuous function if there exists a continuous function f from X into the unit interval [0, 1] such that f(a) = 0 for all a in A and f(b) = 1 for all b in B. Any such function is called a Urysohn function for A and B. In particular A and B are necessarily disjoint. It follows that if two subsets A and B are separated by a function then so are their closures. Also it follows that if two subsets A and B are separated by a function then A and B are separated by neighbourhoods. A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function. The sets A and B need not be precisely separated by f, i.e., it is not required, and in general not guaranteed, that f(x) ≠ 0 and f(x) ≠ 1 for x outside A and B. A topological space in which every two disjoint closed subsets A and B are precisely separated by a continuous function is perfectly normal. Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff. Formal statement
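The standard formal statement, in its usual textbook formulation, can be rendered as:

```latex
\textbf{Urysohn's lemma.} A topological space $X$ is normal if and only if,
for every pair of disjoint closed sets $A, B \subseteq X$, there exists a
continuous function $f \colon X \to [0,1]$ such that
\[
  f(a) = 0 \quad \text{for all } a \in A,
  \qquad
  f(b) = 1 \quad \text{for all } b \in B .
\]
```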
https://en.wikipedia.org/wiki/UL%20%28safety%20organization%29
The UL enterprise is a global safety science company headquartered in Northbrook, Illinois, composed of three organizations, UL Research Institutes, UL Standards & Engagement and UL Solutions. Established in 1894, the UL enterprise was founded as the Underwriters' Electrical Bureau (a bureau of the National Board of Fire Underwriters), and was known throughout the 20th century as Underwriters Laboratories. On January 1, 2012, Underwriters Laboratories became the parent company of a for-profit company in the U.S. named UL LLC, a limited liability corporation, which took over the product testing and certification business. On June 26, 2022, the companies rebranded into three distinct organizations that make up the UL enterprise. The company is one of several companies approved to perform safety testing by the U.S. federal agency Occupational Safety and Health Administration (OSHA). OSHA maintains a list of approved testing laboratories, which are known as Nationally Recognized Testing Laboratories. According to Lifehacker, UL Solutions is the best known product safety and certification organization globally. History Underwriters Laboratories Inc. was founded in 1894 by William Henry Merrill. After graduating from the Massachusetts Institute of Technology (MIT) with a degree in electrical engineering in 1889, Merrill went to work as an electrical inspector for the Boston Board of Fire Underwriters. At the turn of the twentieth century, fire loss was on the rise in the United States, and the increasing use of electricity in homes and businesses posed a serious threat to property and human life. In order to determine and mitigate risk, Merrill proposed to open a laboratory where he would use scientific principles to test products for fire and electrical safety. The Boston Board of Fire Underwriters turned this idea down, perhaps due to Merrill's youth and relative inexperience at the time. In May 1893, Merrill moved to Chicago to work for the Chicago Fire Underwr
https://en.wikipedia.org/wiki/Berlin%20Mathematical%20School
The Berlin Mathematical School (BMS) is a joint graduate school of the three renowned mathematics departments of the public research universities in Berlin: Freie Universität Berlin, Humboldt-Universität zu Berlin, and Technische Universität Berlin. In October 2006, the BMS was awarded one of the 18 prestigious graduate school awards by the Excellence Initiative of the German Federal Government for its innovative concept, its strong cross-disciplinary focus, and its outstanding teaching schedule tailored to the needs of students in an international environment. This was reconfirmed in June 2012 when the German Research Foundation announced that the BMS would also receive funding for a second period until 2017. Since 2019, the BMS has been the graduate school in the Cluster of Excellence MATH+, which is funded by the Excellence Strategy. The BMS Chair is Jürg Kramer (HU), and the deputy Chairs are John M. Sullivan (TU) and Holger Reich (FU). Cooperation BMS students enjoy access to exclusive seminars, workshops and lectures in English not only at the participating universities, but also at their academic partners: the Research Training Groups (RTG), the International Max Planck Research Schools (IMPRS), the Zuse Institute Berlin (ZIB), the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), and the DFG Collaborative Research Centers Discretization in Geometry (SFB 109) and Scaling Cascades in Complex Systems (SFB 1114). PhD in Mathematics at the BMS The BMS PhD study program guides a student with a bachelor's degree through a structured course program, an oral qualifying exam, then directly to a doctoral degree in four to five years. Phase I is the first part of the program and includes a lecture program created specifically for the BMS and coordinated among the three universities. Applicants who hold a bachelor's degree, Vordiplom, or equivalent can apply for Phase I of the BMS. Each semester, seven to ten Basic Courses are offered in English. During Ph
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Brain%20Research
The Max Planck Institute for Brain Research is located in Frankfurt, Germany. It was founded as the Kaiser Wilhelm Institute for Brain Research in Berlin in 1914, moved to Frankfurt-Niederrad in 1962 and more recently to a new building in Frankfurt-Riedberg. It is one of 83 institutes in the Max Planck Society (Max Planck Gesellschaft). Research Research at the Max Planck Institute for Brain Research focuses on the operation of networks of neurons in the brain. The institute hosts three scientific departments (with directors Moritz Helmstaedter of the Helmstaedter Department, Gilles Laurent of the Laurent Department, and Erin Schuman of the Schuman Department), the Singer Emeritus Group, two Max Planck Research Groups, namely Johannes Letzkus' Neocortical Circuits Group and Tatjana Tchumatchenko's Theory of Neural Dynamics Group, as well as several additional research units. The common research goal of the Institute is a mechanistic understanding of neurons and synapses, of the structural and functional circuits which they form, of the computational rules which describe their operations, and ultimately, of their roles in driving perception and behavior. The experimental focus is on all scales required to achieve this understanding - from networks of molecules in dendritic compartments to networks of interacting brain areas. This includes interdisciplinary analyses at the molecular, cellular, multi-cellular, network and behavioral levels, often combined with theoretical approaches. History The "Kaiser-Wilhelm-Institut für Hirnforschung" (KWI for Brain Research) was founded in Berlin in 1914, making it one of the oldest institutes of the "Kaiser Wilhelm Society for the Advancement of Science", itself founded in 1911. It was based on the Neurologische Zentralstation (Neurological Center), a private research institute established by Oskar Vogt in 1898 and run together with his wife Cécile Vogt-Mugnier, also an accomplished brain researcher. From 1901 to 1910, Vogt's cowor
https://en.wikipedia.org/wiki/Palmaria%20%28alga%29
Palmaria is a genus of algae. One of its most notable members is dulse, Palmaria palmata.
https://en.wikipedia.org/wiki/Clock%20gating
In computer architecture, clock gating is a popular power management technique used in many synchronous circuits for reducing dynamic power dissipation, by removing the clock signal when the circuit is not in use or is ignoring the clock signal. Clock gating saves power by pruning the clock tree, at the cost of adding more logic to a circuit. Pruning the clock disables portions of the circuitry so that the flip-flops in them do not have to switch states. Switching states consumes power. When not being switched, the switching power consumption goes to zero, and only leakage currents are incurred. Although asynchronous circuits by definition do not have a global "clock", the term perfect clock gating is used to illustrate how various clock gating techniques are simply approximations of the data-dependent behavior exhibited by asynchronous circuitry. As the granularity on which one gates the clock of a synchronous circuit approaches zero, the power consumption of that circuit approaches that of an asynchronous circuit: the circuit only generates logic transitions when it is actively computing. Details An alternative to clock gating is to use Clock Enable (CE) logic on a synchronous data path, employing an input multiplexer; e.g., for D-type flip-flops, in C / Verilog notation: Dff = CE ? D : Q; where Dff is the D-input of the D-type flip-flop, D is the module's data input (without the CE input), and Q is the D-type flip-flop output. This approach is free of race conditions and is preferred for FPGA designs and for clock gating of small circuits. In FPGAs, every D-type flip-flop has an additional CE input signal. Clock gating works by taking the enable conditions attached to registers, and uses them to gate the clocks. A design must contain these enable conditions in order to use and benefit from clock gating. This clock gating process can also save significant die area as well as power, since it removes large numbers of muxes and replaces them with clock gating l
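The clock-enable multiplexer described above can be sketched behaviourally (in Python rather than synthesizable HDL; the class and signal names are illustrative): the register is clocked every cycle but reloads only when CE is asserted, otherwise it recirculates its own output.

```python
class CEFlipFlop:
    """Behavioural model of the clock-enable mux Dff = CE ? D : Q:
    the flop sees every clock edge, but captures D only when CE is high."""
    def __init__(self):
        self.q = 0

    def clock(self, d, ce):
        # The input multiplexer: select new data D when enabled, else hold Q.
        self.q = d if ce else self.q
        return self.q

ff = CEFlipFlop()
ff.clock(d=1, ce=1)    # enabled: the flop captures D
assert ff.q == 1
ff.clock(d=0, ce=0)    # disabled: the flop holds Q and ignores D
assert ff.q == 1
```

In true clock gating, by contrast, the same enable condition would suppress the clock edge itself, so the flip-flop's clock input (and its share of the clock tree) would not toggle at all while disabled.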
https://en.wikipedia.org/wiki/Gavaksha
In Indian architecture, gavaksha or chandrashala (kudu in Tamil, also nāsī) are the terms most often used to describe the motif centred on an ogee, circular or horseshoe arch that decorates many examples of Indian rock-cut architecture and later Indian structural temples and other buildings. In its original form, the arch is shaped like the cross-section of a barrel vault. It is called a chaitya arch when used on the facade of a chaitya hall, around the single large window. In later forms it develops well beyond this type, and becomes a very flexible unit, "the most common motif of Hindu temple architecture". Gavākṣha (or gavaksa) is a Sanskrit word which means "bull's or cow's eye". In Hindu temples, their role is envisioned as symbolically radiating the light and splendour of the central icon in its sanctum. Alternatively, they are described as providing a window for the deity to gaze out into the world. Like the whole of the classic chaitya, the form originated in the shape of the wooden thatched roofs of buildings, none of which have survived; the earliest version replicating such roofs in stone is at the entrance to the non-Buddhist Lomas Rishi Cave, one of the man-made Barabar Caves in Bihar. The "chaitya arch" around the large window above the entrance frequently appears repeated as a small motif in decoration, and evolved versions continue into Hindu decoration, long after actual chaityas had ceased to be built. In these cases it can become an elaborate cartouche-like frame, spreading rather wide, around a circular or semi-circular medallion, which may contain a sculpture of a figure or head. An early stage is shown in the entrance to Cave 9 at the Ajanta Caves, where the chaitya arch window frame is repeated several times as a decorative motif. Here, and in many similar early examples, the interior of the arch in the motif contains low relief lattice imitating receding roof timbers (purlins). First stage The arched gable-end form seen at the Lomas
https://en.wikipedia.org/wiki/Kaden%20models
Kaden Nachod (and the later name, KOVAP Náchod) is the somewhat anglicized name for the Kovodružstvo Náchod toy factory in the town of Nový Hrádek in the Czech Republic. The factory, however, started making toys about 1950, when the country was still part of communist Czechoslovakia. History The state factory, started about 1950, was also called by the acronym KDN, which led to the spelled-out and somewhat more Western name Kaden. The enterprise reportedly changed its name to Kovap Náchod in 1991, though the factory's products were most commonly referred to by the Kaden name during the 1970s and 1980s. Since about 2005, the company has used both names as different brands. Approximately through the 1960s, the KDN logo included those letters inside of two overlapping circles, like a Venn diagram. Since the 1970s, the KDN factory logo has been a stylized child on his knees playing with a vehicle, with the entire logo against a black background. The newer logo is similar, but with a yellow, red and blue 'Rubik's-cube'-like graphic behind the child (with the 'E' in Kaden similarly colored). By contrast, the Kovap logo appears as a stylized 'K'. Communist toys Typically, East European toys and other more sophisticated models replicated the actual vehicles chosen by these governments for the people, the workplace, or more often, the communist party elite. Real vehicles were not manufactured as a result of research and development responding to market demand; thus Eastern European automobiles usually lagged far behind the West. Since there was no market, what people received was what the government deemed worthy of production. In Hungary, Poland, Czechoslovakia, East Germany, therefore, and, above all, Russia, most vehicle toys replicated the real vehicles from the state factories – operated and owned by the government. The toys, then, were also made in factories owned by the state. Sometimes the toys were made in the same factories as the real vehicles. Nevertheless, Czechoslova
https://en.wikipedia.org/wiki/Secondary%20sclerosing%20cholangitis
Secondary sclerosing cholangitis (SSC) is a chronic cholestatic liver disease. SSC is a sclerosing cholangitis with a known cause. Alternatively, if no cause can be identified, then primary sclerosing cholangitis is diagnosed. SSC is an aggressive and rare disease with complex and multiple causes. It is characterized by inflammation, fibrosis, destruction of the biliary tree and biliary cirrhosis. It can be treated with minor interventions such as continued antibiotic use and monitoring, or in more serious cases, laparoscopic surgical intervention, and possibly a liver transplant. Cause SSC is thought to develop as a consequence of known injuries or pathological processes of the biliary tree, such as biliary obstruction, surgical trauma to the bile duct, or ischemic injury to the biliary tree. Secondary causes of SSC include intraductal stone disease, surgical or blunt abdominal trauma, intra-arterial chemotherapy, and recurrent pancreatitis. It has been clearly demonstrated that sclerosing cholangitis can develop after an episode of severe bacterial cholangitis. It has also been suggested that it can result from insult to the biliary tree by obstructive cholangitis secondary to choledocholithiasis, surgical damage, trauma, vascular insults, parasites, or congenital fibrocystic disorders. Additional causes of secondary SC are toxic, due to chemical agents or drugs. Diagnosis SSC is clinically related to primary sclerosing cholangitis (PSC), but originates from a known pathological process. Diagnosis of PSC requires the exclusion of all secondary causes of sclerosing cholangitis; otherwise, if a known aetiology can be uncovered, SSC is diagnosed. Its clinical and cholangiographic features may mimic PSC, yet its natural history may be more favorable if recognition is prompt and appropriate therapy is introduced. Sclerosing cholangitis in critically ill patients, however, is associated with rapid disease progression and poor outcome. Serologic testing, radiological imaging and
https://en.wikipedia.org/wiki/Armorial%20of%20Italy
This article presents the coats of arms of Italy.

National
Historical
Emilia-Romagna
Friuli Venezia Giulia
Campania
Lazio
Liguria
Lombardia
Marche
Piedmont
Sardinia
Sicily
Tuscany
Trentino-Alto Adige/Südtirol
Umbria
Veneto

President
Many of the Presidents of Italy have borne arms; either through inheritance, or via membership of foreign Orders of Chivalry, in particular, the Order of the Seraphim and the Order of the Elephant.

Regions

Former colonies
The coats of arms of the Italian colonies. This gallery includes the lesser coats of arms. The years given are for the coats of arms.

See also
Coat of arms of Italy
Italy
https://en.wikipedia.org/wiki/Organization%20of%20Biological%20Field%20Stations
The Organization of Biological Field Stations (OBFS) is a nonprofit multinational organization representing field stations and research centers across Canada, the United States, and Central America. While it has no administrative or management control over its member stations, it helps to improve their effectiveness in research, education, and outreach through various initiatives. These include promoting the establishment of research networks, working with public agencies to enhance funding sources, and building interactions between scientists and policy makers. The OBFS collaborates with the National Center for Ecological Analysis and Synthesis (NCEAS), the University of California Natural Reserve System (UC NRS), and the Long Term Ecological Research Network Office in maintaining a comprehensive registry of scientific data sets which may be used in future research projects. Since its establishment in 1963, the organization has grown to nearly two hundred member stations. Building on this success, the International Organization of Biological Field Stations (IOBFS) was later created to facilitate the exchange of information and ideas at a larger geographic scale.
https://en.wikipedia.org/wiki/ScripTalk
ScripTalk is an audible medication label technology designed to give access to individuals who are blind, visually impaired, or print impaired. It consists of a reading device and a microchip attached to the bottom of a prescription drug bottle. The label information is encoded on a radio-frequency identification (RFID) electronic label (microchip) by a pharmacist using the ScriptAbility software and placed on the prescription package. ScripTalk prescription labels were introduced in the early 2000s. As of 2020, the technology was in use throughout the United States and Canada. Background In 1996, Philip Raistrick and David Raistrick founded En-Vision America, which is now based in Palmetto, FL. In 2000, the father and son invented and patented the Audible Prescription Reading Device and Labeling System for individuals who are visually impaired or print impaired. Shortly thereafter, the United States Department of Veterans Affairs (VA) began to test the technology for blinded veterans. ScripTalk was approved for use by the VA in 2004 and began being integrated into VA hospitals across the US. In 2012, Walmart introduced the ScripTalk service through a pilot program, and by 2019 the company was rolling out the ScripTalk service throughout all Walmart and Sam's Club locations and via mail orders. Among other pharmacy and retail chains that have integrated ScripTalk are CVS, Costco, Albertsons, Kaiser Permanente, the Veterans Administration, Winn-Dixie and more. In February 2020, the ScripTalk technology began rolling out in Canada through Empire Company Limited, parent company of Sobeys, at its 420 pharmacy locations throughout the country, including Sobeys, Safeway, IGA, Foodland, Farm Boy, FreshCo, Thrifty Foods and Lawtons Drugs. A number of US states, including Oregon and Nevada, have introduced laws obliging pharmacies to supply blind and visually impaired patients with prescription reading devices such as ScripTalk. The RFID ScripTalk label technology w
https://en.wikipedia.org/wiki/Planetary%20mass
In astronomy, planetary mass is a measure of the mass of a planet-like astronomical object. Within the Solar System, planets are usually measured in the astronomical system of units, where the unit of mass is the solar mass (), the mass of the Sun. In the study of extrasolar planets, the unit of measure is typically the mass of Jupiter () for large gas giant planets, and the mass of Earth () for smaller rocky terrestrial planets. The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides. There are three ways planetary mass can be calculated: If the planet has natural satellites, its mass can be calculated using Newton's law of universal gravitation to derive a generalization of Kepler's third law that includes the mass of the planet and its moon. This permitted an early measurement of Jupiter's mass, as measured in units of the solar mass. The mass of a planet can be inferred from its effect on the orbits of other planets. Between 1931 and 1948, flawed applications of this method led to incorrect calculations of the mass of Pluto. Data collected from the orbits of space probes can be used to measure a planet's gravitational influence. Examples include the Voyager probes to the outer planets and the MESSENGER spacecraft to Mercury. Also, numerous other methods can give reasonable approximations. For instance, Varuna, a potential dwarf planet, rotates very quickly upon its axis, as does the dwarf planet Haumea. Haumea has to have a very high density in order not to be ripped apart by centrifugal forces. Through some calculations, one can place a limit on the object's density. Thus, if the object's size is known, a limit on the mass can be determined. See the links in the aforementioned articles for more details on this. Choice of units The choice of solar mass, , as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in term
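The satellite method above can be sketched numerically. Assuming Newton's generalization of Kepler's third law, M + m = 4*pi^2*a^3 / (G*T^2), and plugging in illustrative orbital figures for Jupiter's moon Io, one recovers Jupiter's mass (the figures, and the resulting value, are for demonstration only):

```python
from math import pi

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def primary_mass(a_m, period_s):
    """Mass of the central body (plus its satellite, usually negligible)
    from the satellite's semi-major axis a and orbital period T:
    M + m = 4*pi^2 * a^3 / (G * T^2)."""
    return 4 * pi**2 * a_m**3 / (G * period_s**2)

# Io's orbit around Jupiter (illustrative figures)
a_io = 421_700e3        # semi-major axis in metres
T_io = 1.769 * 86_400   # orbital period in seconds

m_jupiter = primary_mass(a_io, T_io)
m_sun = 1.989e30
# -> about 1.9e27 kg, i.e. roughly 1/1000 of a solar mass
print(f"Jupiter mass: {m_jupiter:.3e} kg = 1/{m_sun / m_jupiter:.0f} solar masses")
```

Expressing the result as a fraction of the solar mass, as in the last line, mirrors how such masses were historically reported before the kilogram value of G*M was known precisely.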
https://en.wikipedia.org/wiki/Thysania%20zenobia
Thysania zenobia, the owl moth, is a species of moth in the family Erebidae. The species was first described by Pieter Cramer in 1776, and is native to North and South America and the Caribbean. Description Upperside: Antennae setaceous and dark brown. Head the same. Thorax and abdomen grey, having a tuft of black hairs standing between them. General colour grey, faintly tinged with red. Anterior wings with a remarkable irregular black bar running from the tips to the shoulders, crossing the thorax horizontally, and parallel with the anterior edges; on the middle of this edge is a triangular dark brown spot edged with black, and nearer the body is a smaller one of the same shape and colour: a second narrower black line is situated about half an inch below, and parallel with the first, rising on the posterior edges, and extending across the wings almost to the external ones. Posterior wings with a black irregular bar arising near the external corners, and crossing them in a straight direction, meeting at the extremity of the abdomen; just above this, and almost close to it, is a very small and narrow waved black line running parallel with it, but towards the end it suddenly turns off, and reaches the anterior edges. Besides the above markings there are a number of lighter and darker shades interspersed on the different parts of the wings. Underside: Palpi reddish, the extremities brown. Tongue spiral. Legs dark brown, mottled with red. Breast, abdomen, and sides red. Wings greyish red, with black indented lines and bars running parallel with the edges of the wings, and regularly placed one above another. Anterior wings having a black spot near their centre shaped like a kidney bean, with a small round one at a little distance nearer the body. Posterior wings having likewise a small black spot about half an inch from the base. Margins of the wings rather deeply scalloped. Wingspan about 140 mm. See also Other moths which are called "owl moth" include: Acanthobrahmaea eu
https://en.wikipedia.org/wiki/Stream%20Reservation%20Protocol
Stream Reservation Protocol (SRP) is an enhancement to Ethernet that implements admission control. In September 2010 SRP was standardized as IEEE 802.1Qat which has subsequently been incorporated into IEEE 802.1Q-2011. SRP defines the concept of streams at layer 2 of the OSI model. Also provided is a mechanism for end-to-end management of the streams' resources, to guarantee quality of service (QoS). SRP is part of the IEEE Audio Video Bridging (AVB) and Time-Sensitive Networking (TSN) standards. The SRP technical group started work in September 2006 and finished meetings in 2009. Description SRP registers a stream and reserves the resources required through the entire path taken by the stream, based on the bandwidth requirement and the latency which are defined by a stream reservation traffic class. Listener (stream destination) and Talker (stream source) primitives are utilized. Listeners indicate what streams are to be received, and Talkers announce the streams that can be supplied by a bridged entity. Network resources are allocated and configured in both the end nodes of the data stream and the transit nodes along the data streams' path. An end-to-end signaling mechanism to detect the success/failure of the effort is also provided. SRP "talker advertise" message includes QoS requirements (e.g., VLAN ID and Priority Code Point (PCP) to define traffic class, rank (emergency or nonemergency), traffic specification (maximum frame size and maximum number of frames in a traffic class), measurement interval, and accumulated worst case latency). Static across network: StreamID (48-bit MAC address plus a 16-bit UniqueID) Stream destination address (or a multicast group MAC address) VLAN ID (used by MVRP) Priority (PCP) Rank Traffic specification Maximum frame size Maximum number of frames (per measurement interval) Measurement interval Adjusted per each hop: Accumulated latency Failure Information (Bridge ID and failure code) Required bandwidth is calc
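The traffic specification in the talker advertise (maximum frame size and maximum frames per class measurement interval) determines how much bandwidth a bridge must set aside along the path. A minimal sketch of that arithmetic follows; the per-frame overhead value and the class intervals are illustrative assumptions, and the exact figures come from the IEEE 802.1Q specification:

```python
def reserved_bandwidth_bps(max_frame_size, max_interval_frames,
                           interval_s, per_frame_overhead=42):
    """Worst-case bandwidth implied by an SRP traffic specification.

    max_frame_size      -- largest frame the Talker will send (bytes)
    max_interval_frames -- frames sent per class measurement interval
    interval_s          -- class measurement interval in seconds
                           (125 us for SR Class A, 250 us for Class B)
    per_frame_overhead  -- assumed per-frame Ethernet overhead in bytes
                           (preamble, SFD, FCS, interframe gap)
    """
    bits_per_interval = (max_frame_size + per_frame_overhead) * 8 * max_interval_frames
    return bits_per_interval / interval_s

# e.g. one 224-byte frame every 125 us, a plausible Class A audio stream
bw = reserved_bandwidth_bps(224, 1, 125e-6)
print(f"{bw / 1e6:.2f} Mbit/s")   # (224+42)*8 / 125e-6 = 17.02 Mbit/s
```

A bridge would admit the stream only if this worst-case figure fits within the bandwidth still available for that traffic class on the egress port.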
https://en.wikipedia.org/wiki/H4K5ac
H4K5ac is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the acetylation at the 5th lysine residue of the histone H4 protein. H4K5 is the closest lysine residue to the N-terminal tail of histone H4. It is enriched at the transcription start site (TSS) and along gene bodies. Acetylation of histones H4K5 and H4K12 is enriched at centromeres. Nomenclature H4K5ac indicates acetylation of lysine 5 on histone H4 protein subunit: Histone modifications The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contributes to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H4K5ac. H4 histone H4 modifications are not as well characterized as those of H3, and H4 has fewer sequence variants, which might reflect the importance of its function. H4K5ac H4K5 is acetylated by the TIP60 and CBP/p300 proteins. CBP/p300 opens transcription start site chromatin by acetylating histones. H4K5ac has also been implicated in epigenetic bookmarking, which allows gene expression patterns to be faithfully passed to daughter cells through mitosis. Important cell-type specific genes are marked in some way that prevents them from being compacted during mitosis and ensures their rapid transcription. H4K5ac appears to prime activity-dependent genes expressed during learning. Lysine acetylation and deacetylation Proteins are typically acetylated on lysine residues and this reaction relies on acetyl-coenzyme A as the acetyl group donor.
https://en.wikipedia.org/wiki/L.O.L.%3A%20Lack%20of%20Love
L.O.L.: Lack of Love is an evolutionary life simulation game developed by Love-de-Lic and published by ASCII Corporation for the Sega Dreamcast. The game was released only in Japan on November 2, 2000. The game was never exported to the West, but it received a fan translation in 2020. Gameplay The gameplay of L.O.L.: Lack of Love revolves around the player's control of a single creature placed on an alien planet during robotic terraforming. The player must cause the creature to metamorphose into new forms by communicating with other living creatures, establishing symbiotic relationships with them, and thus helping them. The game is non-linear, lacking a HUD almost entirely and requiring the player simply to remain alive. This can be done by helping or eating other creatures, as well as by performing various bodily functions including sleeping and urinating. Development L.O.L.: Lack of Love is the last in a trio of games developed by Love-de-Lic after Moon: Remix RPG Adventure and UFO: A Day in the Life. It was directed by Kenichi Nishi and produced by Hiroshi Suzuki. The game was designed by Keita Eto and Yoshiro Kimura, the latter of whom had already left Love-de-Lic and begun working on other projects with his company Punchline. The musical score for L.O.L.: Lack of Love was created by film composer Ryuichi Sakamoto, who was also the game's scenario writer. Nishi and Sakamoto met at Club Eden via a mutual friend and, through a series of e-mails, began discussing James Lovelock's Gaia hypothesis. This theory states that the earth's living organisms function in harmony and respond to ecological changes in order for the planet to sustain life. Nishi explained the game's message: "We should care for other people, life, the environment and nature. Sakamoto came up with the title. We wanted to question the way in which our lifestyle lacks love". The team first considered developing the game for the PlayStation, but Nishi was convinced by Sega's president to develo
https://en.wikipedia.org/wiki/Medial%20epicondyle%20of%20the%20humerus
The medial epicondyle of the humerus is an epicondyle of the humerus bone of the upper arm in humans. It is larger and more prominent than the lateral epicondyle and is directed slightly more posteriorly in the anatomical position. In birds, where the arm is somewhat rotated compared to other tetrapods, it is called the ventral epicondyle of the humerus. In comparative anatomy, the more neutral term entepicondyle is used. The medial epicondyle gives attachment to the ulnar collateral ligament of the elbow joint, to the pronator teres, and to a common tendon of origin (the common flexor tendon) of some of the flexor muscles of the forearm: the flexor carpi radialis, the flexor carpi ulnaris, the flexor digitorum superficialis, and the palmaris longus. The medial epicondyle is located on the distal end of the humerus. Additionally, the medial epicondyle is inferior to the medial supracondylar ridge. It is also proximal to the olecranon fossa. The medial epicondyle protects the ulnar nerve, which runs in a groove on the back of this epicondyle. The ulnar nerve is vulnerable because it passes close to the surface along the back of the bone. Striking the medial epicondyle causes a tingling sensation in the ulnar nerve. This response is known as striking the "funny bone". The name funny bone could be from a play on the words humorous and humerus, the bone on which the medial epicondyle is located, although according to the Oxford English Dictionary, it may refer to "the peculiar sensation experienced when it is struck". Fractures of the medial epicondyle of the humerus commonly occur after a fall onto an outstretched hand. Fractures Medial epicondyle fractures are common elbow injuries in children. There is considerable controversy about their treatment, with uncertainty about whether surgery to restore the natural position of the bone is better than healing in a cast. Additional images
https://en.wikipedia.org/wiki/Motifs%20in%20the%20James%20Bond%20film%20series
The James Bond series of films contain a number of repeating, distinctive motifs which date from the series' inception with Dr. No in 1962. The series consists of twenty five films produced by Eon Productions featuring the James Bond character, a fictional British Secret Service agent. The most recent instalment is No Time to Die, released in UK cinemas on 30 September 2021. There have also been two independently made features, the satirical Casino Royale, released in 1967, and the 1983 film Never Say Never Again. Whilst each element has not appeared in every Bond film, they are common threads that run through most of the films. These motifs vary from integral plot points, such as the assignment briefing sessions or the attempts to kill Bond, to enhancements of the dramatic narrative, such as music, or aspects of the visual style, such as the title sequences. These motifs may also serve to enhance excitement in the plot, through a chase sequence or for the climax of the film. Some of these—such as "Bond girls" or megalomaniac villains—have been present in all of the stories, whilst others—such as Q's gadgets or the role of M—have changed over time, often to shape or follow the contemporary zeitgeist. These elements are formulaic and the Bond films tend to follow a set pattern with only limited variety, often following within a strict order. A number of the elements were altered or removed in 2006 with the reboot of the series, Casino Royale. Some of the elements involved are a result of the production crew used in the earliest films in the series, with the work of Ken Adam, the original production designer, Maurice Binder, title designer, and John Barry, composer, continually updated and adapted as the series progressed. Opening sequences Gun barrel sequence All of the Eon Bond films feature the unique gun barrel sequence, created by graphic artist Maurice Binder, which has been called by British media historian James Chapman "the trademark motif of the serie
https://en.wikipedia.org/wiki/Euphyllophyte
The euphyllophytes are a clade of plants within the tracheophytes (the vascular plants). The group may be treated as an unranked clade, a division under the name Euphyllophyta or a subdivision under the name Euphyllophytina. The euphyllophytes are characterized by the possession of true leaves ("megaphylls"), and comprise one of two major lineages of extant vascular plants. As shown in the cladogram below, the euphyllophytes have a sister relationship to the lycopodiophytes or lycopsids. Unlike the lycopodiophytes, which consist of relatively few presently living or extant taxa, the euphyllophytes comprise the vast majority of vascular plant lineages that have evolved since both groups shared a common ancestor more than 400 million years ago. The euphyllophytes consist of two lineages, the spermatophytes or seed plants such as flowering plants (angiosperms) and gymnosperms (conifers and related groups), and the Polypodiophytes or ferns, as well as a number of extinct fossil groups. The division of the extant tracheophytes into three monophyletic lineages is supported in multiple molecular studies. Other researchers argue that phylogenies based solely on molecular data without the inclusion of carefully evaluated fossil data based on whole plant reconstructions, do not necessarily completely and accurately resolve the evolutionary history of groups like the euphyllophytes. The following cladogram shows a 2004 view of the evolutionary relationships among the taxa described above. An updated phylogeny of both living and extinct Euphyllophytes with plant taxon authors from Anderson, Anderson & Cleal 2007.
https://en.wikipedia.org/wiki/SAML%202.0
Security Assertion Markup Language 2.0 (SAML 2.0) is a version of the SAML standard for exchanging authentication and authorization identities between security domains. SAML 2.0 is an XML-based protocol that uses security tokens containing assertions to pass information about a principal (usually an end user) between a SAML authority, named an Identity Provider, and a SAML consumer, named a Service Provider. SAML 2.0 enables web-based, cross-domain single sign-on (SSO), which helps reduce the administrative overhead of distributing multiple authentication tokens to the user. SAML 2.0 was ratified as an OASIS Standard in March 2005, replacing SAML 1.1. The critical aspects of SAML 2.0 are covered in detail in the official documents SAMLCore, SAMLBind, SAMLProf, and SAMLMeta. Some 30 individuals from more than 24 companies and organizations were involved in the creation of SAML 2.0. In particular, and of special note, Liberty Alliance donated its Identity Federation Framework (ID-FF) specification to OASIS, which became the basis of the SAML 2.0 specification. Thus SAML 2.0 represents the convergence of SAML 1.1, Liberty ID-FF 1.2, and Shibboleth 1.3. SAML 2.0 assertions An assertion is a package of information that supplies zero or more statements made by a SAML authority. SAML assertions are usually made about a subject, represented by the <Subject> element. The SAML 2.0 specification defines three different kinds of assertion statements that can be created by a SAML authority. All SAML-defined statements are associated with a subject. The three kinds of assertion statements defined are as follows: Authentication Statement: The assertion subject was authenticated by a particular means at a particular time. Attribute Statement: The assertion subject is associated with the supplied attributes. Authorization Decision Statement: A request to allow the assertion subject to access the specified resource has been granted or denied. An important type of SAML asse
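The three statement types can be seen in a minimal, hand-written (non-normative) assertion. The sketch below parses one with Python's standard library to pull out the subject identifier and the child elements; the ID, NameID value, and attribute are invented for illustration, and a real assertion would normally also carry conditions and an XML signature:

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

# A hand-written, minimal assertion carrying one authentication
# statement and one attribute statement (illustrative values only).
assertion_xml = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
                ID="_example" Version="2.0" IssueInstant="2004-12-05T09:22:05Z">
  <saml:Issuer>https://idp.example.org/SAML2</saml:Issuer>
  <saml:Subject>
    <saml:NameID Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient">
      3f7b3dcf-1674-4ecd-92c8-1544f346baf8
    </saml:NameID>
  </saml:Subject>
  <saml:AuthnStatement AuthnInstant="2004-12-05T09:22:00Z">
    <saml:AuthnContext>
      <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
      </saml:AuthnContextClassRef>
    </saml:AuthnContext>
  </saml:AuthnStatement>
  <saml:AttributeStatement>
    <saml:Attribute Name="eduPersonAffiliation">
      <saml:AttributeValue>member</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

root = ET.fromstring(assertion_xml)
subject = root.find("saml:Subject/saml:NameID", NS).text.strip()
# Strip the namespace to list the assertion's top-level children
statements = [child.tag.split("}")[-1] for child in root]
print(subject)
print(statements)  # ['Issuer', 'Subject', 'AuthnStatement', 'AttributeStatement']
```

Here the transient NameID illustrates the privacy-preserving identifier format: the Service Provider learns an opaque handle, not the user's real identity.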
https://en.wikipedia.org/wiki/CAT%20RNA-binding%20domain
In molecular biology, the CAT RNA-binding domain (Co-AntiTerminator RNA-binding domain) is a protein domain found at the amino terminus of a family of transcriptional antiterminator proteins. This domain forms a dimer in the crystal structure. Transcriptional antiterminators of the BglG/SacY family are regulatory proteins that mediate the induction of sugar metabolizing operons in Gram-positive and Gram-negative bacteria. Upon activation, these proteins bind to specific targets in nascent mRNAs, thereby preventing abortive dissociation of the RNA polymerase from the DNA template.
https://en.wikipedia.org/wiki/Defence%20Nuclear%20Material
Defence Nuclear Material within the UK is defined as: Nuclear weapons (warheads) Special Nuclear Materials (SNM) New and used reactor fuel from Royal Navy submarines.
https://en.wikipedia.org/wiki/Six-legged%20Soldiers
Six-Legged Soldiers: Using Insects as Weapons of War is a nonfiction scientific warfare book written by author and University of Wyoming professor, Jeffrey A. Lockwood. Published in 2008 by Oxford University Press, the book explores the history of bioterrorism, entomological warfare, biological warfare, and the prevention of agro-terrorism from the earliest times to modern threats. Lockwood, an entomologist, preceded this book with Ethical issues in biological control (1997) and Locust: The devastating rise and mysterious disappearance of the insect that shaped the American frontier (2004), among others. Summary Six-Legged Soldiers gives detailed examples of entomological warfare: using buckets of scorpions during a fortress siege, catapulting beehives ("bee bombs") across a castle wall, civilians as human guinea pigs in an effort to weaponize the plague, bombarding civilians from the air with infection-bearing insects, and assassin bugs placed on prisoners to eat away their flesh. Lockwood also describes a domestic ecoterrorism example with the 1989 threat to release the medfly (Ceratitis capitata) within California's crop belt. The last chapter highlights western nations' vulnerability to terrorist attacks. Interviewed about the book by BBC Radio 4's Today programme, the author describes how a terrorist with a suitcase could bring diseases into a country. "I think a small terrorist cell could very easily develop an insect-based weapon." Criticism In its January 2009 review, The Sunday Times criticised the book as being "scarcely scholarly" for its mixed collection of myth, legend and historical facts. Contrary to this critique, reviews from credible scholarly and scientific sources stated, "Six-Legged Soldiers is an excellent account of the effect arthropod-borne diseases have had on warfare...This book will inspire readers to understand...threats and prepare new methods to combat them." (Nature), "Lockwood thoroughly and objectively assembles an engaging chr
https://en.wikipedia.org/wiki/CER-11
CER-11 was a digital military computer, developed at the Institute Mihajlo Pupin in Serbia between 1965 and 1966. Overview CER-11 was designed by Prof. Dr. Tihomir Aleksic and Prof. Dr. Nedeljko Parezanovic, along with their scientific associates (M. Momcilovic, D. Hristovic, M. Maric, M. Hruska, P. Vrbavac et al.). The computer was based on transistor-diode logic circuitry and paper tape equipment. This digital computer was used in the SFRY's army (the JNA) until 1988. See also Tihomir Aleksic CER Computers Institute Mihajlo Pupin History of computer hardware in the SFRY Further reading Dragana Becejski-Vujaklija, Nikola Markovic (eds.): "50 Years of Computing in Serbia (HRONIKA DIGITALNIH DECENIJA)", pp. 38, 56 and 76, DIS, IMP and PC Press, Belgrade, 2011. Jelica Protic, D. Ristanovic: "Building Computers in Serbia", ComSYS, vol. 8, no. 3, pp. 549–571, June 2011. V. Paunovic, D. Hristovic: "Review and analysis of the CER computers", Proc. of the ETRAN-2000 Conference, pp. 79–82, Sokobanja, June 2000. (In Serbian.) Dusan Hristovic: "Razvoj računarstva u Srbiji" (Computing Technology in Serbia), PHLOGISTON journal, no. 18/19, pp. 89–105, Museum MNT/SANU, Belgrade, 2010/2011.
https://en.wikipedia.org/wiki/Neighbour-sensing%20model
The Neighbour-Sensing mathematical model of hyphal growth is a set of interactive computer models that simulate the way fungal hyphae grow in three-dimensional space. The three-dimensional simulation is an experimental tool which can be used to study the morphogenesis of fungal hyphal networks. The modelling process starts from the proposition that each hypha in the fungal mycelium generates a certain abstract field that (like known physical fields) decreases with increasing distance. Both scalar and vector fields are included in the models. The field(s) and the corresponding gradient(s) are used to inform the algorithm that calculates the likelihood of branching, the angle of branching and the growth direction of each hyphal tip in the simulated mycelium. The growth vector is thus informed of its surroundings; effectively, the virtual hyphal tip senses the neighbouring mycelium, hence the model's name. Cross-walls in living hyphae are formed only at right angles to the long axis of the hypha. A daughter hyphal apex can only arise if a branch is initiated. So, for the fungi, hyphal branch formation is the equivalent of cell division in animals, plants and protists. The position of origin of a branch, and its direction and rate of growth, are the main formative events in the development of fungal tissues and organs. Consequently, by simulating the mathematics of the control of hyphal growth and branching, the Neighbour-Sensing model provides the user with a way of experimenting with features that may regulate hyphal growth patterns during morphogenesis, to arrive at suggestions that could be tested with live fungi. The model was proposed by Audrius Meškauskas and David Moore in 2004 and developed using the supercomputing facilities of the University of Manchester. The key idea of this model is that all parts of the fungal mycelium have identical field generation systems, field sensing mechanisms and growth direction altering algorithms.
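The idea can be caricatured in a few lines of Python. This toy sketch is not the published model: it substitutes a simple count of nearby segments for the model's abstract distance-decaying field, and it only modulates branching probability, whereas the real model also informs branching angles and growth direction from the field gradients. All parameter values are invented for illustration:

```python
import math
import random

random.seed(1)  # deterministic toy run

def local_density(p, segments, r=0.5, exclude=0.15):
    """Count existing hyphal segments within radius r of point p,
    ignoring those closer than `exclude` (the tip's own recent trail).
    This stands in for the model's distance-decaying abstract field."""
    return sum(1 for q in segments if exclude < math.dist(p, q) < r)

def grow(steps=60, step_len=0.1, branch_base=0.2):
    """Toy 3-D mycelium: each tip advances along its growth vector and
    branches with a probability suppressed by the local field, so
    crowded regions branch less -- the essence of neighbour sensing."""
    segments = [(0.0, 0.0, 0.0)]
    tips = [((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))]   # (position, direction)
    for _ in range(steps):
        new_tips = []
        for pos, d in tips:
            new_pos = tuple(p + step_len * c for p, c in zip(pos, d))
            # branching is suppressed where the field (local density) is high
            p_branch = branch_base / (1.0 + local_density(new_pos, segments))
            segments.append(new_pos)
            if random.random() < p_branch:        # spawn a side branch
                theta = random.uniform(0.0, 2.0 * math.pi)
                new_tips.append((new_pos,
                                 (math.cos(theta), math.sin(theta), 0.2)))
            new_tips.append((new_pos, d))
        tips = new_tips
    return segments, tips

segments, tips = grow()
print(f"{len(segments)} segments, {len(tips)} active tips")
```

Even this crude density feedback reproduces the qualitative behaviour the model exploits: branching concentrates at the sparse frontier of the colony rather than in its crowded interior.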
https://en.wikipedia.org/wiki/Metasystem%20transition
A metasystem transition is the emergence, through evolution, of a higher level of organization or control. A metasystem is formed by the integration of a number of initially independent components, such as molecules (as theorized for instance by hypercycles), cells, or individuals, and the emergence of a system steering or controlling their interactions. As such, the collective of components becomes a new, goal-directed individual, capable of acting in a coordinated way. This metasystem is more complex, more intelligent, and more flexible in its actions than the initial component systems. Prime examples are the origin of life, the transition from unicellular to multicellular organisms, the emergence of eusociality or symbolic thought. The concept of metasystem transition was introduced by the cybernetician Valentin Turchin in his 1970 book The Phenomenon of Science, and developed among others by Francis Heylighen in the Principia Cybernetica Project. Another related idea, that systems ("operators") evolve to become more complex by successive closures encapsulating components in a larger whole, is proposed in "the operator theory", developed by Gerard Jagers op Akkerhuis. Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution. Evolutionary quanta The following is the classical sequence of metasystem transitions in the history of animal evolution according to Turchin, from the origin of animate life to sapient culture: Control of Position = Motion: the animal or agent develops the ability to control its position in space Control of Motion = Irritability: the movement of the agent is no longer given, but a reaction to elementary sensations or stimuli Control of Irritability = Reflex: different elementary sensations and their re
https://en.wikipedia.org/wiki/Room%20641A
Room 641A is a telecommunication interception facility operated by AT&T for the U.S. National Security Agency, as part of its warrantless surveillance program as authorized by the Patriot Act. The facility commenced operations in 2003 and its purpose was publicly revealed in 2006. Description Room 641A is located in the SBC Communications building at 611 Folsom Street, San Francisco, three floors of which were occupied by AT&T before SBC purchased AT&T. The room was referred to in internal AT&T documents as the SG3 [Study Group 3] Secure Room. The room contains several racks of equipment, including a Narus STA 6400, a device designed to intercept and analyze Internet communications at very high speeds. It is fed by fiber optic lines from beam splitters installed in fiber optic trunks carrying Internet backbone traffic. In the analysis of J. Scott Marcus, a former CTO for GTE and a former adviser to the Federal Communications Commission, it has access to all Internet traffic that passes through the building, and therefore "the capability to enable surveillance and analysis of internet content on a massive scale, including both overseas and purely domestic traffic." The existence of the room was revealed by former AT&T technician Mark Klein and was the subject of a 2006 class action lawsuit by the Electronic Frontier Foundation against AT&T. Klein claims he was told that similar black rooms are operated at other facilities around the country. Room 641A and the controversies surrounding it were subjects of an episode of Frontline, the current affairs documentary program on PBS. It was originally broadcast on May 15, 2007. It was also featured on PBS's NOW on March 14, 2008. The room was also covered in the PBS Nova episode "The Spy Factory". Lawsuits The Electronic Frontier Foundation (EFF) filed a class-action lawsuit against AT&T on January 31, 2006, accusing the telecommunication company of violating the law and the privacy of its customers
https://en.wikipedia.org/wiki/Ribonomics
Ribonomics is the study of ribonucleic acids (RNAs) associated with RNA-binding proteins (RBPs). The term was introduced by Robert Cedergren and colleagues who used a bioinformatic search tool to discover novel ribozymes and RNA motifs originally found in HIV. Ribonomics, like genomics or proteomics, is the large-scale, high-throughput approach to identifying subsets of RNAs by their association with proteins in cells. Since many messenger RNAs (mRNAs) are linked with multiple processes, this technique offers a facile mechanism to study the relationship of various intracellular systems. Prokaryotes co-regulate genes common to cellular processes via a polycistronic operon. Since eukaryotic transcription produces mRNA encoding proteins in a monocistronic fashion, many gene products must be concomitantly expressed (see gene expression) and translated in a timed fashion. RBPs are thought to be the molecules which physically and biochemically organize these messages to different cellular locales where they may be translated, degraded or stored. The study of transcripts associated with RBPs is therefore thought to be important in eukaryotes as a mechanism for coordinated gene regulation. The likely biochemical processes which account for this regulation are the expedited/delayed degradation of RNA. In addition to the influence on RNA half-life, translation rates are also thought to be altered by RNA-protein interactions. The Drosophila ELAV family, the Puf family in yeast, and the human La, Ro, and FMR proteins are known examples of RBPs, showing the diverse species and processes with which post-transcriptional gene regulation is associated. See also ELAVL1 ELAVL2 ELAVL3 Transcript of unknown function
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%209
Cyclin-dependent kinase 9 or CDK9 is a cyclin-dependent kinase associated with P-TEFb. Function The protein encoded by this gene is a member of the cyclin-dependent kinase (CDK) family. CDK family members are highly similar to the gene products of S. cerevisiae cdc28 and S. pombe cdc2, and are known as important cell cycle regulators. This kinase was found to be a component of the multiprotein complex TAK/P-TEFb, which is an elongation factor for RNA polymerase II-directed transcription and functions by phosphorylating the C-terminal domain of the largest subunit of RNA polymerase II. This protein forms a complex with and is regulated by its regulatory subunit cyclin T or cyclin K. HIV-1 Tat protein was found to interact with this protein and cyclin T, which suggested a possible involvement of this protein in AIDS. CDK9 is also known to associate with other proteins such as TRAF2, and to be involved in differentiation of skeletal muscle. Inhibitors Based on molecular docking results, Ligands-3, 5, 14, and 16 were screened among 17 different Pyrrolone-fused benzosuberene compounds as potent and specific inhibitors without any cross-reactivity against different CDK isoforms. Analysis of MD simulations and MM-PBSA studies revealed the binding energy profiles of all the selected complexes. Selected ligands performed better than the experimental drug candidate (Roscovitine). Ligands-5 and 16 show specificity for CDK9. These ligands are expected to possess a lower risk of side effects due to their natural origin. Interactions CDK9 has been shown to interact with: Androgen receptor, CDC34, CCNK, CCNT1, CCNT2, MYBL2, RELA, RB1, SKP1A, SUPT5H, and RNAP II.
https://en.wikipedia.org/wiki/Analysis%20on%20fractals
Analysis on fractals or calculus on fractals is a generalization of calculus on smooth manifolds to calculus on fractals. The theory describes dynamical phenomena which occur on objects modelled by fractals. It studies questions such as "how does heat diffuse in a fractal?" and "how does a fractal vibrate?" In the smooth case the operator that occurs most often in the equations modelling these questions is the Laplacian, so the starting point for the theory of analysis on fractals is to define a Laplacian on fractals. This turns out not to be a full differential operator in the usual sense but has many of the desired properties. There are a number of approaches to defining the Laplacian: probabilistic, analytical or measure-theoretic. See also Time scale calculus for dynamic equations on a Cantor set. Differential geometry Discrete differential geometry Abstract differential geometry
https://en.wikipedia.org/wiki/Internal%20working%20model%20of%20attachment
Internal working model of attachment is a psychological approach that attempts to describe the development of mental representations, specifically the worthiness of the self and expectations of others' reactions to the self. This model is a result of interactions with primary caregivers which become internalized, and is therefore an automatic process. John Bowlby implemented this model in his attachment theory in order to explain how infants act in accordance with these mental representations. It is an important aspect of general attachment theory. Such internal working models guide future behavior as they generate expectations of how attachment figures will respond to one's behavior. For example, a parent rejecting the child's need for care conveys that close relationships should be avoided in general, resulting in maladaptive attachment styles. Influences The most influential figure for the idea of the internal working model of attachment is Bowlby, who laid the groundwork for the concept in the 1960s. He was inspired by both psychoanalysis, especially object relations theory, and more recent research into ethology, evolution and information-processing. In psychoanalytic theory, there has been the idea of an inner or representational world (proposed by Freud) as well as the internalization of relationships (Fairbairn, Winnicott). According to Freud, the first schemata evolve out of experiences regarding need fulfilment via the attachment figure. He argued that the resulting mental representation is an internal copy of the external world made up from memories, and thinking serves the role of experimental action. Fairbairn and Winnicott proposed that these early patterns of relationships become internalized and govern future relationships. However, the ethological-evolutionary aspects of the theory received more attention. Bowlby was interested in separation distress, and bonding in animals. He noticed that many infant behaviours are organized around the goal of mai
https://en.wikipedia.org/wiki/Free-air%20concentration%20enrichment
Free-Air Carbon dioxide Enrichment (FACE) is a method used by ecologists and plant biologists that raises the concentration of CO2 in a specified area and allows the response of plant growth to be measured. Experiments using FACE are required because most studies looking at the effect of elevated CO2 concentrations have been conducted in labs and greenhouses, where many factors are missing, including plant competition. Measuring the effect of elevated CO2 using FACE is a more natural way of estimating how plant growth will change in the future as the CO2 concentration rises in the atmosphere. FACE also allows the effect of elevated CO2 on plants that cannot be grown in small spaces (trees, for example) to be measured. However, FACE experiments carry significantly higher costs relative to greenhouse experiments. Method Horizontal or vertical pipes are placed in a circle around the experimental plot, which can be between 1 m and 30 m in diameter, and these emit CO2-enriched air around the plants. The concentration of CO2 is maintained at the desired level through placing sensors in the plot which feed back to a computer which then adjusts the flow of CO2 from the pipes. Usage FACE circles have been used in parts of the United States in temperate forests and also in stands of aspen in Italy. The method is also utilized for agricultural research. For example, FACE circles have been used to measure the response of soybean plants to increased levels of ozone and carbon dioxide at research facilities at the University of Illinois at Urbana–Champaign. FACE technologies have yet to be implemented in old-growth forests or key biomes for carbon sequestration, such as tropical or boreal forests; identifying future research priorities for these regions is considered an urgent concern. Examples of this method being used globally include TasFACE, which is investigating the effects of elevated CO2 on a native grassland in Tasmania, Australia. The National Wheat FACE array is presently being es
https://en.wikipedia.org/wiki/Nonpathogenic%20organisms
Nonpathogenic organisms are those that do not cause disease, harm or death to another organism. The term is usually used to describe bacteria. It describes a property of a bacterium – its inability to cause disease. Most bacteria are nonpathogenic. It can describe the presence of non-disease-causing bacteria that normally reside on the surface of vertebrates and invertebrates as commensals. Some nonpathogenic microorganisms are commensals on and inside the body of animals and are called microbiota. Some of these same nonpathogenic microorganisms have the potential to cause disease, i.e. to be pathogenic, if they enter the body, multiply and cause symptoms of infection. Immunocompromised individuals are especially vulnerable to bacteria that are typically nonpathogenic; because of a compromised immune system, disease occurs when these bacteria gain access to the body's interior. Genes have been identified that predispose a small number of persons to disease and infection with nonpathogenic bacteria. Nonpathogenic Escherichia coli strains normally found in the gastrointestinal tract have the ability to stimulate the immune response in humans, though further studies are needed to determine clinical applications. A particular strain of bacteria can be nonpathogenic in one species but pathogenic in another. One species of bacterium can have many different types or strains. One strain of a bacterium species can be nonpathogenic and another strain of the same species can be pathogenic.
https://en.wikipedia.org/wiki/Hasegawa%E2%80%93Mima%20equation
In plasma physics, the Hasegawa–Mima equation, named after Akira Hasegawa and Kunioki Mima, is an equation that describes a certain regime of plasma, where the time scales are very fast, and the distance scale in the direction of the magnetic field is long. In particular the equation is useful for describing turbulence in some tokamaks. The equation was introduced in Hasegawa and Mima's paper submitted in 1977 to Physics of Fluids, where they compared it to the results of the ATC tokamak. Assumptions The magnetic field is large enough that: for all quantities of interest. When the particles in the plasma are moving through a magnetic field, they spin in a circle around the magnetic field. The frequency of oscillation, known as the cyclotron frequency or gyrofrequency, is directly proportional to the magnetic field. The particle density follows the quasineutrality condition: where Z is the number of protons in the ions. If we are talking about hydrogen Z = 1, and n is the same for both species. This condition is true as long as the electrons can shield out electric fields. A cloud of electrons will surround any charge with an approximate radius known as the Debye length. For that reason this approximation means the size scale is much larger than the Debye length. The ion particle density can be expressed by a first order term that is the density defined by the quasineutrality condition equation, and a second order term which is how much it differs from the equation. The first order ion particle density is a function of position, but not time. This means that perturbations of the particle density change at a timescale much slower than the scale of interest. The second order particle density which causes a charge density and thus an electric potential can change with time. The magnetic field, B must be uniform in space, and not be a function of time. The magnetic field also moves at a timescale much slower than the scale of interest. This allows the time deri
https://en.wikipedia.org/wiki/Pultenaea%20sp.%20Genowlan%20Point
{{Automatic taxobox |name = Pultenaea sp. Genowlan Point |status = CR |status_system = EPBC |status_ref = |taxon = Pultenaea |species_text = P. sp. Genowlan Point |binomial_text = Pultenaea sp. Genowlan Point |authority = }} '''''Pultenaea'' sp. Genowlan Point''' is a critically endangered undescribed species of flowering plant in the family Fabaceae–Faboideae. It is only known from one population at Genowlan Point () in the Capertee Valley within the Rylstone local government area of New South Wales.
https://en.wikipedia.org/wiki/Hypotenuse
In geometry, a hypotenuse is the longest side of a right-angled triangle, the side opposite the right angle. The length of the hypotenuse can be found using the Pythagorean theorem, which states that the square of the length of the hypotenuse equals the sum of the squares of the lengths of the other two sides. For example, if one of the other sides has a length of 3 (when squared, 9) and the other has a length of 4 (when squared, 16), then their squares add up to 25. The length of the hypotenuse is the square root of 25, that is, 5. Etymology The word hypotenuse is derived from Greek (sc. or ), meaning "[side] subtending the right angle" (Apollodorus), hupoteinousa being the feminine present active participle of the verb hupo-teinō "to stretch below, to subtend", from teinō "to stretch, extend". The nominalised participle, , was used for the hypotenuse of a triangle in the 4th century BCE (attested in Plato, Timaeus 54d). The Greek term was loaned into Late Latin, as hypotēnūsa. The spelling in -e, as hypotenuse, is French in origin (Estienne de La Roche 1520). Calculating the hypotenuse The length of the hypotenuse can be calculated using the square root function implied by the Pythagorean theorem. Using the common notation that the length of the two legs (or catheti) of the triangle (the sides perpendicular to each other) are a and b and that of the hypotenuse is c, we have c = √(a² + b²). The Pythagorean theorem, and hence this length, can also be derived from the law of cosines by observing that the angle opposite the hypotenuse is 90° and noting that its cosine is 0: c = √(a² + b² − 2ab cos 90°) = √(a² + b²). Many computer languages support the ISO C standard function hypot(x,y), which returns the value above. The function is designed not to fail where the straightforward calculation might overflow or underflow and can be slightly more accurate and sometimes significantly slower. Some scientific calculators provide a function to convert from rectangular coordinates to polar coordinates. This gives both the
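Many languages expose this routine in their standard libraries; Python's math.hypot, for instance, avoids the intermediate overflow that the textbook formula can hit. A minimal sketch (naive_hypot is a hypothetical helper written here for contrast):

```python
import math

def naive_hypot(a, b):
    """Textbook formula: squaring first can overflow to infinity."""
    return math.sqrt(a * a + b * b)

# Both agree for ordinary magnitudes.
print(naive_hypot(3.0, 4.0))    # 5.0
print(math.hypot(3.0, 4.0))     # 5.0

# For huge legs, a*a overflows to inf, so the naive result is inf,
# while math.hypot still returns the correct finite length.
big = 1e200
print(naive_hypot(big, big))    # inf
print(math.hypot(big, big))     # about 1.414e+200
```

The library version typically rescales the arguments internally before squaring, which is why it survives inputs whose squares exceed the floating-point range.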
https://en.wikipedia.org/wiki/Catoptrics
Catoptrics (from katoptrikós, "specular", from katoptron "mirror") deals with the phenomena of reflected light and image-forming optical systems using mirrors. A catoptric system is also called a catopter (catoptre). Ancient texts Catoptrics is the title of two texts from ancient Greece: The Pseudo-Euclidean Catoptrics. This book is attributed to Euclid, although the contents are a mixture of work dating from Euclid's time together with work which dates to the Roman period. It has been argued that the book may have been compiled by the 4th century mathematician Theon of Alexandria. The book covers the mathematical theory of mirrors, particularly the images formed by plane and spherical concave mirrors. Hero's Catoptrics. Written by Hero of Alexandria, this work concerns the practical application of mirrors for visual effects. In the Middle Ages, this work was falsely ascribed to Ptolemy. It only survives in a Latin translation. The Latin translation of Alhazen's (Ibn al-Haytham) main work, Book of Optics (Kitab al-Manazir), exerted a great influence on Western science: for example, on the work of Roger Bacon, who cites him by name. His research in catoptrics (the study of optical systems using mirrors) centred on spherical and parabolic mirrors and spherical aberration. He made the observation that the ratio between the angle of incidence and refraction does not remain constant, and investigated the magnifying power of a lens. His work on catoptrics also contains the problem known as "Alhazen's problem". Alhazen's work influenced Averroes' writings on optics, and his legacy was further advanced through the 'reforming' of his Optics by Persian scientist Kamal al-Din al-Farisi (d. ca. 1320) in the latter's Kitab Tanqih al-Manazir (The Revision of [Ibn al-Haytham's] Optics). Catoptric telescopes The first practical catoptric telescope (the "Newtonian reflector") was built by Isaac Newton as a solution to the problem of chromatic aberration exhibited in telescopes
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20780b
Zinc finger protein 780B is a protein that in humans is encoded by the ZNF780B gene.
https://en.wikipedia.org/wiki/Starvation%20response
Starvation response in animals (including humans) is a set of adaptive biochemical and physiological changes, triggered by lack of food or extreme weight loss, in which the body seeks to conserve energy by reducing the amount of energy it expends. Equivalent or closely related terms include famine response, starvation mode, famine mode, starvation resistance, starvation tolerance, adapted starvation, adaptive thermogenesis, fat adaptation, and metabolic adaptation. In humans Ordinarily, the body responds to reduced energy intake by burning fat reserves and consuming muscle and other tissues. Specifically, the body burns fat after first exhausting the contents of the digestive tract along with glycogen reserves stored in liver cells and after significant protein loss. After prolonged periods of starvation, the body uses the proteins within muscle tissue as a fuel source, which results in muscle mass loss. Magnitude and composition The magnitude and composition of the starvation response (i.e. metabolic adaptation) was estimated in a study of 8 individuals living in isolation in Biosphere 2 for two years. During their isolation, they gradually lost an average of 15% (range: 9–24%) of their body weight due to harsh conditions. On emerging from isolation, the eight isolated individuals were compared with a 152-person control group that initially had similar physical characteristics. On average, the starvation response of the individuals after isolation was a reduction in daily total energy expenditure. of the starvation response was explained by a reduction in fat-free mass and fat mass. An additional was explained by a reduction in fidgeting. The remaining was statistically insignificant. General The energetic requirements of a body are composed of the basal metabolic rate (BMR) and the physical activity level (ERAT, exercise-related activity thermogenesis). This caloric requirement can be met with protein, fat, carbohydrates, or a mixture of those.
https://en.wikipedia.org/wiki/Piecewise%20linear%20continuation
Simplicial continuation, or piecewise linear continuation (Allgower and Georg), is a one-parameter continuation method which is well suited to small to medium embedding spaces. The algorithm has been generalized to compute higher-dimensional manifolds by (Allgower and Gnutzman) and (Allgower and Schmidt). The algorithm for drawing contours is a simplicial continuation algorithm, and since it is easy to visualize, it serves as a good introduction to the algorithm. Contour plotting The contour plotting problem is to find the zeros (contours) of (a smooth scalar-valued function) in the square , The square is divided into small triangles, usually by introducing points at the corners of a regular square mesh , , making a table of the values of at each corner , and then dividing each square into two triangles. The value of at the corners of the triangle defines a unique piecewise linear interpolant to over each triangle. One way of writing this interpolant on the triangle with corners is as the set of equations The first four equations can be solved for (this maps the original triangle to a right unit triangle), then the remaining equation gives the interpolated value of . Over the whole mesh of triangles, this piecewise linear interpolant is continuous. The contour of the interpolant on an individual triangle is a line segment (it is an interval on the intersection of two planes). The equation for the line can be found; however, the points where the line crosses the edges of the triangle are the endpoints of the line segment. The contour of the linear interpolant over a triangle The contour of the piecewise linear interpolant is a set of curves made up of these line segments. Any point on the edge connecting and can be written as with in , and the linear interpolant over the edge is So setting and Since this only depends on values on the edge, every triangle which shares this edge will produce the same point, so the contour wil
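The edge computation described above can be sketched in code: a point on the edge between two corners is a convex combination of them, and setting the linear interpolant along that edge to zero fixes the combination parameter. The function names and the corner/value representation below are my own illustration, not notation from the text:

```python
def edge_zero(p_i, p_j, f_i, f_j):
    """Point where the linear interpolant along edge (p_i, p_j) vanishes.

    Assumes f_i and f_j have opposite signs. Because the parameter t
    depends only on the two endpoint values, a neighbouring triangle
    sharing this edge yields the same point, so segments join into
    continuous contour curves.
    """
    t = f_i / (f_i - f_j)            # solves f_i + t*(f_j - f_i) = 0
    return (p_i[0] + t * (p_j[0] - p_i[0]),
            p_i[1] + t * (p_j[1] - p_i[1]))

def triangle_segment(corners, values):
    """Contour segment of the linear interpolant over one triangle.

    Returns the pair of edge crossings, or None when the contour misses
    the triangle. Corner values are assumed nonzero for simplicity.
    """
    crossings = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        if values[a] * values[b] < 0:        # sign change: contour crosses
            crossings.append(edge_zero(corners[a], corners[b],
                                       values[a], values[b]))
    return tuple(crossings) if len(crossings) == 2 else None

# f(x, y) = x + y - 0.5 on the right unit triangle: the contour is the
# segment from (0.5, 0) to (0, 0.5).
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [x + y - 0.5 for (x, y) in tri]
print(triangle_segment(tri, vals))
```

Collecting one such segment per triangle of the mesh reproduces the contour curves of the piecewise linear interpolant.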
https://en.wikipedia.org/wiki/Conformal%20prediction
Conformal prediction (CP) is a machine learning framework for uncertainty quantification that can produce prediction regions (prediction intervals) for any underlying point predictor (whether statistical, machine or deep learning) only assuming exchangeability of the data. CP works by computing nonconformity scores on previously labeled data, and using these to create prediction sets on a new (unlabeled) test data point. A transductive version of CP was first proposed in 1998 by Gammerman, Vovk, and Vapnik, and since then, several variants of conformal prediction have been developed with different computational complexities, formal guarantees, and practical applications. Conformal prediction requires a user-specified significance level for which the algorithm should produce its predictions. This significance level restricts the frequency of errors that the algorithm is allowed to make. For example, a significance level of 0.1 means that the algorithm can make at most 10% erroneous predictions. To meet this requirement, the output is a set prediction, instead of a point prediction produced by standard supervised machine learning models. For classification tasks, this means that predictions are not a single class, for example 'cat', but instead a set like {'cat', 'dog'}. Depending on how good the underlying model is (how well it can discern between cats, dogs and other animals) and the specified significance level, these sets can be smaller or larger. For regression tasks, the output is prediction intervals, where a smaller significance level (fewer allowed errors) produces wider intervals which are less specific, and vice versa – more allowed errors produce tighter prediction intervals. History Conformal prediction first arose in a collaboration between Gammerman, Vovk, and Vapnik in 1998; this initial version of conformal prediction used E-values, though the version of conformal prediction best known today uses p-values and was proposed a year later by Saunders et
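For regression, the widely used split-conformal variant can be sketched in a few lines: score a held-out calibration set by absolute residuals, take a conformal quantile of the scores, and pad the point prediction by that amount. The function name and the toy numbers below are invented for illustration:

```python
import math

def split_conformal_interval(calib_preds, calib_y, test_pred, alpha=0.1):
    """Split conformal prediction interval for one test point.

    Nonconformity score: absolute residual |y - y_hat| on a held-out
    calibration set. Under exchangeability the returned interval covers
    the true label with probability at least 1 - alpha.
    """
    scores = sorted(abs(y - p) for y, p in zip(calib_y, calib_preds))
    n = len(scores)
    # Conformal quantile: the ceil((n + 1) * (1 - alpha))-th smallest score.
    k = math.ceil((n + 1) * (1 - alpha))
    if k > n:                        # too few calibration points for this alpha
        return (-math.inf, math.inf)
    q = scores[k - 1]
    return (test_pred - q, test_pred + q)

# Hypothetical point predictor that always outputs 10.0, calibrated on
# 19 points whose absolute residuals are 0.1, 0.2, ..., 1.9.
calib_preds = [10.0] * 19
calib_y = [10.0 + 0.1 * i for i in range(1, 20)]
print(split_conformal_interval(calib_preds, calib_y, 10.0, alpha=0.1))
```

Note how the interval width is set entirely by the calibration scores and alpha: a smaller alpha picks a larger quantile and hence a wider, more conservative interval, matching the trade-off described above.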
https://en.wikipedia.org/wiki/Biological%20Stain%20Commission
The Biological Stain Commission (BSC) is an organization that provides third-party testing and certification of dyes and a few other compounds that are used to enhance contrast in specimens examined in biological and medical laboratories. The BSC is a century-old organization well known to many thousands of scientists, worldwide but especially in North America, who buy BSC-certified stains for staining microscopic preparations and for making selective culture media for bacteria. Manufacturers and other vendors submit samples from their batches of dyes to the BSC's independent laboratory in Rochester, NY. The BSC's certification label on a bottle of dye indicates that the contents are from a batch that passed the tests for chemical purity and for efficacy as a biological stain. These tests are published (Penney et al. 2002a). Changes to tests and additions to the list of stains eligible for certification are published from time to time in Biotechnic & Histochemistry and are summarized on the commission's web site. A BSC-certified stain rarely costs more than a non-certified one with the same name. The BSC is a non-profit organization, incorporated in the State of New York, for the purpose of ensuring a supply of high quality stains (mostly dyes) for use in biological and medical laboratories. Its origins date from 1922, when vendors of biological stains in the USA had exhausted their stocks of pre-war dyes imported from Germany. American dye manufacturers at that time were unable to produce products that were consistently reliable in histological microtechnique and bacteriology (Conn 1980–1981; Penney 2000). The commission's testing laboratory was initially at the Agriculture Experimental Station in Geneva, NY, directed by Harold J. Conn. Since 1947 the laboratory has been located at the University of Rochester Medical College, Rochester, NY. Currently 57 individual dyes and about 5 mixtures of different dyes are eligible for testing and certification by the BSC. The a
https://en.wikipedia.org/wiki/Trochleitis
Trochleitis is inflammation of the superior oblique tendon trochlea apparatus characterized by localized swelling, tenderness, and severe pain. This condition is an uncommon but treatable cause of periorbital pain. The trochlea is a ring-like apparatus of cartilage through which passes the tendon of the superior oblique muscle. It is located in the superior nasal orbit and functions as a pulley for the superior oblique muscle. Inflammation of the trochlear region leads to a painful syndrome with swelling and exquisite point tenderness in the upper medial rim of the orbit. A vicious cycle may ensue such that inflammation causes swelling and fraying of the tendon, which then increases the friction of passing through the trochlea, which in turn adds to the inflammation. Trochleitis has also been associated with triggering or worsening of migraine attacks in patients with pre-existing migraines (Yanguela, 2002). Symptoms Patients with trochleitis typically experience a dull fluctuating aching over the trochlear region developing over a few days. Some may also feel occasional sharp pains punctuating the ache. In patients with migraines, trochleitis may occur simultaneously with headache. Presentation is usually unilateral with palpable swelling over the affected area supranasal to the eye. The trochlear region is extremely tender to touch. Pain is exacerbated by eye movements looking down and inwards, and especially in supraduction (looking up) and looking outwards, which stretches the superior oblique muscle tendon. Notably, there is no restriction of extraocular movements, no diplopia, and often no apparent ocular signs such as proptosis. However, occasionally mild ptosis is found. The absence of generalized signs of orbital involvement is helpful in eliminating other more common causes of periorbital pain. Cause The cause of trochleitis is often unknown (idiopathic trochleitis), but it has been known to occur in patients with rheumatological diseases such as sys
https://en.wikipedia.org/wiki/Subject%20reduction
In type theory, a type system has the property of subject reduction (also subject evaluation, type preservation or simply preservation) if evaluation of expressions does not cause their type to change. Formally, if Γ ⊢ e1 : τ and e1 → e2 then Γ ⊢ e2 : τ. Intuitively, this means one would not like to write an expression in, say, Haskell, of type Int, and have it evaluate to a value v, only to find out that v is a string. Together with progress, it is an important meta-theoretical property for establishing type soundness of a type system. The opposite property, if Γ ⊢ e2 : τ and e1 → e2 then Γ ⊢ e1 : τ, is called subject expansion. It often does not hold, as evaluation can erase ill-typed sub-terms of an expression, resulting in a well-typed one.
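The property can be checked mechanically for a toy language. The sketch below (the tuple encoding, typeof, and step are my own illustration, not a standard formulation) implements a small-step evaluator and a type checker for a language of integers, booleans, addition, and conditionals, and asserts that every step preserves the type:

```python
def typeof(e):
    """Type of a well-typed term; raises AssertionError on ill-typed input."""
    if isinstance(e, bool):          # check bool before int: bool is an int subtype
        return "Bool"
    if isinstance(e, int):
        return "Int"
    tag = e[0]
    if tag == "add":
        assert typeof(e[1]) == typeof(e[2]) == "Int"
        return "Int"
    if tag == "if":
        assert typeof(e[1]) == "Bool"
        t = typeof(e[2])
        assert typeof(e[3]) == t     # both branches must share one type
        return t

def step(e):
    """One step of evaluation (leftmost-innermost)."""
    tag = e[0]
    if tag == "add":
        if isinstance(e[1], tuple):
            return ("add", step(e[1]), e[2])
        if isinstance(e[2], tuple):
            return ("add", e[1], step(e[2]))
        return e[1] + e[2]
    if tag == "if":
        if isinstance(e[1], tuple):
            return ("if", step(e[1]), e[2], e[3])
        return e[2] if e[1] else e[3]

# Subject reduction: each step of a well-typed term keeps its type.
e = ("if", True, ("add", 1, 2), 0)
while isinstance(e, tuple):
    t = typeof(e)
    e = step(e)
    assert typeof(e) == t            # type unchanged by evaluation
print(e)   # 3
```

Note that subject expansion fails here too: the ill-typed ("if", True, 1, False) steps to the well-typed term 1.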
https://en.wikipedia.org/wiki/List%20of%20mathematics-based%20methods
This is a list of mathematics-based methods.
Adams' method (differential equations)
Akra–Bazzi method (asymptotic analysis)
Bisection method (root finding)
Brent's method (root finding)
Condorcet method (voting systems)
Coombs' method (voting systems)
Copeland's method (voting systems)
Crank–Nicolson method (numerical analysis)
D'Hondt method (voting systems)
D21 – Janeček method (voting system)
Discrete element method (numerical analysis)
Domain decomposition method (numerical analysis)
Epidemiological methods
Euler's forward method
Explicit and implicit methods (numerical analysis)
Finite difference method (numerical analysis)
Finite element method (numerical analysis)
Finite volume method (numerical analysis)
Highest averages method (voting systems)
Method of exhaustion
Method of infinite descent (number theory)
Information bottleneck method
Inverse chain rule method (calculus)
Inverse transform sampling method (probability)
Iterative method (numerical analysis)
Jacobi method (linear algebra)
Largest remainder method (voting systems)
Level-set method
Linear combination of atomic orbitals molecular orbital method (molecular orbitals)
Method of characteristics
Least squares method (optimization, statistics)
Maximum likelihood method (statistics)
Method of complements (arithmetic)
Method of moving frames (differential geometry)
Method of successive substitution (number theory)
Monte Carlo method (computational physics, simulation)
Newton's method (numerical analysis)
PEMDAS method (order of operations)
Perturbation methods (functional analysis, quantum theory)
Probabilistic method (combinatorics)
Romberg's method (numerical analysis)
Runge–Kutta method (numerical analysis)
Sainte-Laguë method (voting systems)
Schulze method (voting systems)
Sequential Monte Carlo method
Simplex method
Spectral method (numerical analysis)
Variational methods (mathematical analysis, differential equations)
Welch's method
See also
Automatic basis function construction
List of graphi
https://en.wikipedia.org/wiki/Variant%20of%20uncertain%20significance
A variant of uncertain (or unknown) significance (VUS) is a genetic variant that has been identified through genetic testing but whose significance to the function or health of an organism is not known. Two related terms are "gene of uncertain significance" (GUS), which refers to a gene that has been identified through genome sequencing but whose connection to a human disease has not been established, and "insignificant mutation", referring to a gene variant that has no impact on the health or function of an organism. The term "variant" is favored in clinical practice over "mutation" because it can be used to describe an allele more precisely (i.e. without inherently connoting pathogenicity). When the variant has no impact on health, it is called a "benign variant". When it is associated with a disease, it is called a "pathogenic variant". A "pharmacogenomic variant" has an effect only when an individual takes a particular drug and therefore is neither benign nor pathogenic. A VUS is most commonly encountered by people when they get the results of a lab test looking for a mutation in a particular gene. For example, many people know that mutations in the BRCA1 gene are involved in the development of breast cancer because of the publicity surrounding Angelina Jolie's preventative treatment. Few people are aware of the immense number of other genetic variants in and around BRCA1 and other genes that may predispose to hereditary breast and ovarian cancer. A recent study of the genes ATM, BRCA1, BRCA2, CDH1, CHEK2, PALB2 and TP53 found 15,311 DNA sequence variants in only 102 patients. Many of those 15,311 variants have no significant phenotypic effect. That is, a difference can be seen in the DNA sequence, but the differences have no effect on the growth or health of the person. Identifying variants that are significant or likely to be significant is a difficult task that may require expert human and in silico analysis, laboratory experiments and even information theo
https://en.wikipedia.org/wiki/NextWave%20Wireless
NextWave Wireless Inc. is a wireless technology company that produces mobile multimedia solutions and speculates in the wireless spectrum market. The company consists principally of various wireless spectrum holdings. The company is most notable for successfully suing the U.S. government for improperly seizing its assets while under bankruptcy protection. AT&T announced its acquisition of NextWave in 2012. History The company originally spun out of QUALCOMM in 1995 and began life as the biggest bidder in the FCC C-Block. NextWave originally won the licenses in an auction intended for small businesses with limited resources in 1996. NextWave, which bid $4.7 billion for the licenses, made the minimum 10 percent down payment of $500 million for the spectrum. But shortly thereafter NextWave filed for bankruptcy protection and defaulted on its payments for the licenses. The FCC, in turn, confiscated the licenses and re-sold them to Verizon Wireless and the subsidiaries of AT&T Wireless and Cingular Wireless, among others, for $17 billion in an auction that ended in January 2001. Ultimately NextWave prevailed in the Supreme Court, 8-1, and was permitted to keep the PCS licenses. NextWave's bankruptcy protection lasted approximately ten years, during which time the asset value of the licenses had dramatically increased and NextWave was able to repay the original debt and sell their spectrum assets to Verizon Wireless, Cingular (now AT&T) and MetroPCS. They re-emerged as NextWave Wireless with $550M in capital. The reborn company had several areas of focus: development of a 4G broadband network through its Network Solutions Group in Las Vegas, NV, development of WiMAX baseband and RF integrated circuits and related technology in its Advanced Technology Group in San Diego, CA, and accumulation of spectrum and other carrier assets both in the U.S. and internationally. NextWave made several significant acquisitions that shaped its business and technology strategy. P
https://en.wikipedia.org/wiki/Kelch%20motif
The Kelch motif is a region of protein sequence found widely in proteins from bacteria and eukaryotes. This sequence motif is composed of about 50 amino acid residues which form a structure of a four stranded beta-sheet "blade". This sequence motif is found in between five and eight tandem copies per protein which fold together to form a larger circular solenoid structure called a beta-propeller domain. Proteins containing Kelch motifs The Kelch motif is widely found in eukaryotic and bacterial species. Notably the human genome contains around 100 proteins containing the Kelch motif. Within individual proteins the motif occurs multiple times. For example, the motif appears 6 times in Drosophila egg-chamber regulatory protein. The motif is also found in mouse protein MIPP and in a number of poxviruses. In addition, kelch repeats have been recognised in alpha- and beta-scruin, in galactose oxidase from the fungus Dactylium dendroides, and in the Escherichia coli NanM protein, a sialic acid mutarotase. Structure The structure of galactose oxidase reveals that the repeated Kelch sequence motif corresponds to a 4-stranded anti-parallel beta-sheet motif that forms the repeat unit in a super-barrel structural fold commonly known as a beta propeller. Function The known functions of kelch-containing proteins are diverse: scruin is an actin cross-linking protein; galactose oxidase catalyses the oxidation of the hydroxyl group at the C6 position in D-galactose; neuraminidase hydrolyses sialic acid residues from glycoproteins; NanM is a sialic acid mutarotase, involved in efficient utilisation of sialic acid by bacteria; kelch may have a cytoskeletal function, as it is localised to the actin-rich ring canals that connect the 15 nurse cells to the developing oocyte in Drosophila. See also WD40 repeat Kelch protein
https://en.wikipedia.org/wiki/Global%20Telecoms%20Exploitation
Global Telecoms Exploitation is reportedly a secret British telephonic mass surveillance programme run by the British signals intelligence and computer security agency, the Government Communications Headquarters (GCHQ). Its existence was revealed along with its sister programme, Mastering the Internet, in June 2013 as part of the global surveillance disclosures by the former National Security Agency contractor Edward Snowden. See also Mass surveillance in the United Kingdom Mastering the Internet Tempora
https://en.wikipedia.org/wiki/Mary%20Clem
Mary A. Clem (née Mary A. McLaughlin; 1905–1979) was an American mathematician, and a human computer. She was a staff member at Iowa State University, and was recognized for inventing the “zero check” technique for detecting errors. Biography Clem was born on October 19, 1905 in the small town of Nevada, in Story County, western Iowa. She completed her high school degree and found employment for several years with the Iowa State Highway Commission and Iowa State College as a computing clerk, auditing clerk, and bookkeeper. In 1931, she joined the Mathematics Statistical Service of the Mathematics Department of Iowa State College to work as a human computer under the supervision of George Snedecor. Although she complained that mathematics was her poorest subject in high school, she was fascinated with figures and data. Most of her work was done via punch cards, both creating formulas and cards, and running accuracy checks on them. She invented the “zero check” while working in Snedecor’s lab. The “zero check” is a sum that should equal zero if all other numbers had been correctly calculated. These sums helped check for errors in computing algorithms. Clem expressed that her lack of training as a mathematician is what made her notice these sums, as they had often been overlooked by others. In 1940, Clem was advanced to be technician and chief statistical clerk in charge of the Computing Service of the Statistical Laboratory. In 1962, she transferred to the new Computation Center at Iowa State University. Clem went on the 2nd Allied Mission to Greece in 1946 as a junior statistician, and there she observed the elections. In 1952, she was a statistical consultant to the Atomic Bomb Casualty Commission in Hiroshima, Japan. Publications Homeyer, Paul G.; Clem, Mary A.; and Federer, Walter T. (1947) "Punched card and calculating machine methods for analyzing lattice experiments including lattice squares and the cubic lattice," Research Bulletin (Iowa Agriculture and
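The article does not record which particular sums Clem built into her work, but the flavor of a "zero check" can be illustrated with one standard zero-sum identity of this kind: deviations of values from their mean must total zero, so a nonzero total exposes an upstream arithmetic slip. A hypothetical sketch, not Clem's actual check:

```python
def zero_check(deviations, tol=1e-9):
    """A sum that should equal zero if the upstream arithmetic was right;
    a nonzero total flags an error (illustrative, not Clem's actual check)."""
    return abs(sum(deviations)) < tol

data = [4.0, 7.0, 9.0, 12.0]
mean = sum(data) / len(data)        # 8.0
devs = [x - mean for x in data]     # -4, -1, 1, 4: sums to zero
bad = [-4.0, -1.0, 1.5, 4.0]        # a transcription slip in one entry

print(zero_check(devs), zero_check(bad))  # True False
```

The check costs one extra addition per column, which is why it suited punch-card workflows where re-deriving a whole table was expensive.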
https://en.wikipedia.org/wiki/Piriform%20aperture
The piriform aperture, pyriform aperture, or anterior nasal aperture, is a pear-shaped opening in the human skull. Its long axis is vertical, and narrow end upward; in the recent state it is much contracted by the lateral nasal cartilage and the greater and lesser alar cartilages of the nose. It is bounded above by the inferior borders of the nasal bones; laterally by the thin, sharp margins which separate the anterior from the nasal surfaces of the maxilla; and below by the same borders, where they curve medialward to join each other at the anterior nasal spine.
https://en.wikipedia.org/wiki/Sport%20%28botany%29
In botany, a sport or bud sport, traditionally called lusus, is a part of a plant that shows morphological differences from the rest of the plant. Sports may differ by foliage shape or color, flowers, fruit, or branch structure. The cause is generally thought to be a chance genetic mutation. Sports with desirable characteristics are often propagated vegetatively to form new cultivars that retain the characteristics of the new morphology. Such selections are often prone to "reversion", meaning that part or all of the plant reverts to its original form. An example of a bud sport is the nectarine, at least some of which developed as a bud sport from peaches. Other common fruits resulting from a sport mutation are the red Anjou pear, the Ruby Red grapefruit, and the 'Pink Lemonade' lemon, which is a sport of the "Eureka" lemon. See also Mosaic (genetics)
https://en.wikipedia.org/wiki/Physical%20plant
Physical plant, mechanical plant or industrial plant (and where context is given, often just plant) refers to the necessary infrastructure used in operation and maintenance of a given facility. The operation of these facilities, or the department of an organization which does so, is called "plant operations" or facility management. Industrial plant should not be confused with "manufacturing plant" in the sense of "a factory". This is a holistic look at the architecture, design, equipment, and other peripheral systems linked with a plant required to operate or maintain it. Power plants Nuclear power The design and equipment in a nuclear power plant has for the most part remained stagnant over the last 30 years. There are three types of reactor cooling mechanisms: “Light water reactors, Liquid Metal Reactors and High Temperature Gas-Cooled Reactors”. While for the most part equipment remains the same, there have been some minimal modifications to existing reactors improving safety and efficiency. There have also been significant design changes for all these reactors; however, they remain theoretical and unimplemented. Nuclear power plant equipment can be separated into two categories: primary systems and balance-of-plant systems. Primary systems are equipment involved in the production and safety of nuclear power. The reactor specifically has equipment such as reactor vessels, usually surrounding the core for protection, and the reactor core, which holds the fuel rods. It also includes reactor cooling equipment consisting of liquid cooling loops circulating coolant. These loops are usually separate systems, each having at least one pump. Other equipment includes steam generators and pressurizers that ensure pressure in the plant is adjusted as needed. Containment equipment is the physical structure built around the reactor to protect the surroundings from reactor failure. Lastly, primary systems also include emergency core cooling equipment and reactor protection equipment.
https://en.wikipedia.org/wiki/Feedback%20with%20Carry%20Shift%20Registers
In sequence design, a Feedback with Carry Shift Register (or FCSR) is the arithmetic or with-carry analog of a linear-feedback shift register (LFSR). If N > 1 is an integer, then an N-ary FCSR of length r is a finite state device with a state consisting of a vector of elements (a_0, a_1, ..., a_(r−1)) in {0, 1, ..., N−1} and an integer z (the memory, or carry). The state change operation is determined by a set of coefficients q_1, ..., q_r and is defined as follows: compute s = z + q_1·a_(r−1) + q_2·a_(r−2) + ... + q_r·a_0. Express s as s = a_r + N·z′ with a_r in {0, 1, ..., N−1}. Then the new state is (a_1, a_2, ..., a_r; z′). By iterating the state change an FCSR generates an infinite, eventually periodic sequence of numbers in {0, 1, ..., N−1}. FCSRs have been used in the design of stream ciphers (such as the F-FCSR generator), in the cryptanalysis of the summation combiner stream cipher (the reason Goresky and Klapper invented them), and in generating pseudorandom numbers for quasi-Monte Carlo (under the name Multiply With Carry (MWC) generator, invented by Couture and L'Ecuyer), generalizing work of Marsaglia and Zaman. FCSRs are analyzed using number theory. Associated with the FCSR is a connection integer q = q_r·N^r + ... + q_1·N − 1. Associated with the output sequence a_0, a_1, a_2, ... is the N-adic number α = a_0 + a_1·N + a_2·N^2 + ... The fundamental theorem of FCSRs says that there is an integer u so that α = u/q, a rational number. The output sequence is strictly periodic if and only if u is between −q and 0. It is possible to express u as a simple quadratic polynomial involving the initial state and the q_i. There is also an exponential representation of FCSRs: if g is the inverse of N modulo q, and the output sequence is strictly periodic, then a_i = (A·g^i mod q) mod N, where A is an integer. It follows that the period is at most the order of N in the multiplicative group of units modulo q. This is maximized when q is prime and N is a primitive element modulo q. In this case, the period is q − 1. In this case the output sequence is called an l-sequence (for "long sequence"). l-sequences have many excellent statistical properties that make them candidates for use in applications, including near uniform distribution of sub-blocks, ideal arithmetic autocorrelations, and th
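The state-change operation can be sketched in a few lines of Python. The tap coefficients and initial state below are illustrative choices: they encode connection integer q = 32 + 4 + 2 − 1 = 37 for N = 2, and since 37 is prime with 2 a primitive root modulo 37, the output should settle into an l-sequence of period q − 1 = 36:

```python
def fcsr(taps, cells, carry, n=2, steps=40):
    """N-ary FCSR: taps = [q_1, ..., q_r], cells = [a_0, ..., a_{r-1}],
    carry = integer memory z. Yields the output sequence a_0, a_1, ..."""
    cells = list(cells)
    r = len(cells)
    for _ in range(steps):
        yield cells[0]
        # s = z + q_1*a_{r-1} + q_2*a_{r-2} + ... + q_r*a_0
        s = carry + sum(taps[i] * cells[r - 1 - i] for i in range(r))
        cells = cells[1:] + [s % n]   # new cell a_r in {0, ..., n-1}
        carry = s // n                # new carry z'

# taps [q_1..q_5] = [1, 1, 0, 0, 1] give q = 2^5 + 2^2 + 2 - 1 = 37
seq = list(fcsr(taps=[1, 1, 0, 0, 1], cells=[1, 0, 0, 0, 0], carry=0, steps=120))
```

Note that the carry stays bounded by the sum of the taps, so the state space is finite and the output is eventually periodic, as the theory above requires.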
https://en.wikipedia.org/wiki/Longest%20increasing%20subsequence
In computer science, the longest increasing subsequence problem aims to find a subsequence of a given sequence in which the subsequence's elements are sorted in ascending order and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. Longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics. The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence. Example In the first 16 terms of the binary Van der Corput sequence 0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15 one of the longest increasing subsequences is 0, 2, 6, 9, 11, 15. This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not the only solution: for instance, 0, 4, 6, 9, 11, 15 0, 2, 6, 9, 13, 15 0, 4, 6, 9, 13, 15 are other increasing subsequences of equal length in the same input sequence. Relations to other algorithmic problems The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n). The largest clique in a permutation graph corresponds to the longest decreasing subsequence of the permutation that defines the graph (assuming the original non-permuted sequence is sorted from lowest value to highest). Similarly, the maximum independent set in a permutation graph corresponds to the longest non-decreasing subsequence. Therefore, longest increasing subsequence algorithms can be
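The O(n log n) bound is achieved by a patience-sorting style algorithm that maintains, for each length, the smallest possible tail value of an increasing subsequence of that length; a sketch, run on the Van der Corput example above:

```python
from bisect import bisect_left

def longest_increasing_subsequence(seq):
    """Return one longest strictly increasing subsequence in O(n log n)."""
    tails_val = []            # tails_val[k]: smallest tail of a length-(k+1) increasing subsequence
    tails_idx = []            # index into seq of that tail
    prev = [-1] * len(seq)    # predecessor links for reconstruction
    for i, x in enumerate(seq):
        k = bisect_left(tails_val, x)   # first tail that is >= x
        if k > 0:
            prev[i] = tails_idx[k - 1]
        if k == len(tails_val):
            tails_val.append(x)
            tails_idx.append(i)
        else:
            tails_val[k] = x
            tails_idx[k] = i
    # walk the predecessor links back from the tail of the longest pile
    out, i = [], tails_idx[-1]
    while i >= 0:
        out.append(seq[i])
        i = prev[i]
    return out[::-1]

van_der_corput = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
print(longest_increasing_subsequence(van_der_corput))  # [0, 2, 6, 9, 11, 15]
```

Because tails_val is kept sorted, each element costs one binary search; switching bisect_left to bisect_right would instead find a longest non-decreasing subsequence.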
https://en.wikipedia.org/wiki/Cooling%20bath
A cooling bath or ice bath, in laboratory chemistry practice, is a liquid mixture which is used to maintain low temperatures, typically between 13 °C and −196 °C. These low temperatures are used to collect liquids after distillation, to remove solvents using a rotary evaporator, or to perform a chemical reaction below room temperature (see Kinetic control). Cooling baths are generally one of two types: (a) a cold fluid (particularly liquid nitrogen, water, or even air) — but most commonly the term refers to (b) a mixture of 3 components: (1) a cooling agent (such as dry ice or ice); (2) a liquid "carrier" (such as liquid water, ethylene glycol, acetone, etc.), which transfers heat between the bath and the vessel; (3) an additive to depress the melting point of the solid/liquid system. A familiar example of this is the use of an ice/rock-salt mixture to freeze ice cream. Adding salt lowers the freezing temperature of water, lowering the minimum temperature attainable with only ice. Mixed-solvent cooling baths Mixing solvents creates cooling baths with variable freezing points. Temperatures between approximately −78 °C and −17 °C can be maintained by placing coolant into a mixture of ethylene glycol and ethanol, while mixtures of methanol and water span the −128 °C to 0 °C temperature range. Dry ice sublimes at −78 °C, while liquid nitrogen is used for colder baths. As water or ethylene glycol freeze out of the mixture, the concentration of ethanol/methanol increases. This leads to a new, lower freezing point. With dry ice, these baths will never freeze solid, as pure methanol and ethanol both freeze below −78 °C (−98 °C and −114 °C respectively). Relative to traditional cooling baths, solvent mixtures are adaptable for a wide temperature range. In addition, the solvents necessary are cheaper and less toxic than those used in traditional baths. Traditional cooling baths Water and ice baths A bath of ice and water will maintain a temperature of 0 °C, since
https://en.wikipedia.org/wiki/Reflex%20bradycardia
Reflex bradycardia is a bradycardia (decrease in heart rate) in response to the baroreceptor reflex, one of the body's homeostatic mechanisms for preventing abnormal increases in blood pressure. In the presence of high mean arterial pressure, the baroreceptor reflex produces a reflex bradycardia as a method of decreasing blood pressure by decreasing cardiac output. Blood pressure (BP) is determined by cardiac output (CO) and total peripheral resistance (TPR), as represented by the formula BP = CO x TPR. Cardiac output (CO) is affected by two factors, the heart rate (HR) and the stroke volume (SV), the volume of blood pumped from one ventricle of the heart with each beat (CO = HR x SV, therefore BP = HR x SV x TPR). In reflex bradycardia, blood pressure is reduced by decreasing cardiac output (CO) via a decrease in heart rate (HR). An increase in blood pressure can be caused by increased cardiac output, increased total peripheral resistance, or both. The baroreceptors in the carotid sinus sense this increase in blood pressure and relay the information to the cardiovascular centres in the medulla oblongata. In order to maintain homeostasis, the cardiovascular centres activate the parasympathetic nervous system. Via the vagus nerve, the parasympathetic nervous system stimulates neurons that release the neurotransmitter acetylcholine (ACh) at synapses with cardiac muscle cells. Acetylcholine then binds to M2 muscarinic receptors, causing the decrease in heart rate that is referred to as reflex bradycardia. The M2 muscarinic receptors decrease the heart rate by inhibiting depolarization of the sinoatrial node via Gi protein-coupled receptors and through modulation of muscarinic potassium channels. Additionally, M2 receptors reduce the contractile forces of the atrial cardiac muscle and reduce the conduction velocity of the atrioventricular node (AV node). However, M2 receptors have no effect on the contractile forces of the ventricular muscle. Stimuli causing reflex
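The relationship BP = HR × SV × TPR can be made concrete with a quick sketch; the figures below are illustrative, not clinical reference values, with TPR taken in mmHg·min/L:

```python
def mean_pressure(hr_bpm, sv_ml, tpr):
    """BP = CO x TPR, with CO = HR x SV converted to litres per minute
    (TPR in mmHg*min/L; all figures illustrative, not clinical values)."""
    co_l_per_min = hr_bpm * sv_ml / 1000.0
    return co_l_per_min * tpr

# a reflex drop in heart rate at fixed stroke volume and resistance
# lowers pressure in direct proportion to the rate
before = mean_pressure(hr_bpm=75, sv_ml=70, tpr=18)   # CO = 5.25 L/min -> 94.5
after = mean_pressure(hr_bpm=60, sv_ml=70, tpr=18)    # CO = 4.2 L/min -> about 75.6
```

The proportionality is the point of the reflex: with SV and TPR held constant, the vagally mediated fall in HR is the only term through which pressure drops.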
https://en.wikipedia.org/wiki/Enrico%20Fermi%20Prize
The Enrico Fermi Prize, first awarded in 2001, is given by the Italian Physical Society (Società Italiana di Fisica). It is a yearly award of €30,000 honoring one or more Members of the Society who have "particularly honoured physics with their discoveries." Recipients See also List of physics awards
https://en.wikipedia.org/wiki/Amorphous%20carbonia
Amorphous carbonia, also called a-carbonia or a-CO2, is an exotic amorphous solid form of carbon dioxide that is analogous to amorphous silica glass. It was first made in the laboratory in 2006 by subjecting dry ice to high pressures (40-48 gigapascal, or 400,000 to 480,000 atmospheres), in a diamond anvil cell. Amorphous carbonia is not stable at ordinary pressures—it quickly reverts to normal CO2. While normally carbon dioxide forms molecular crystals, where individual molecules are bound by Van der Waals forces, in amorphous carbonia a covalently bound three-dimensional network of atoms is formed, in a structure analogous to silicon dioxide or germanium dioxide glass. Mixtures of a-carbonia and a-silica may be a prospective very hard and stiff glass material stable at room temperature. Such glass may serve as protective coatings, e.g. in microelectronics. The discovery has implications for astrophysics, as interiors of massive planets may contain amorphous solid carbon dioxide.
https://en.wikipedia.org/wiki/Analog%20device
Analog devices are a combination of both analog machine and analog media that can together measure, record, reproduce, receive or broadcast continuous information, for example, the almost infinite number of grades of transparency, voltage, resistance, rotation, or pressure. In theory, the continuous information in an analog signal has an infinite number of possible values with the only limitation on resolution being the accuracy of the analog device. Analog media are materials with analog properties, such as photographic film, which are used in analog devices, such as cameras. Example devices Non-electrical There are notable non-electrical analog devices, such as some clocks (sundials, water clocks), the astrolabe, slide rules, the governor of a steam engine, the planimeter (a simple device that measures the surface area of a closed shape), Kelvin's mechanical tide predictor, acoustic rangefinders, servomechanisms (e.g. the thermostat), a simple mercury thermometer, a weighing scale, and the speedometer. Electrical The telautograph is an analogue precursor to the modern fax machine. It transmits electrical impulses recorded by potentiometers to stepping motors attached to a pen, thus being able to reproduce a drawing or signature made by the sender at the receiver's station. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe used rotating drums to make such transmissions. An analog synthesizer is a synthesizer that uses analog circuits and analog computer techniques to generate sound electronically. Analog television encodes and transports the picture and sound information as an analog signal, that is, by varying the amplitude and/or frequencies of the broadcast signal. All systems preceding digital television, such as NTSC, PAL, and SECAM, are analog television systems. An analog computer is a form of computer that uses electrical, mechanical, or hydraulic phenomena to model the problem being solved.
https://en.wikipedia.org/wiki/Photopolymerization-based%20signal%20amplification
Photopolymerization-based signal amplification (PBA) is a method of amplifying detection signals from molecular recognition events in an immunoassay by utilizing a radical polymerization initiated through illumination by light. To contrast between a negative and a positive result, PBA is linked to a colorimetric method, thereby resulting in a change in color when a targeted analyte is detected, i.e., a positive signal. PBA is also used to quantify the concentration of the analyte by measuring intensity of the color. Method PBA is achieved by sequentially adding three kinds of solutions to a test strip and illuminating it with green light. First, a droplet of a patient’s sample is loaded on a test strip whose surface is covered with immobilized antibodies. If the sample contains the target antigens, they bind to the immobilized antibodies. (Figure 1a) Second, eosin-conjugated antibodies are added to the patient’s sample. This second antibody specifically binds with the bound antigens, thereby causing each bound antigen to be sandwiched between the first and eosin-conjugated antibodies. (Figure 1b) After ten minutes, the droplet on the surface is rinsed away in order to make sure that only the sandwiched binding complexes are left on the surface before adding the third solution. Lastly, a droplet of a mixture of monomers (e.g., PEGDA and N-vinyl pyrrolidone) and phenolphthalein is added to the test strip, and the droplet is illuminated with green visible light, by which the eosin molecules become excited and produce radicals. (Figure 1c) As a result, propagation occurs and polymers are formed. Since phenolphthalein molecules are surrounded by the polymers and thus left on the surface even after another rinse, the test strip turns red when a base is added. (Figure 1d) On the other hand, if the patient’s sample does not include any targeted antigens, the sandwiched binding complexes on the surface will not be formed, which leads to no red color. Principle Regen
https://en.wikipedia.org/wiki/Microphone%20preamplifier
The term microphone preamplifier can either refer to the electronic circuitry within a microphone, or to a separate device or circuit that the microphone is connected to. In either instance, the purpose of the microphone preamplifier is the same. A microphone preamplifier is a sound engineering device that prepares a microphone signal to be processed by other equipment. Microphone signals are often too weak to be transmitted to units such as mixing consoles and recording devices with adequate quality. Preamplifiers increase a microphone signal to line level (i.e. the level of signal strength required by such devices) by providing stable gain while preventing induced noise that would otherwise distort the signal. For additional discussion of signal level, see Gain stage. A microphone preamplifier is colloquially called a microphone preamp, mic preamp, micamp, preamp (not to be confused with a control amplifier in high-fidelity reproduction equipment), mic pre and pre. Technical details The output voltage on a dynamic microphone may be very low, typically in the 1 to 100 microvolt range. A microphone preamplifier increases that level by up to 70 dB, to anywhere up to 10 volts. This stronger signal is used to drive equalization circuitry within an audio mixer, to drive external audio effects, and to sum with other signals to create an audio mix for audio recording or for live sound. Functions In addition to providing gain for the microphone signal, a microphone preamplifier as found in a sound mixer or as a discrete component typically also provides power to the microphone in the form of either 24 or 48 volt phantom power. In use A microphone is a transducer and as such is the source of much of the coloration of an audio mix. Most audio engineers would assert that a microphone preamplifier also affects the sound quality of an audio mix. A preamplifier might load the microphone with low impedance, forcing the microphone to work harder and so change its tone quality.
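The decibel figures above translate to voltage ratios via gain(dB) = 20·log10(Vout/Vin); a small sketch, with illustrative signal levels:

```python
def apply_gain(v_in, gain_db):
    """Output voltage after applying a gain expressed in decibels:
    gain_db = 20 * log10(v_out / v_in)."""
    return v_in * 10 ** (gain_db / 20)

# a 100 microvolt dynamic-mic signal through 70 dB of preamp gain
v_out = apply_gain(100e-6, 70)   # about 0.32 V, i.e. roughly line level
```

So 70 dB corresponds to a voltage ratio of about 3160, which is why microvolt-level microphone signals end up in the volt range quoted above.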
https://en.wikipedia.org/wiki/GPXE
gPXE is an open-source Preboot eXecution Environment (PXE) client firmware implementation and bootloader derived from Etherboot. It can be used to enable computers without built-in PXE support to boot from the network, or to extend an existing client PXE implementation with support for additional protocols. While standard PXE clients use TFTP to transfer data, gPXE client firmware adds the ability to retrieve data through other protocols like HTTP, iSCSI and ATA over Ethernet (AoE), and can work with Wi-Fi rather than requiring a wired connection. gPXE development ceased in summer 2010, and several projects are migrating or considering migrating to iPXE as a result. PXE implementation gPXE can be loaded by a computer in several ways: from media like floppy disk, USB flash drive, or hard disk as a pseudo Linux kernel as an ELF image from an option ROM on a network card or embedded in a system BIOS over a network as a PXE boot image gPXE implements its own PXE stack, using a driver corresponding to the network card, or a UNDI driver if it was loaded by PXE itself. This makes it possible to use a PXE stack even if the network card has no boot ROM, by loading gPXE from a fixed medium. Bootloader Although its basic role was to implement a PXE stack, gPXE can be used as a full-featured network bootloader. It can fetch files from multiple network protocols, such as TFTP, NFS, HTTP or FTP, and can boot PXE, ELF, Linux, FreeBSD, multiboot, EFI, NBI and Windows CE images. In addition, it is scriptable and can load COMBOOT and COM32 SYSLINUX extensions. This makes it possible, for instance, to build a graphical menu for network boot. See also PXE PXELINUX iPXE
https://en.wikipedia.org/wiki/TRPM6
TRPM6 is a transient receptor potential ion channel associated with hypomagnesemia with secondary hypocalcemia. See also TRPM Ruthenium red
https://en.wikipedia.org/wiki/Circumscriptional%20name
In biological classification, circumscriptional names are taxon names that are not regulated by the ICZN and are defined by the particular set of members included. Circumscriptional names are used mainly for taxa above family-group level (e.g. order or class), but can also be used for taxa of any rank, as well as for rank-less taxa. Non-typified names other than those of the genus- or species-group constitute the majority of generally accepted names of taxa higher than superfamily. The ICZN regulates names of taxa up to family group rank (i.e. superfamily). There are no generally accepted rules of naming higher taxa (orders, classes, phyla, etc.). Under the approach of circumscription-based (circumscriptional) nomenclatures, a circumscriptional name is associated with a certain circumscription of a taxon without regard to its rank or position. Some authors advocate introducing a mandatory standardized typified nomenclature of higher taxa. They suggest that all names of higher taxa be derived in the same manner as family-group names, i.e. by modifying names of type genera with endings to reflect the rank. There is no consensus on what such higher rank endings should be. A number of established practices exist as to the use of typified names of higher taxa, depending on animal group. See also Descriptive botanical name, optional forms still used in botany for ranks above family and for a few family names
https://en.wikipedia.org/wiki/Priming%20%28immunology%29
Priming is the first contact that antigen-specific T helper cell precursors have with an antigen. It is essential to the T helper cells' subsequent interaction with B cells to produce antibodies. Priming of antigen-specific naive lymphocytes occurs when antigen is presented to them in immunogenic form (capable of inducing an immune response). Subsequently, the primed cells will differentiate either into effector cells or into memory cells that can mount a stronger and faster response to second and upcoming immune challenges. T and B cell priming occurs in the secondary lymphoid organs (lymph nodes and spleen). Priming of naïve T cells requires dendritic cell antigen presentation. Priming of naive CD8 T cells generates cytotoxic T cells capable of directly killing pathogen-infected cells. CD4 cells develop into a diverse array of effector cell types depending on the nature of the signals they receive during priming. CD4 effector activity can include cytotoxicity, but more frequently it involves the secretion of a set of cytokines that directs the target cell to make a particular response. This activation of naive T cells is controlled by a variety of signals: recognition of antigen in the form of a peptide:MHC complex on the surface of a specialized antigen-presenting cell delivers signal 1; interaction of co-stimulatory molecules on antigen-presenting cells with receptors on T cells delivers signal 2 (one notable example includes a B7 ligand complex on antigen-presenting cells binding to the CD28 receptor on T cells); and cytokines that control differentiation into different types of effector cells deliver signal 3. Cross-priming Cross-priming refers to the stimulation of antigen-specific CD8+ cytotoxic T lymphocytes (CTLs) by a dendritic cell presenting an antigen acquired from the outside of the cell. Cross-priming is also called immunogenic cross-presentation. This mechanism is vital for priming of CTLs against viruses and tumours. Immune priming (invertebrate im
https://en.wikipedia.org/wiki/Octanol-water%20partition%20coefficient
The n-octanol-water partition coefficient, Kow, is a partition coefficient for the two-phase system consisting of n-octanol and water. Kow is also frequently referred to by the symbol P, especially in the English literature. It is also called the n-octanol-water partition ratio. Kow serves as a measure of the relationship between lipophilicity (fat solubility) and hydrophilicity (water solubility) of a substance. The value is greater than one if a substance is more soluble in fat-like solvents such as n-octanol, and less than one if it is more soluble in water. If a substance is present as several chemical species in the octanol-water system due to association or dissociation, each species is assigned its own Kow value. A related value, D, does not distinguish between different species, only indicating the concentration ratio of the substance between the two phases. History In 1899, Charles Ernest Overton and Hans Horst Meyer independently proposed that the tadpole toxicity of non-ionizable organic compounds depends on their ability to partition into lipophilic compartments of cells. They further proposed the use of the partition coefficient in an olive oil/water mixture as an estimate of this lipophilic associated toxicity. Corwin Hansch later proposed the use of n-octanol as an inexpensive synthetic alcohol that could be obtained in a pure form as an alternative to olive oil. Applications Kow values are used, among others, to assess the environmental fate of persistent organic pollutants. Chemicals with high partition coefficients, for example, tend to accumulate in the fatty tissue of organisms (bioaccumulation). Under the Stockholm Convention, chemicals with a log Kow greater than 5 are considered to bioaccumulate. Furthermore, the parameter plays an important role in drug research (Rule of Five) and toxicology. Ernst Overton and Hans Meyer discovered as early as 1900 that the efficacy of an anaesthetic increased with increasing Kow value (the so-called Meyer–Overton correlation).
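As a partition coefficient, Kow is simply the equilibrium concentration ratio of the un-ionized substance between the two phases, usually quoted as its base-10 logarithm; a minimal sketch with illustrative concentrations:

```python
import math

def log_kow(c_octanol, c_water):
    """log10 of the n-octanol/water concentration ratio of a neutral solute
    (same units for both concentrations; values below are illustrative)."""
    return math.log10(c_octanol / c_water)

# a hypothetical substance 1000x more concentrated in the octanol phase
lip = log_kow(1.0, 0.001)   # about 3: lipophilic, prefers octanol
hyd = log_kow(0.001, 1.0)   # about -3: hydrophilic, prefers water
```

On this scale the Stockholm Convention's bioaccumulation threshold of log Kow > 5 corresponds to a 100,000-fold preference for the octanol phase.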
https://en.wikipedia.org/wiki/4x4%20Evo
4x4 Evo (also re-released as 4x4 Evolution) is a video game developed by Terminal Reality for the Windows, Macintosh, Sega Dreamcast, and PlayStation 2 platforms. It is one of the first console games to have cross-platform online play where Dreamcast, Macintosh, and Windows versions of the game appear online at the same time. The game supports user-created maps, which can be downloaded onto a hard drive as well as a Dreamcast VMU. All versions of the game are similar in quality and gameplay, although the online-capable versions feature a mode to customize the player's own truck and use it online. The game is still online-capable on all systems except for PlayStation 2. This was Terminal Reality's only video game to be released for the Dreamcast. Gameplay Gameplay features off-road racing with over 70 licensed trucks. Modes featured in the game were Career Mode, Online Mode, Map editor, and versus mode. The career mode is the most important part of the game, featuring a way to buy better trucks similar to the Gran Turismo series. The Career mode also gives the player six purpose-built race vehicles: Chevrolet TrailBlazer Race SUV 2WD, Dodge Dakota Race Truck 4WD, Ford F-150 Race Truck 2WD, Mitsubishi Pajero Rally 4WD, Nissan Xterra Race SUV 4WD, and the Toyota Tundra Race Truck 2WD. They cost anywhere from $350,000 up to $850,000. These are the fastest vehicles in the game. Recently, KC Vale acquired permission from Terminal Reality, Incorporated to upload the game to his Web server, but the original vehicles have been removed due to an expired license. Multiplayer Although this game was released many years ago, the online community still exists with a fair number of players and some moderators who manage chat rooms. Dedicated servers are long gone, but it is possible to host games over the Internet and join other player-hosted games. The game has been brought back online thanks to the Dreamcast community as one of the more than 20 games so far to be brought back online fo
https://en.wikipedia.org/wiki/Yupana
A yupana (from Quechua: yupay 'count') is a counting board used to perform arithmetic operations, dating back to the time of the Incas. Very little documentation exists concerning its precise physical form or how it was used.

Types
The term yupana refers to two distinct classes of objects:

Table yupana (or archaeological yupana): a system of geometric boxes of different sizes and materials. The first example of this type was found in 1869 in the Ecuadorian province of Azuay and prompted searches for more of these objects. All examples of the archaeological yupana vary greatly from each other. Some archaeological yupanas found in Manchán (an archaeological site in Casma) and Huacones-Vilcahuasi (in Cañete) were embedded into the floor.

Poma de Ayala yupana: a picture on page 360 of El primer nueva corónica y buen gobierno, written by the Amerindian chronicler Felipe Guaman Poma de Ayala, shows a 5×4 chessboard. The chessboard, though resembling a table yupana, differs from that style most notably in that each of its rectangular trays has the same dimensions, whereas table yupanas have polygonal trays of differing sizes.

Although the two are very different from each other, most scholars who have dealt with table yupanas have extended their reasoning and theories to the Poma de Ayala yupana and vice versa, perhaps in an attempt to find a unifying thread or a common method of creation. For example, the Nueva corónica (New Chronicle) discovered in 1916 in the library of Copenhagen contained evidence that a portion of the studies on the Poma de Ayala yupana were based on previous studies and theories regarding table yupanas.

History
Several chroniclers of the Indies briefly described this Incan abacus and its operation.

Felipe Guaman Poma de Ayala
The first was Guaman Poma de Ayala, writing around the year 1615. In addition to providing a brief description, Poma de Ayala drew a picture of the yupana: a board of five rows and four columns.
https://en.wikipedia.org/wiki/Cuzick%E2%80%93Edwards%20test
In statistics, the Cuzick–Edwards test is a significance test whose aim is to detect possible clustering of sub-populations within a clustered or non-uniformly spread overall population. Possible applications of the test include examining the spatial clustering of childhood leukemia and lymphoma within the general population, given that the general population is itself spatially clustered.

The test is based on:
using control locations within the general population as the basis of a second, "control" sub-population in addition to the original "case" sub-population;
using nearest-neighbour analyses to form statistics based on either:
the number of other "cases" among the neighbours of each case;
the number of "cases" which are nearer to each given case than the k-th nearest "control" for that case.

An example application of this test was to the spatial clustering of leukaemias and lymphomas among young people in New Zealand.

See also
Clustering (demographics)
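The first of the two statistics above can be sketched directly: pool cases and controls, find each case's k nearest neighbours, and count how many of those neighbours are also cases. The sketch below is a minimal illustration of that counting scheme, not a full implementation of the test (in particular, it omits the permutation or asymptotic null distribution needed to compute a significance level); the function name and brute-force distance computation are choices made here for clarity.

```python
import numpy as np

def cuzick_edwards_tk(coords, is_case, k=1):
    """T_k statistic sketch: for each case, count how many of its k
    nearest neighbours (cases and controls pooled) are also cases,
    then sum over all cases."""
    coords = np.asarray(coords, dtype=float)
    is_case = np.asarray(is_case, dtype=bool)
    # Brute-force pairwise squared distances (fine for small n).
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)  # a point is not its own neighbour
    t_k = 0
    for i in np.flatnonzero(is_case):
        nearest = np.argsort(d2[i])[:k]     # indices of k nearest neighbours
        t_k += int(is_case[nearest].sum())  # cases among them
    return t_k

# Two cases close together, two controls far away: each case's nearest
# neighbour is the other case, so T_1 = 2, suggesting case clustering.
points = [(0, 0), (0, 1), (10, 0), (10, 1)]
labels = [True, True, False, False]
print(cuzick_edwards_tk(points, labels, k=1))  # 2
```

Under the null hypothesis of no clustering beyond that of the overall population, large values of T_k relative to its permutation distribution (relabelling which points are cases) indicate excess case clustering.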
https://en.wikipedia.org/wiki/Presenilin
Presenilins are a family of related multi-pass transmembrane proteins which constitute the catalytic subunits of the gamma-secretase intramembrane protease protein complex. They were first identified in screens for mutations causing early-onset forms of familial Alzheimer's disease by Peter St George-Hyslop. Vertebrates have two presenilin genes: PSEN1 (located on chromosome 14 in humans), which codes for presenilin 1 (PS-1), and PSEN2 (on chromosome 1 in humans), which codes for presenilin 2 (PS-2). Both genes show conservation between species, with little difference between rat and human presenilins. The nematode worm C. elegans has two genes, sel-12 and hop-1, that resemble the presenilins and appear to be functionally similar. Presenilins undergo cleavage in an alpha-helical region of one of the cytoplasmic loops to produce a large N-terminal and a smaller C-terminal fragment that together form part of the functional protein. Cleavage of presenilin 1 can be prevented by a mutation that causes the loss of exon 9, which results in loss of function. Presenilins play a key role in the modulation of intracellular Ca2+ involved in presynaptic neurotransmitter release and long-term potentiation induction.

Structure
Presenilins are transmembrane proteins with nine alpha helices. Structures of the assembled gamma-secretase complex have been solved by cryo-electron microscopy, demonstrating significant conformational flexibility in the structure of the presenilin subunit of the complex in response to ligand or inhibitor binding. Presenilins undergo autocatalytic proteolytic processing after expression, cleaving a cytoplasmic loop region between the sixth and seventh helices to produce a large N-terminal and a smaller C-terminal fragment. The two fragments remain in contact with each other in the mature protein. The two catalytic aspartate active-site residues required for aspartyl protease activity are located in the sixth and seventh helices.
https://en.wikipedia.org/wiki/Access%20network%20discovery%20and%20selection%20function
Access network discovery and selection function (ANDSF) is an entity within an evolved packet core (EPC) of the system architecture evolution (SAE) for 3GPP-compliant mobile networks. The purpose of the ANDSF is to assist user equipment (UE) in discovering non-3GPP access networks – such as Wi-Fi or WiMAX – that can be used for data communications in addition to 3GPP access networks (such as HSPA or LTE), and to provide the UE with rules policing the connection to these networks.

Information provided
An ANDSF can provide the following information to a UE, based on operator configuration:

Inter-system mobility policy (ISMP) – network selection rules for a UE with no more than one active access network connection (e.g., either LTE or Wi-Fi)
Inter-system routing policy (ISRP) – network selection rules for a UE with potentially more than one active access network connection (e.g., both LTE and Wi-Fi). Such a UE may employ IP flow mobility (IFOM), multiple-access PDN connectivity (MAPCON) or non-seamless Wi-Fi offload according to operator policy and user preferences.
Discovery information – a list of networks that may be available in the vicinity of the UE, together with information assisting the UE to expedite the connection to these networks

The ANDSF communicates with the UE over the S14 reference point, which is essentially a synchronization of an OMA-DM management object (MO) specific to ANDSF.

History
The term ANDSF was first conceived by the 3GPP in Release 8 as part of the effort to standardise the behavior of the ever-growing abundance of 3GPP-compliant UEs (e.g. smartphones) that can also access non-3GPP data networks.

Known implementations
Smart Access Manager (SAM) by InterDigital is an intelligent network connectivity and data traffic management application for Android and iOS devices, aiming to provide seamless connection and authentication for end users while creating new revenue-generating opportunities for network operators. It is compliant with 3GPP ANDSF.
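The ISMP case above (a UE with at most one active connection) amounts to evaluating prioritized, location-scoped rules and picking the most-preferred available access technology. The sketch below models that selection logic only; the class and field names are illustrative stand-ins, not the actual OMA-DM management-object schema defined for ANDSF, and real validity conditions cover more than location (e.g. time of day).

```python
from dataclasses import dataclass, field

@dataclass
class IsmpRule:
    """Simplified stand-in for one ANDSF inter-system mobility policy
    rule: when the validity condition holds, prefer the listed access
    technologies in order. Lower priority value = evaluated first."""
    priority: int
    location: str                                   # validity area for the rule
    preferred: list = field(default_factory=list)   # e.g. ["WLAN", "LTE"]

def select_access(rules, ue_location, available):
    """ISMP-style selection for a single-connection UE: apply the
    highest-priority rule valid at the UE's location and return its
    most-preferred technology that is actually available."""
    valid = sorted((r for r in rules if r.location == ue_location),
                   key=lambda r: r.priority)
    for rule in valid:
        for tech in rule.preferred:
            if tech in available:
                return tech
    return None  # no applicable rule; UE falls back to default behaviour

# Hypothetical operator policy: offload to WLAN where possible in TA-1,
# stay on LTE in TA-2.
rules = [
    IsmpRule(priority=1, location="TA-1", preferred=["WLAN", "LTE"]),
    IsmpRule(priority=2, location="TA-2", preferred=["LTE"]),
]
print(select_access(rules, "TA-1", {"WLAN", "LTE"}))  # WLAN
print(select_access(rules, "TA-1", {"LTE"}))          # LTE
```

In the real protocol these rules are carried in the ANDSF MO synchronized over S14, and the UE re-evaluates them as its location and the set of detected networks change.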