https://en.wikipedia.org/wiki/Vital%20rates
Vital rates refer to how fast vital statistics change in a population (usually measured per 1000 individuals). There are two categories of vital rates: crude rates and refined rates. Crude rates measure vital statistics in a general population (overall change in births and deaths per 1000). Refined rates measure the change in vital statistics in a specific demographic (such as age, sex, or race). Marriage rates Since 1972, the national marriage rate in the US has fallen by almost 50%, to six people per 1000. According to the Iran Index and the National Organization for Civil Registration of Iran, the Iranian divorce rate is at its highest level since 1979, and divorce quotas were introduced to curb the trend.
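As a worked illustration of the crude/refined distinction above, here is a minimal Python sketch; all population figures are invented for the example:

```python
# Crude vs. refined rates: both are simple per-1000 ratios.
population = 250_000                 # hypothetical total population
births = 3_100                       # hypothetical births in one year

crude_birth_rate = births / population * 1000        # whole population
print(round(crude_birth_rate, 2))                    # 12.4 per 1000

# Refined rate: restrict both counts to a specific demographic,
# e.g. women aged 15-44 (numbers are illustrative only).
women_15_44 = 52_000
births_to_women_15_44 = 3_020
refined_rate = births_to_women_15_44 / women_15_44 * 1000
print(round(refined_rate, 2))                        # 58.08 per 1000
```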
https://en.wikipedia.org/wiki/Flash%20pasteurization
Flash pasteurization, also called "high-temperature short-time" (HTST) processing, is a method of heat pasteurization of perishable beverages like fruit and vegetable juices, beer, wine, and some dairy products such as milk. Compared with other pasteurization processes, it maintains color and flavor better, but some cheeses were found to have varying responses to the process. Flash pasteurization is performed to kill spoilage microorganisms prior to filling containers, in order to make the products safer and to extend their shelf life compared to the unpasteurised foodstuff. For example, one manufacturer of flash pasteurizing machinery gives shelf life as "in excess of 12 months". It must be used in conjunction with sterile fill technology (similar to aseptic processing) to prevent post-pasteurization contamination. The liquid moves in a controlled, continuous flow while subjected to temperatures of 71.5 °C (160 °F) to 74 °C (165 °F), for about 15 to 30 seconds, followed by rapid cooling to between 4 °C (39.2 °F) and 5.5 °C (42 °F). The standard US protocol for flash pasteurization of milk, 71.7 °C (161 °F) for 15 seconds in order to kill Coxiella burnetii (the most heat-resistant pathogen found in raw milk), was introduced in 1933, and results in 5-log reduction (99.999%) or greater reduction in harmful bacteria. An early adopter of pasteurization was Tropicana Products, which has used the method since the 1950s. The juice company Odwalla switched from non-pasteurized to flash-pasteurized juices in 1996 after tainted apple juice containing E. coli O157:H7 sickened many children and killed one.
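The log-reduction figure quoted above converts directly into surviving counts; a short sketch of the arithmetic, with a hypothetical starting concentration:

```python
# An n-log reduction means the surviving population is 10**-n of the original.
initial_cfu = 1_000_000          # hypothetical colony-forming units per mL
log_reduction = 5                # the 5-log figure from the HTST protocol

surviving = initial_cfu * 10 ** (-log_reduction)
killed_fraction = 1 - 10 ** (-log_reduction)
print(surviving)                 # 10.0 CFU/mL remain
print(f"{killed_fraction:.3%}")  # 99.999% killed
```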
https://en.wikipedia.org/wiki/F-algebra
In mathematics, specifically in category theory, F-algebras generalize the notion of algebraic structure. Rewriting the algebraic laws in terms of morphisms eliminates all references to quantified elements from the axioms, and these algebraic laws may then be glued together in terms of a single functor F, the signature. F-algebras can also be used to represent data structures used in programming, such as lists and trees. The main related concepts are initial F-algebras, which may serve to encapsulate the induction principle, and the dual construction, F-coalgebras. Definition If $C$ is a category, and $F : C \to C$ is an endofunctor of $C$, then an $F$-algebra is a tuple $(A, \alpha)$, where $A$ is an object of $C$ and $\alpha$ is a $C$-morphism $F(A) \to A$. The object $A$ is called the carrier of the algebra. When it is permissible from context, algebras are often referred to by their carrier only instead of the tuple. A homomorphism from an $F$-algebra $(A, \alpha)$ to an $F$-algebra $(B, \beta)$ is a $C$-morphism $f : A \to B$ such that $f \circ \alpha = \beta \circ F(f)$, according to the following commutative diagram: Equipped with these morphisms, $F$-algebras constitute a category. The dual construction is that of $F$-coalgebras, which are objects $A$ together with a morphism $\alpha : A \to F(A)$. Examples Groups Classically, a group is a set $G$ with a group law $m : G \times G \to G$, with $m(x, y) = x \cdot y$, satisfying three axioms: the existence of an identity element, the existence of an inverse for each element of the group, and associativity. To put this in a categorical framework, first define the identity and inverse as functions (morphisms of the set $G$) by $e : 1 \to G$ (picking out the identity element) and $i : G \to G$ with $i(x) = x^{-1}$. Here $1$ denotes the set with one element $\{*\}$, which allows one to identify elements $x \in G$ with morphisms $1 \to G$. It is then possible to write the axioms of a group in terms of functions (note how the existential quantifier is absent): $m \circ (e \times \mathrm{id}) = \pi_2$, $m \circ (\mathrm{id}, i) = e \circ {!}$, $m \circ (m \times \mathrm{id}) = m \circ (\mathrm{id} \times m)$. Then this can be expressed with commutative diagrams: Now use the coproduct (the disjoint union of sets) to glue the three morphisms into one: $\alpha = e + i + m : 1 + G + G \times G \to G$. Thus a group is an $F$-algebra where $F$ is the functor $F(X) = 1 + X + X \times X$. However the reverse is not necessarily true. Some $F$-algeb
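As a concrete programming-flavoured illustration (my own sketch, not taken from the text): for the functor $F(X) = 1 + E \times X$, whose initial algebra is the type of lists over $E$, an $F$-algebra on a carrier $A$ is just a pair of handlers, and the unique homomorphism out of the initial algebra is the familiar fold:

```python
# F(X) = 1 + E*X (lists). An F-algebra on carrier A interprets both summands:
# the "1" case (empty list) and the "E*X" case (head plus interpreted tail).
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

E = TypeVar("E")
A = TypeVar("A")

@dataclass
class Algebra(Generic[E, A]):
    nil: A                                 # interprets the 1 summand
    cons: Callable[[E, A], A]              # interprets the E*X summand

def cata(alg: Algebra, xs: list):
    """Fold = the unique homomorphism from the initial algebra (lists)."""
    acc = alg.nil
    for x in reversed(xs):                 # peel constructors from the inside out
        acc = alg.cons(x, acc)
    return acc

total = Algebra(nil=0, cons=lambda e, a: e + a)     # sum as one F-algebra
length = Algebra(nil=0, cons=lambda _e, a: 1 + a)   # length as another

print(cata(total, [1, 2, 3, 4]))   # 10
print(cata(length, [1, 2, 3, 4]))  # 4
```

Different algebras over the same functor give different folds, while the data type itself is fixed by the functor; this is the sense in which F-algebras separate signature from interpretation.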
https://en.wikipedia.org/wiki/Quamrul%20Hassan
Quamrul Hassan (1921–1988) was a Bengali artist. Hassan is referred to in Bangladesh as Potua, a word usually associated with folk artists, because of his down-to-earth yet very modern style: he blended Cubism with the folk idiom in his artworks. In addition to his artistic legacy, two of Hassan's works have come to be part of Bangladesh's political history. The first of these is a monstrous rendition of Yahya Khan, the Pakistani president who ordered genocide in Bangladesh. The second, made just before his death, mocked the then dictator of Bangladesh, Hossain Mohammad Ershad. This sketch was titled Desh aaj bisshobeheyar khoppre (Our land is now in the hands of the champion of shamelessness). Early life Hassan was born in Kolkata on 2 December 1921. His father, Muhammad Hashim, was superintendent of a local graveyard. He belonged to a conservative family, and his father always opposed his involvement in painting. But Quamrul's determination and love for painting made his father enroll him at the Government Institute of Arts (now Government College of Art & Craft) in 1938, on the condition that Quamrul pay his own tuition fees. After his enrollment, Quamrul kept himself busy not only with art but also with other activities such as sports and, from 1939, the Bratachari movement; he also joined the ARP during the Second World War. He developed connections with the Forward Bloc, Gononatya Andolon (People's Theatre) and even with several leaders of the Communist Party, got involved in teaching children and teenagers, and contributed illustrations to publications. As a result of his involvement in social and cultural activities, he finished his six-year course at the Art School in nine years, graduating in 1947. He secured first position in the B Group of the Inter College Bodybuilding competition in 1945. He had become a 'Nayak' of the Bratachari movement. After the partition of India, Quamrul, who was
https://en.wikipedia.org/wiki/Password-based%20cryptography
Password-based cryptography generally refers to two distinct classes of methods: Single-party methods Multi-party methods Single-party methods Some systems attempt to derive a cryptographic key directly from a password. However, such practice is generally ill-advised when there is a threat of brute-force attack. Techniques to mitigate such attacks include passphrases and iterated (deliberately slow) password-based key derivation functions such as PBKDF2 (RFC 2898). Multi-party methods Password-authenticated key agreement systems allow two or more parties that agree on a password (or password-related data) to derive shared keys without exposing the password or keys to network attack. Earlier generations of challenge–response authentication systems have also been used with passwords, but these have generally been subject to eavesdropping and/or brute-force attacks on the password. See also Password Passphrase Password-authenticated key agreement Cryptography
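Python's standard library exposes one such deliberately slow function; a minimal sketch of deriving a key with PBKDF2 (the password and work factor are illustrative, not recommendations):

```python
# PBKDF2 (RFC 2898) via the standard library: an iterated, salted key
# derivation function that deliberately slows brute-force guessing.
import hashlib
import os

password = b"correct horse battery staple"   # example passphrase
salt = os.urandom(16)                        # random per-password salt
iterations = 600_000                         # illustrative work factor

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())                             # 32-byte derived key
```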
https://en.wikipedia.org/wiki/Asset%20allocation
Asset allocation is the implementation of an investment strategy that attempts to balance risk versus reward by adjusting the percentage of each asset in an investment portfolio according to the investor's risk tolerance, goals and investment time frame. The focus is on the characteristics of the overall portfolio. Such a strategy contrasts with an approach that focuses on individual assets. Description Many financial experts argue that asset allocation is an important factor in determining returns for an investment portfolio. Asset allocation is based on the principle that different assets perform differently in different market and economic conditions. A fundamental justification for asset allocation is the notion that different asset classes offer returns that are not perfectly correlated, hence diversification reduces the overall risk in terms of the variability of returns for a given level of expected return. Asset diversification has been described as "the only free lunch you will find in the investment game". Academic research has painstakingly explained the importance and benefits of asset allocation and the problems of active management (see academic studies section below). Although the risk is reduced as long as correlations are not perfect, it is typically forecast (wholly or in part) based on statistical relationships (like correlation and variance) that existed over some past period. Expectations for return are often derived in the same way. Studies of these forecasting methods constitute an important direction of academic research. When such backward-looking approaches are used to forecast future returns or risks using the traditional mean-variance optimization approach to the asset allocation of modern portfolio theory (MPT), the strategy is, in fact, predicting future risks and returns based on history. As there is no guarantee that past relationships will continue in the future, this is one of the "weak links" in traditional asset allocation s
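The diversification argument can be made concrete with the standard two-asset variance formula; a small sketch in which the weights, volatilities and correlation are invented for illustration:

```python
# Two-asset portfolio risk:
#   variance = (w1*s1)**2 + (w2*s2)**2 + 2*w1*w2*rho*s1*s2
# With correlation rho < 1, portfolio volatility falls below the
# weighted average of the individual volatilities.
import math

w1, w2 = 0.6, 0.4          # allocation weights
s1, s2 = 0.20, 0.10        # assumed annual volatilities
rho = 0.2                  # assumed (imperfect) correlation

variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
print(f"portfolio vol: {math.sqrt(variance):.2%}")   # ~13.39%
print(f"weighted avg:  {w1 * s1 + w2 * s2:.2%}")     # 16.00%
```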
https://en.wikipedia.org/wiki/Homochirality
Homochirality is a uniformity of chirality, or handedness. Objects are chiral when they cannot be superposed on their mirror images. For example, the left and right hands of a human are approximately mirror images of each other but are not their own mirror images, so they are chiral. In biology, 19 of the 20 natural amino acids are homochiral, being L-chiral (left-handed), while sugars are D-chiral (right-handed). Homochirality can also refer to enantiopure substances in which all the constituents are the same enantiomer (a right-handed or left-handed version of an atom or molecule), but some sources discourage this use of the term. It is unclear whether homochirality has a purpose; however, it appears to be a form of information storage. One suggestion is that it reduces entropy barriers in the formation of large organized molecules. It has been experimentally verified that amino acids form large aggregates in larger abundance from enantiopure samples of the amino acid than from racemic (enantiomerically mixed) ones. It is not clear whether homochirality emerged before or after life, and many mechanisms for its origin have been proposed. Some of these models propose three distinct steps: mirror-symmetry breaking creates a minute enantiomeric imbalance, chiral amplification builds on this imbalance, and chiral transmission is the transfer of chirality from one set of molecules to another. In biology Amino acids are the building blocks of peptides and enzymes, while sugar-phosphate chains are the backbone of RNA and DNA. In biological organisms, amino acids appear almost exclusively in the left-handed form (L-amino acids) and sugars in the right-handed form (D-sugars). Since the enzymes catalyze reactions, they enforce homochirality on a great variety of other chemicals, including hormones, toxins, fragrances and food flavors. Glycine is achiral, and other non-proteinogenic amino acids are either achiral (such as dimethylglycine) or of the D enantiom
https://en.wikipedia.org/wiki/Charge%20pump
A charge pump is a kind of DC-to-DC converter that uses capacitors for energetic charge storage to raise or lower voltage. Charge-pump circuits are capable of high efficiencies, sometimes as high as 90–95%, while being electrically simple circuits. Description Charge pumps use some form of switching device to control the connection of a supply voltage across a load through a capacitor. In a two-stage cycle, in the first stage a capacitor is connected across the supply, charging it to that same voltage. In the second stage the circuit is reconfigured so that the capacitor is in series with the supply and the load. This doubles the voltage across the load: the sum of the original supply and the capacitor voltages. The pulsing nature of the higher-voltage switched output is often smoothed by the use of an output capacitor. An external or secondary circuit drives the switching, typically at tens of kilohertz up to several megahertz. The high frequency minimizes the amount of capacitance required, as less charge needs to be stored and dumped in a shorter cycle. Charge pumps can double voltages, triple voltages, halve voltages, invert voltages, fractionally multiply or scale voltages (such as ×3/2, ×4/3, ×2/3, etc.) and generate arbitrary voltages by quickly alternating between modes, depending on the controller and circuit topology. They are commonly used in low-power electronics (such as mobile phones) to raise and lower voltages for different parts of the circuitry, minimizing power consumption by controlling supply voltages carefully. Terminology for PLL The term charge pump is also commonly used in phase-locked loop (PLL) circuits, even though there is no pumping action involved, unlike in the circuit discussed above. A PLL charge pump is merely a bipolar switched current source. This means that it can output positive and negative current pulses into the loop filter of the PLL. It cannot produce higher or lower voltages than its power and ground supply levels. Applica
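The two-stage doubling cycle described above can be simulated with an idealized charge-sharing model (a sketch with no load and invented component values):

```python
# Idealized two-phase charge-pump voltage doubler: a flying capacitor
# charges to VIN, then is stacked on top of the supply so that charge
# flows into the output capacitor; with no load, v_out -> 2 * VIN.
VIN = 5.0
c_fly, c_out = 1e-6, 10e-6       # farads, illustrative values
v_out = 0.0

for _ in range(200):             # each iteration is one switching cycle
    v_fly = VIN                  # phase 1: fly cap charged across supply
    top = VIN + v_fly            # phase 2: fly cap in series with supply
    # ideal charge sharing between the stacked source and the output cap
    q_total = c_fly * top + c_out * v_out
    v_out = q_total / (c_fly + c_out)

print(round(v_out, 3))           # approaches 10.0 = 2 * VIN
```

Each cycle moves a fixed packet of charge, which is why a higher switching frequency lets the same output be held up with smaller capacitors, as noted above.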
https://en.wikipedia.org/wiki/British%20Plant%20Communities
British Plant Communities is a five-volume work, edited by John S. Rodwell and published by Cambridge University Press, which describes the plant communities that comprise the British National Vegetation Classification. Its coverage includes all native vegetation communities and some artificial ones of Great Britain, excluding Northern Ireland. The series is a major contribution to plant conservation in Great Britain and, as such, covers material appropriate for professionals and amateurs interested in the conservation of native plant communities. Each book begins with an introduction to the techniques used to survey the particular vegetation within its scope, discussing sampling, the type of data collected, and how the data are organized and analysed. Each community is discussed with an overall emphasis on the ecology of the community, so that users can consider the relationships of various plant communities to each other as a function of climatic or soil conditions, for example. The five volumes are: British Plant Communities Volume 1 – Woodlands and Scrub This volume was first published in 1991 in hardback and in 1998 in paperback British Plant Communities Volume 2 – Mires and Heaths This volume was first published in 1991 in hardback and in 1998 in paperback British Plant Communities Volume 3 – Grasslands and Montane Communities This volume was first published in 1992 in hardback and in 1998 in paperback British Plant Communities Volume 4 – Aquatic Communities, Swamps and Tall-herb Fens This volume was first published in 1995 in hardback and in 1998 in paperback British Plant Communities Volume 5 – Maritime Communities and Vegetation of Open Habitats This volume was first published in 2000 in both hardback and paperback Errors The following is a list of errors found in the published books: In Volume 1, on pages 38–39, the branches leading from couplets 22 and 23 should read W12, not W14 In volume 3, on p
https://en.wikipedia.org/wiki/Lemniscus%20%28anatomy%29
A lemniscus (Greek for ribbon or band) is a bundle of secondary sensory fibers in the brainstem. The medial lemniscus and lateral lemniscus terminate in specific relay nuclei of the diencephalon. The trigeminal lemniscus is sometimes considered as the cephalic part of the medial lemniscus. The spinal lemniscus constitutes the spinothalamic tract.
https://en.wikipedia.org/wiki/Megatherium%20Club
The Megatherium Club, founded by William Stimpson, was a group of Washington, D.C.-based scientists who were attracted to that city by the Smithsonian Institution's rapidly growing collection; it was active from 1857 to 1866. Many of the members had no formal education, but came by their expertise through extensive direct observation. They spent their weekdays in the rigorous and exacting work of describing and classifying species. But their nights were spent in revelry. They particularly enjoyed partaking in ale, oysters, eggnog, and whatever other fineries their meager budgets could afford. On Sundays, however, they recuperated from the week's stresses and excesses with long nature hikes. The club was named for Megatherium, an extinct genus of giant ground sloth. The leading spirit of the club was marine biologist William Stimpson, who hosted its earliest meetings in his home. Members dubbed the place "The Stimpsonian." By 1863, though, Stimpson and others had taken up residence in the castle of the actual Smithsonian. Club members were encouraged by Spencer Fullerton Baird, the institution's assistant secretary, and they attracted a variety of learned speakers to their meetings, including Louis Agassiz, John Torrey, and John Cassin. But they were eventually thrown out of their castle suites by the institution's secretary, Joseph Henry, who disapproved of the way members held sack races in the Great Hall and periodically serenaded his daughters. Membership was transitory as individuals undertook independent studies abroad, sometimes for years at a time. Formal meetings ceased about 1866, when Stimpson moved to Chicago to oversee that city's Academy of Sciences. Several other "Megatherium Clubs" have existed: one formed of overseas Smithsonian researchers, and another, fictional one supposedly located in London, United Kingdom. Members Henry Bryant James E. Cooper, owner and manager of the Adam Forepaugh Circus, donated the elephant named Dunk to the
https://en.wikipedia.org/wiki/Cross-presentation
Cross-presentation is the ability of certain professional antigen-presenting cells (mostly dendritic cells) to take up, process and present extracellular antigens with MHC class I molecules to CD8 T cells (cytotoxic T cells). Cross-priming, the result of this process, describes the stimulation of naive cytotoxic CD8+ T cells into activated cytotoxic CD8+ T cells. This process is necessary for immunity against most tumors and against viruses that infect dendritic cells and sabotage their presentation of virus antigens. Cross-presentation is also required for the induction of cytotoxic immunity by vaccination with protein antigens, for example, tumour vaccination. Cross-presentation is of particular importance because it permits the presentation of exogenous antigens, which are normally presented by MHC II on the surface of dendritic cells, to also be presented through the MHC I pathway. The MHC I pathway is normally used to present endogenous antigens from pathogens that have infected a particular cell. However, cross-presenting cells are able to utilize the MHC I pathway while remaining uninfected themselves, still triggering an adaptive immune response of activated cytotoxic CD8+ T cells against infected peripheral tissue cells. History The first evidence of cross-presentation was reported in 1976 by Michael J. Bevan after injection of grafted cells carrying foreign minor histocompatibility antigens. This resulted in a CD8+ T cell response, induced by antigen-presenting cells of the recipient, against the foreign antigens. Because of this, Bevan inferred that these antigen-presenting cells must have engulfed and cross-presented the grafted cells to host cytotoxic CD8+ cells, thus triggering an adaptive immune response against the grafted tissue. This observation was termed "cross-priming". Later, there was much controversy about cross-presentation, which now is believed to have been due to particularities and limitations of some experimental systems used. Cross
https://en.wikipedia.org/wiki/Evert%20Willem%20Beth
Evert Willem Beth (7 July 1908 – 12 April 1964) was a Dutch philosopher and logician, whose work principally concerned the foundations of mathematics. He was a member of the Significs Group. Biography Beth was born in Almelo, a small town in the eastern Netherlands. His father had studied mathematics and physics at the University of Amsterdam, where he had been awarded a PhD. Evert Beth studied the same subjects at Utrecht University, but then also studied philosophy and psychology. His 1935 PhD was in philosophy. In 1946, he became professor of logic and the foundations of mathematics in Amsterdam. Apart from two brief interruptions – a stint in 1951 as a research assistant to Alfred Tarski, and in 1957 as a visiting professor at Johns Hopkins University – he held the post in Amsterdam continuously until his death in 1964. His was the first academic post in his country in logic and the foundations of mathematics, and during this time he contributed actively to international cooperation in establishing logic as an academic discipline. In 1953 he became a member of the Royal Netherlands Academy of Arts and Sciences. He died in Amsterdam. Contributions to logic Beth definability theorem The Beth definability theorem states that for first-order logic a property (or function or constant) is implicitly definable if and only if it is explicitly definable. Further explanation is provided under Beth definability. Semantic tableaux Beth's most famous contribution to formal logic is semantic tableaux, which are decision procedures for propositional logic and first-order logic. It is a semantic method—like Wittgenstein's truth tables or J. Alan Robinson's resolution—as opposed to the proof of theorems in a formal system, such as the axiomatic systems employed by Frege, Russell and Whitehead, and Hilbert, or even Gentzen's natural deduction. Semantic tableaux are an effective decision procedure for propositional logic, whereas they are only semi-effective for first
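As a toy illustration of the tableau idea, here is a minimal propositional-logic sketch written by analogy with the method (not taken from Beth's own formulation): a formula is satisfiable exactly when some fully expanded branch contains no complementary pair of literals.

```python
# Minimal propositional semantic-tableau satisfiability check.
# Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g).

def satisfiable(branch):
    # Expand the first non-literal formula found on the branch.
    for i, f in enumerate(branch):
        op, rest = f[0], branch[:i] + branch[i + 1:]
        if op == "and":                      # alpha rule: extend the branch
            return satisfiable(rest + [f[1], f[2]])
        if op == "or":                       # beta rule: split the branch
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if op == "not":
            g = f[1]
            if g[0] == "not":                # double negation
                return satisfiable(rest + [g[1]])
            if g[0] == "and":                # De Morgan: push negation inward
                return satisfiable(rest + [("or", ("not", g[1]), ("not", g[2]))])
            if g[0] == "or":
                return satisfiable(rest + [("and", ("not", g[1]), ("not", g[2]))])
    # Only literals remain: the branch is open iff no p and not-p both occur.
    pos = {f[1] for f in branch if f[0] == "var"}
    neg = {f[1][1] for f in branch if f[0] == "not"}
    return not (pos & neg)

# (p or q) and not p  -- satisfiable (take q true)
f = ("and", ("or", ("var", "p"), ("var", "q")), ("not", ("var", "p")))
print(satisfiable([f]))  # True
```

Validity of a formula can then be tested by checking that its negation is unsatisfiable, which is how tableaux are used as a refutation procedure.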
https://en.wikipedia.org/wiki/Ludwig%27s%20angina
Ludwig's angina (Latin: Angina ludovici) is a type of severe cellulitis involving the floor of the mouth and is often caused by bacterial sources. Early in the infection, the floor of the mouth rises due to swelling, leading to difficulty swallowing saliva. As a result, patients may present with drooling and difficulty speaking. As the condition worsens, the airway may be compromised and hardening of the spaces on both sides of the tongue may develop. Overall, this condition has a rapid onset over a few hours. The majority of cases follow a dental infection. Other causes include a parapharyngeal abscess, mandibular fracture, cut or piercing inside the mouth, or submandibular salivary stones. The infection spreads through the connective tissue of the floor of the mouth and is normally caused by infectious and invasive organisms such as Streptococcus, Staphylococcus, and Bacteroides. Prevention is by appropriate dental care, including management of dental infections. Initial treatment is generally with broad-spectrum antibiotics and corticosteroids. In more advanced cases, endotracheal intubation or tracheostomy may be required. With the advent of antibiotics in the 1940s, improved oral and dental hygiene, and more aggressive surgical approaches for treatment, the risk of death due to Ludwig's angina has significantly reduced. It is named after a German physician, Wilhelm Frederick von Ludwig, who first described this condition in 1836. Signs and symptoms Ludwig's angina is a form of severe, widespread cellulitis of the floor of the mouth, usually with bilateral involvement. Infection is usually primarily within the submandibular space, and the sublingual and submental spaces can also be involved. It presents with an acute onset and spreads very rapidly, so early diagnosis and immediate treatment planning are vital and lifesaving. The external signs may include bilateral lower facial swelling around the jaw and upper neck. Signs inside the mouth may include elev
https://en.wikipedia.org/wiki/Interspersed%20repeat
Interspersed repetitive DNA is found in all eukaryotic genomes. Interspersed repeats differ from tandem repeat DNA in that, rather than the repeat sequences coming right after one another, they are dispersed throughout the genome and nonadjacent. The sequence that repeats can vary depending on the type of organism, among many other factors. Certain classes of interspersed repeat sequences propagate themselves by RNA-mediated transposition; they have been called retrotransposons, and they constitute 25–40% of most mammalian genomes. Some types of interspersed repetitive DNA elements allow new genes to evolve by uncoupling similar DNA sequences from gene conversion during meiosis. Intrachromosomal and interchromosomal gene conversion Gene conversion acts on DNA sequence homology as its substrate. There is no requirement that the sequence homologies lie at the allelic positions on their respective chromosomes, or even that the homologies lie on different chromosomes. Gene conversion events can occur between different members of a gene family situated on the same chromosome. When this happens, it is called intrachromosomal gene conversion, as distinguished from interchromosomal gene conversion. The effect of homogenizing DNA sequences is the same. Role of interspersed repetitive DNA Repetitive sequences play the role of uncoupling the gene conversion network, thereby allowing new genes to evolve. The shorter Alu or SINE repetitive DNA are specialized for uncoupling intrachromosomal gene conversion, while the longer LINE repetitive DNA are specialized for uncoupling interchromosomal gene conversion. In both cases, the interspersed repeats block gene conversion by inserting regions of non-homology within otherwise similar DNA sequences. The homogenizing forces linking DNA sequences are thereby broken, and the DNA sequences are free to evolve independently. This leads to the creation of new genes and new species during evolution. By breaking the links that would otherwise overwrite novel DNA s
https://en.wikipedia.org/wiki/Robert%20Ingpen
Robert Roger Ingpen AM, FRSA (born 13 October 1936) is an Australian graphic designer, illustrator, and writer. For his "lasting contribution" as a children's illustrator he received the biennial, international Hans Christian Andersen Medal in 1986. Early life Ingpen was born in Geelong, Victoria, and attended Geelong College to 1957. He graduated with a Diploma of Graphic Art from RMIT in 1958, where he studied with Harold Freedman. Career In 1958, Ingpen was appointed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) as an artist to interpret and communicate the results of scientific research. He left the CSIRO in 1968 to work full-time as a freelance designer, illustrator and author. He was also a member of a United Nations team in Mexico and Peru until 1975, where he designed pamphlets on fisheries and was involved in "a number of Australian conservation and environmental projects". Ingpen's interest in conservation issues continued, and he was one of the founding members of the Australian Conservation Foundation. Work Ingpen has written or illustrated more than 100 published books. These include children's picture books and fictional stories for all ages. His nonfiction books mostly relate to history, conservation, environment and health issues. His most frequent collaborator has been the author and editor Michael Page. Ingpen has designed many postage stamps for Australia, as well as the flag and coat of arms for the Northern Territory. Ingpen has created a number of public murals in Geelong, Melbourne, Canberra and the Gold Coast in Queensland. He also has designed bronze statues, which include the Poppykettle Fountain in the Geelong Steam Packet Gardens (currently dry due to drought restrictions) and the bronze doors of the Melbourne Cricket Club. His most recent work is the design and working drawings for a tapestry, woven by the Victorian Tapestry Workshop, to celebra
https://en.wikipedia.org/wiki/Ponderomotive%20force
In physics, a ponderomotive force is a nonlinear force that a charged particle experiences in an inhomogeneous oscillating electromagnetic field. It causes the particle to move towards the area of the weaker field strength, rather than oscillating around an initial point as happens in a homogeneous field. This occurs because the particle sees a greater magnitude of force during the half of the oscillation period while it is in the area with the stronger field. The net force during its period in the weaker area in the second half of the oscillation does not offset the net force of the first half, and so over a complete cycle this makes the particle move towards the area of lesser force. The ponderomotive force Fp is expressed by $$\mathbf{F}_{\text{p}} = -\frac{e^2}{4 m \omega^2} \nabla \left(E^2\right),$$ which has units of newtons (in SI units) and where e is the electrical charge of the particle, m is its mass, ω is the angular frequency of oscillation of the field, and E is the amplitude of the electric field. At low enough amplitudes the magnetic field exerts very little force. This equation means that a charged particle in an inhomogeneous oscillating field not only oscillates at the frequency ω of the field, but is also accelerated by Fp toward the weak field direction. This is a rare case in which the direction of the force does not depend on whether the particle is positively or negatively charged. Etymology The term ponderomotive comes from the Latin ponder- (meaning weight) and the English motive (having to do with motion). Derivation The derivation of the ponderomotive force expression proceeds as follows. Consider a particle under the action of a non-uniform electric field oscillating at frequency $\omega$ in the x-direction. The equation of motion is given by $$\ddot{x} = g(x) \cos(\omega t),$$ neglecting the effect of the associated oscillating magnetic field. If the length scale of variation of $g(x)$ is large enough, then the particle trajectory can be divided into a slow time motion and a fast time motion: $x = x_0 + x_1$, where $x_0$ is the slow drift motion and $x_1$ represents fast osci
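The force expression above is easy to evaluate numerically; a small sketch for an electron in an assumed Gaussian field envelope (all values illustrative):

```python
# Numerical sketch of F_p = -e^2/(4 m w^2) * d(E^2)/dx for an electron in a
# one-dimensional field envelope; the envelope and numbers are assumptions.
import math

e = 1.602e-19              # electron charge (C)
m = 9.109e-31              # electron mass (kg)
w = 2 * math.pi * 1e14     # angular frequency of the field (rad/s), assumed

def E_amplitude(x):        # assumed Gaussian field envelope, V/m
    return 1e8 * math.exp(-x**2 / (1e-6)**2)

def ponderomotive_force(x, h=1e-9):
    # central finite difference of E^2 approximates the gradient
    dE2dx = (E_amplitude(x + h)**2 - E_amplitude(x - h)**2) / (2 * h)
    return -e**2 / (4 * m * w**2) * dE2dx

print(ponderomotive_force(0.5e-6))  # positive: pushes toward weaker field
```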
https://en.wikipedia.org/wiki/Prodikeys
Prodikeys is a music and computer keyboard combination created by the Singaporean audio company Creative Technology. So far there have been three different versions of Prodikeys: Creative Prodikeys, Creative Prodikeys DM and Creative Prodikeys PC-MIDI. It has 37 mini-sized music keys under a detachable palm cover and comes with Prodikeys software. The MIDI keyboard can also be used as a MIDI controller for third-party MIDI software. It is compatible with Windows XP, 2000, and Linux, but is incompatible with Windows Vista, 7, 8, and Mac OS X. Included Software: EasyNotes Lets users learn to play any song melody on their own, from the included song library or from MIDI files of favourite pop tunes downloaded from the Internet. EasyNotes supports the SEQ and MIDI music formats. FunMix Lets users create and record their own music with pre-arranged mixes and easily personalize ring tones or video soundtracks. HotKeys Manager Lets users customize the keyboard's hotkey functions for easy access to the software suite. Mini Keyboard Lets users explore more than a hundred different instrument sounds, including piano, flute, guitar and drums. Prodikeys Launcher Used to launch the software and the Product Tutorial for an interactive demo. Prodikeys DM The Prodikeys DM does not use USB, but rather has one single Mini-DIN connector for the PS/2 port and is therefore detected as a regular typing keyboard. The included Windows software communicates with the keyboard driver in order to send and receive MIDI data over the PS/2 line. This protocol has been partly reverse-engineered, making it possible to use the Prodikeys DM on a regular USB port using an Arduino microcontroller as an adaptor.
https://en.wikipedia.org/wiki/David%20Masser
David William Masser (born 8 November 1948) is Professor Emeritus in the Department of Mathematics and Computer Science at the University of Basel. He is known for his work in transcendental number theory, Diophantine approximation, and Diophantine geometry. With Joseph Oesterlé in 1985, Masser formulated the abc conjecture, which has been called "the most important unsolved problem in Diophantine analysis". Early life and education Masser was born on 8 November 1948 in London, England. He graduated from Trinity College, Cambridge with a B.A. (Hons) in 1970. In 1974, he obtained his M.A. and Ph.D. at the University of Cambridge, with a doctoral thesis under the supervision of Alan Baker titled Elliptic Functions and Transcendence. Career Masser was a Lecturer at the University of Nottingham from 1973 to 1975, before spending the 1975–1976 year as a Research Fellow of Trinity College at the University of Cambridge. He returned to the University of Nottingham to serve as a Lecturer from 1976 to 1979 and then as a Reader from 1979 to 1983. He was a professor at the University of Michigan from 1983 to 1992. He then moved to the Mathematics Institute at the University of Basel and became emeritus there in 2014. Research Masser's research focuses on transcendental number theory, Diophantine approximation, and Diophantine geometry. The abc conjecture originated as the outcome of attempts by Oesterlé and Masser to understand the Szpiro conjecture about elliptic curves. Awards Masser was an invited speaker at the International Congress of Mathematicians in Warsaw in 1983. In 1991, he received the Humboldt Prize. He was elected as a Fellow of the Royal Society in 2005. In 2014, he was elected as a Member of the Academia Europaea. See also Analytic subgroup theorem Bézout's theorem Zilber–Pink conjecture
https://en.wikipedia.org/wiki/MediaPortal
MediaPortal is an open-source media player and digital video recorder software project, often considered an alternative to Windows Media Center. It provides a 10-foot user interface for performing typical PVR/TiVo functionality, including playing, pausing, and recording live TV; playing DVDs, videos, and music; viewing pictures; and other functions. Plugins allow it to perform additional tasks, such as watching online video, listening to music from online services such as Last.fm, and launching other applications such as games. It interfaces with the hardware commonly found in HTPCs, such as TV tuners, infrared receivers, and LCD displays. The MediaPortal source code was initially forked from XBMC (now Kodi), though it has been almost completely re-written since then. MediaPortal is designed specifically for Microsoft Windows, unlike most other open-source media center programs such as MythTV and Kodi, which are usually cross-platform. Features DirectX GUI Video hardware acceleration (VMR / EVR on Windows Vista / 7) TV / Radio (DVB-S, DVB-S2, DVB-T, DVB-C, analog television; Common Interface, DVB radio, DVB EPG, Teletext, etc.) IPTV Recording, pause and time shifting of TV and radio broadcasts Music player Video/DVD player Picture player Internet streams Integrated weather forecasts Built-in RSS reader Metadata web scraping from TheTVDB and The Movie Database Plugins Skins Graphical user interfaces Control MediaPortal can be controlled by any input device that is supported by the Windows operating system: PC remote Keyboard / mouse Gamepad Kinect Wii Remote Android / iOS / WebOS / S60 handset devices Television MediaPortal uses its own TV-Server to allow one central server to be set up with one or more TV cards. All TV-related tasks are handled by the server and streamed over the network to one or more clients. Clients can then install the MediaPortal client software and use the TV-Server to watch live or recorded TV, schedule recordings, view
https://en.wikipedia.org/wiki/Luby%20transform%20code
In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002. Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is in employing a particularly simple algorithm based on the exclusive or (XOR) operation to encode and decode the message. LT codes are rateless because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are erasure correcting codes because they can be used to transmit digital data reliably on an erasure channel. The next generation beyond LT codes are Raptor codes (see for example IETF RFC 5053 or IETF RFC 6330), which have linear time encoding and decoding. Raptor codes are fundamentally based on LT codes, i.e., encoding for Raptor codes uses two encoding stages, where the second stage is LT encoding. Similarly, decoding with Raptor codes primarily relies upon LT decoding, but LT decoding is intermixed with more advanced decoding techniques. The RaptorQ code specified in IETF RFC 6330, which is the most advanced fountain code, has vastly superior decoding probabilities and performance compared to using only an LT code. Why use an LT code? The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication. The sender encodes and sends a packet of information. The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again. This two-way process continues until all the packets in the message have been transferred successfully. Certain networks,
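A minimal sketch of the XOR-based encode/decode idea (my own illustration: degrees are drawn uniformly rather than from the robust soliton distribution used by real LT codes, and a shared seed stands in for transmitting each packet's neighbor list):

```python
# Toy LT encoding and peeling decoding over equal-length byte blocks.
import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_packet(blocks, seed):
    rng = random.Random(seed)                 # seed replaces an explicit
    d = rng.randint(1, len(blocks))           # neighbor list (naive degree)
    neighbors = set(rng.sample(range(len(blocks)), d))
    value = b"\x00" * len(blocks[0])
    for i in neighbors:
        value = xor(value, blocks[i])         # XOR the chosen source blocks
    return neighbors, value

def decode(packets, n):
    recovered = [None] * n
    work = [[set(nb), val] for nb, val in packets]
    progress = True
    while progress:                           # peel until nothing changes
        progress = False
        for pkt in work:
            nb, val = pkt
            for i in [j for j in nb if recovered[j] is not None]:
                val = xor(val, recovered[i])  # remove already-known blocks
                nb.discard(i)
            pkt[1] = val
            if len(nb) == 1:                  # degree-one packet: recover it
                i = next(iter(nb))
                if recovered[i] is None:
                    recovered[i] = val
                    progress = True
    return recovered

blocks = [b"ab", b"cd", b"ef", b"gh"]
packets = [encode_packet(blocks, s) for s in range(20)]  # rateless supply
print(decode(packets, len(blocks)))  # all blocks recovered w.h.p.
```

With this naive degree distribution the decoder succeeds only with high probability; a real receiver would simply keep collecting packets until peeling completes, which is exactly the rateless property described above.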
https://en.wikipedia.org/wiki/Scattering%20channel
In scattering theory, a scattering channel is a quantum state of the colliding system before or after the collision. The Hilbert space spanned by the states before collision (in states) is equal to the space spanned by the states after collision (out states); both are Fock spaces if there is a mass gap. This is the reason why the S matrix, which maps the in states onto the out states, must be unitary. Scattering channels are also called scattering asymptotes. The Møller operators map the scattering channels onto the corresponding states which are solutions of the Schrödinger equation, taking the interaction Hamiltonian into account. The Møller operators are isometric. See also LSZ formalism Scattering
https://en.wikipedia.org/wiki/Nick%20translation
Nick translation (or head translation), developed in 1977 by Peter Rigby and Paul Berg, is a tagging technique in molecular biology in which DNA polymerase I is used to replace some of the nucleotides of a DNA sequence with their labeled analogues, creating a tagged DNA sequence which can be used as a probe in fluorescence in situ hybridization (FISH) or blotting techniques. It can also be used for radiolabeling. The process is called nick translation because the DNA to be processed is treated with DNase to produce single-stranded "nicks". This is followed by replacement at the nicked sites by DNA polymerase I, which elongates the 3' hydroxyl terminus, removing nucleotides by 5'→3' exonuclease activity and replacing them with dNTPs. To radioactively label a DNA fragment for use as a probe in blotting procedures, one of the incorporated nucleotides provided in the reaction is radiolabeled in the alpha phosphate position. Similarly, a fluorophore can be attached instead for fluorescent labelling, or an antigen for immunodetection. When DNA polymerase I eventually detaches from the DNA, it leaves another nick in the phosphate backbone. The nick has "translated" some distance depending on the processivity of the polymerase. This nick can be sealed by DNA ligase, or its 3' hydroxyl group can serve as a substrate for further DNA polymerase I activity. Proprietary enzyme mixes are available commercially to perform all steps in the procedure in a single incubation. Nick translation can cause double-stranded DNA breaks if DNA polymerase I encounters another nick on the opposite strand, resulting in two shorter fragments. This does not influence the performance of the labelled probe in in-situ hybridization.
https://en.wikipedia.org/wiki/Xinetd
In computer networking, xinetd (Extended Internet Service Daemon) is an open-source super-server daemon which runs on many Unix-like systems and manages Internet-based connectivity. It offers a more secure alternative to the older inetd ("the Internet daemon"), which most modern Linux distributions have deprecated. Description xinetd listens for incoming requests over a network and launches the appropriate service for that request. Requests are made using port numbers as identifiers, and xinetd usually launches another daemon to handle the request. It can be used to start services with both privileged and non-privileged port numbers. xinetd features access control mechanisms such as TCP Wrapper ACLs, extensive logging capabilities, and the ability to make services available based on time. It can place limits on the number of servers that the system can start, and has deployable defense mechanisms to protect against port scanners, among other things. On some implementations of Mac OS X, this daemon starts and maintains various Internet-related services, including FTP and telnet. As an extended form of inetd, it offers enhanced security. It replaced inetd in Mac OS X v10.3, and subsequently launchd replaced it in Mac OS X v10.4. However, Apple has retained inetd for compatibility purposes. Configuration Configuration of xinetd resides in the default configuration file /etc/xinetd.conf, and configuration of the services it supports resides in configuration files stored in the /etc/xinetd.d directory. The configuration for each service usually includes a switch to control whether xinetd should enable or disable the service. An example configuration file for the RFC 868 time server (the stanza below "service time" is the stock form shipped by common distributions):

    # default: off
    # description: An RFC 868 time server. This protocol provides a
    # site-independent, machine readable date and time. The Time service sends back
    # to the originating source the time in seconds since midnight on January first
    # 1900.
    # This is the tcp version.
    service time
    {
            disable         = yes
            type            = INTERNAL
            id              = time-stream
            socket_type     = stream
            protocol        = tcp
            user            = root
            wait            = no
    }
https://en.wikipedia.org/wiki/Small%20subgroup%20confinement%20attack
In cryptography, a subgroup confinement attack, or small subgroup confinement attack, on a cryptographic method that operates in a large finite group is an attack in which an adversary attempts to compromise the method by forcing a key to be confined to an unexpectedly small subgroup of the desired group. Several methods have been found to be vulnerable to subgroup confinement attacks, including some forms or applications of Diffie–Hellman key exchange and DH-EKE.
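A toy illustration of the confinement (a sketch with a deliberately tiny prime; real parameters are enormous, and real attacks also exploit protocol details): the attacker submits a group element of small order, so the victim's "shared" key can take only a couple of values:

```python
# Small subgroup confinement in toy Diffie-Hellman. Never use such parameters
# in practice; the point is only that p - 1 = 22 = 2 * 11 admits a subgroup
# of order 2, generated by p - 1.
import random

p = 23
attacker_value = p - 1        # element of order 2 modulo p

for _ in range(5):
    b = random.randrange(2, p - 1)       # victim's secret exponent
    shared = pow(attacker_value, b, p)   # victim's "shared" key
    print(shared)                        # always 1 or 22: confined to a
                                         # 2-element subgroup, trivially guessed
```

Choosing a safe prime (so that (p − 1)/2 is also prime) or validating received group elements are the standard countermeasures.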
https://en.wikipedia.org/wiki/Avoided%20crossing
In quantum physics and quantum chemistry, an avoided crossing (sometimes called intended crossing, non-crossing or anticrossing) is the phenomenon where two eigenvalues of a Hermitian matrix representing a quantum observable and depending on N continuous real parameters cannot become equal in value ("cross") except on a manifold of N-3 dimensions. The phenomenon is also known as the von Neumann–Wigner theorem. In the case of a diatomic molecule (with one parameter, namely the bond length), this means that the eigenvalues cannot cross at all. In the case of a triatomic molecule, this means that the eigenvalues can coincide only at a single point (see conical intersection). This is particularly important in quantum chemistry. In the Born–Oppenheimer approximation, the electronic molecular Hamiltonian is diagonalized on a set of distinct molecular geometries (the obtained eigenvalues are the values of the adiabatic potential energy surfaces). The geometries for which the potential energy surfaces avoid crossing are the locus where the Born–Oppenheimer approximation fails. Avoided crossings also occur in the resonance frequencies of undamped mechanical systems, where the stiffness and mass matrices are real symmetric. There the resonance frequencies are the square roots of the generalized eigenvalues. In two-state systems Emergence Study of a two-level system is of vital importance in quantum mechanics because it embodies a simplification of many physically realizable systems. The effect of a perturbation on a two-state system Hamiltonian is manifested through avoided crossings in the plot of the individual eigenstate energies against the energy difference. The two-state Hamiltonian can be written as $$\mathbf{H} = \begin{pmatrix} E_1 & 0 \\ 0 & E_2 \end{pmatrix},$$ the eigenvalues of which are $E_1$ and $E_2$, with eigenvectors $|\psi_1\rangle$ and $|\psi_2\rangle$. These two eigenvectors designate the two states of the system. If the system is prepared in either of the states it would remain in that state. If $E_1$ happens to be equal to $E_2$ there will be a twofold dege
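The level repulsion can be seen numerically; in this small sketch (illustrative numbers), adding a coupling W to the diagonal two-state Hamiltonian above keeps the eigenvalues at least 2|W| apart:

```python
# Eigenvalues of a perturbed two-state Hamiltonian. The bare levels d and -d
# would cross at d = 0; the coupling W turns the crossing into an avoided one.
import numpy as np

W = 0.5
for d in (-2.0, -1.0, 0.0, 1.0, 2.0):
    H = np.array([[d, W], [W, -d]])
    lo, hi = np.linalg.eigvalsh(H)        # exact: +/- sqrt(d**2 + W**2)
    print(f"d={d:+.1f}  E-={lo:+.3f}  E+={hi:+.3f}  gap={hi - lo:.3f}")
# the minimum gap, 2*W = 1.0, occurs at d = 0 instead of a crossing
```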
https://en.wikipedia.org/wiki/Custom%20hardware%20attack
In cryptography, a custom hardware attack uses specifically designed application-specific integrated circuits (ASIC) to decipher encrypted messages. Mounting a cryptographic brute force attack requires a large number of similar computations: typically trying one key, checking if the resulting decryption gives a meaningful answer, and then trying the next key if it does not. Computers can perform these calculations at a rate of millions per second, and thousands of computers can be harnessed together in a distributed computing network. But the number of computations required on average grows exponentially with the size of the key, and for many problems standard computers are not fast enough. On the other hand, many cryptographic algorithms lend themselves to fast implementation in hardware, i.e. networks of logic circuits, also known as gates. Integrated circuits (ICs) are constructed of these gates and often can execute cryptographic algorithms hundreds of times faster than a general purpose computer. Each IC can contain large numbers of gates (hundreds of millions in 2005). Thus, the same decryption circuit, or cell, can be replicated thousands of times on one IC. The communications requirements for these ICs are very simple. Each must be initially loaded with a starting point in the key space and, in some situations, with a comparison test value (see known plaintext attack). Output consists of a signal that the IC has found an answer and the successful key. Since ICs lend themselves to mass production, thousands or even millions of ICs can be applied to a single problem. The ICs themselves can be mounted in printed circuit boards. A standard board design can be used for different problems since the communication requirements for the chips are the same. Wafer-scale integration is another possibility. The primary limitations on this method are the cost of chip design, IC fabrication, floor space, electric power and thermal dissipation. History The earliest c
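The inner loop that such hardware replicates is simple; a minimal sketch using a toy XOR "cipher" as a stand-in for a real algorithm, with the known-plaintext test mentioned above:

```python
# Brute-force key search over a toy 16-bit XOR "cipher". Each hardware cell
# would run exactly this loop over its own slice of the key space.
def decrypt(ct: bytes, key: int) -> bytes:
    ks = key.to_bytes(2, "big") * (len(ct) // 2 + 1)   # repeating keystream
    return bytes(c ^ k for c, k in zip(ct, ks))

def search(ct: bytes, known_plaintext: bytes, start: int, stop: int):
    # scan keys in [start, stop); a known-plaintext prefix is the test value
    for key in range(start, stop):
        if decrypt(ct, key).startswith(known_plaintext):
            return key
    return None

secret = 0xBEEF
ct = decrypt(b"ATTACK AT DAWN", secret)        # XOR is its own inverse
print(hex(search(ct, b"ATTACK", 0, 1 << 16)))  # recovers 0xbeef
```

Splitting the range (start, stop) across thousands of cells is the whole communications protocol the text describes: load a starting point and a test value, then wait for a success signal.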
https://en.wikipedia.org/wiki/Chen%E2%80%93Ho%20encoding
Chen–Ho encoding is a memory-efficient alternate system of binary encoding for decimal digits. The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD. The encoding reduces the storage requirements of two decimal digits (100 states) from 8 to 7 bits, and those of three decimal digits (1000 states) from 12 to 10 bits, using only simple Boolean transformations and avoiding any complex arithmetic operations like a base conversion. History In what appears to have been a multiple discovery, some of the concepts behind what later became known as Chen–Ho encoding were independently developed by Theodore M. Hertz in 1969 and by Tien Chi Chen (1928–) in 1971. Hertz of Rockwell filed a patent for his encoding in 1969, which was granted in 1971. Chen first discussed his ideas with Irving Tze Ho (1921–2003) in 1971. Chen and Ho were both working for IBM at the time, albeit in different locations. Chen also consulted with Frank Chin Tung to verify the results of his theories independently. IBM filed a patent in their name in 1973, which was granted in 1974. At least by 1973, Hertz's earlier work must have been known to them, as the patent cites his patent as prior art. With input from Joseph D. Rutledge and John C. McPherson, the final version of the Chen–Ho encoding was circulated inside IBM in 1974 and published in 1975 in the journal Communications of the ACM. This version included several refinements, primarily related to the application of the encoding system. It constitutes a Huffman-like prefix code. The encoding was referred to as Chen and Ho's scheme in 1975, Chen's encoding in 1982, and became known as Chen–Ho encoding or the Chen–Ho algorithm since 2000. After having filed a patent for it in 2001, Michael F. Cowlishaw published a
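The bit-count claim is easy to check; this small sketch contrasts BCD with a dense 10-bit packing of three digits (plain base conversion is used here for clarity, whereas Chen–Ho achieves the same density with simple Boolean transformations instead):

```python
# BCD spends 4 bits per digit (12 bits for 3 digits); 2**10 = 1024 >= 1000,
# so 10 bits suffice for any 3-digit group.
def bcd(digits):
    bits = 0
    for d in digits:
        bits = (bits << 4) | d
    return bits

def dense10(digits):              # base conversion, not the Chen-Ho transform
    d2, d1, d0 = digits
    return 100 * d2 + 10 * d1 + d0

print(f"{bcd([9, 3, 7]):012b}")      # 100100110111  (12 bits)
print(f"{dense10([9, 3, 7]):010b}")  # 1110101001    (937 in 10 bits)
```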
https://en.wikipedia.org/wiki/Reduced%20product
In model theory, a branch of mathematical logic, and in algebra, the reduced product is a construction that generalizes both direct product and ultraproduct. Let {Si | i ∈ I} be a nonempty family of structures of the same signature σ indexed by a set I, and let U be a proper filter on I. The domain of the reduced product is the quotient of the Cartesian product by a certain equivalence relation ~: two elements (ai) and (bi) of the Cartesian product are equivalent if $\{i \in I : a_i = b_i\} \in U.$ If U only contains I as an element, the equivalence relation is trivial, and the reduced product is just the direct product. If U is an ultrafilter, the reduced product is an ultraproduct. Operations from σ are interpreted on the reduced product by applying the operation pointwise. Relations are interpreted by $$R\left((a^1_i)/{\sim}, \dots, (a^n_i)/{\sim}\right) \iff \{i \in I : R^{S_i}(a^1_i, \dots, a^n_i)\} \in U.$$ For example, if each structure is a vector space, then the reduced product is a vector space with addition defined as (a + b)i = ai + bi and multiplication by a scalar c as (ca)i = c ai.
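A finite toy example of the identification step (a sketch; interesting reduced products use infinite index sets): with I = {0, 1, 2} and the proper filter U generated by {0, 1}, two tuples are identified exactly when they agree on coordinates 0 and 1:

```python
# Equivalence modulo a filter: (a_i) ~ (b_i) iff the agreement set is in U.
I = (0, 1, 2)
U = [frozenset({0, 1}), frozenset(I)]          # filter generated by {0, 1}

def equivalent(a, b):
    agree = frozenset(i for i in I if a[i] == b[i])
    return any(agree >= f for f in U)          # agreement set contains a filter set

a = (1, 2, 3)
print(equivalent(a, (1, 2, 99)))   # True: they agree on {0, 1}, which is in U
print(equivalent(a, (1, 5, 3)))    # False: they agree only on {0, 2}
```

Coordinate 2 is "ignored" by this filter, which is the mechanism by which richer filters (and ultrafilters) discard negligible sets of indices.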
https://en.wikipedia.org/wiki/Ipchains
Linux IP Firewalling Chains, normally called ipchains, is free software to control the packet filter or firewall capabilities in the 2.2 series of Linux kernels. It superseded ipfirewall (managed by the ipfwadm command), but was replaced by iptables in the 2.4 series. Unlike iptables, ipchains is stateless. It is a rewrite of Linux's previous IPv4 firewall, ipfirewall. This newer ipchains was required to manage the packet filter in Linux kernels starting with version 2.1.102 (which was a 2.2 development release). Patches are also available to add ipchains to 2.0 and earlier 2.1 series kernels. Improvements include larger maxima for packet counting, filtering for fragmented packets and a wider range of protocols, and the ability to match packets based on the inverse of a rule. The ipchains suite also included some shell scripts for easier maintenance and to emulate the behavior of the old ipfwadm command. The ipchains software was superseded by the iptables system in Linux kernel 2.4 and above, which was in turn superseded by the nftables system in 2014.
https://en.wikipedia.org/wiki/Loop%20space
In topology, a branch of mathematics, the loop space ΩX of a pointed topological space X is the space of (based) loops in X, i.e. continuous pointed maps from the pointed circle S1 to X, equipped with the compact-open topology. Two loops can be multiplied by concatenation. With this operation, the loop space is an A∞-space. That is, the multiplication is homotopy-coherently associative. The set of path components of ΩX, i.e. the set of based-homotopy equivalence classes of based loops in X, is a group, the fundamental group π1(X). The iterated loop spaces of X are formed by applying Ω a number of times. There is an analogous construction for topological spaces without basepoint. The free loop space of a topological space X is the space of maps from the circle S1 to X with the compact-open topology. The free loop space of X is often denoted by $LX$. As a functor, the free loop space construction is right adjoint to cartesian product with the circle, while the loop space construction is right adjoint to the reduced suspension. This adjunction accounts for much of the importance of loop spaces in stable homotopy theory. (A related phenomenon in computer science is currying, where the cartesian product is adjoint to the hom functor.) Informally this is referred to as Eckmann–Hilton duality. Eckmann–Hilton duality The loop space is dual to the suspension of the same space; this duality is sometimes called Eckmann–Hilton duality. The basic observation is that $$[\Sigma X, Z] \cong [X, \Omega Z],$$ where $[A, B]$ is the set of homotopy classes of maps $A \to B$, $\Sigma A$ is the suspension of A, and $\cong$ denotes the natural homeomorphism. This homeomorphism is essentially that of currying, modulo the quotients needed to convert the products to reduced products. In general, $[A, B]$ does not have a group structure for arbitrary spaces $A$ and $B$. However, it can be shown that $[\Sigma A, B]$ and $[A, \Omega B]$ do have natural group structures when $A$ and $B$ are pointed, and the aforementioned isomorphism is of those groups. Thus, setting $A = S^{k-1}$ (the sphere) gives the relationship $\pi_k(X) \cong \pi_{k-1}(\Omega X)$.
https://en.wikipedia.org/wiki/Lotus%20effect
The lotus effect refers to self-cleaning properties that are a result of ultrahydrophobicity as exhibited by the leaves of Nelumbo, the lotus flower. Dirt particles are picked up by water droplets due to the micro- and nanoscopic architecture on the surface, which minimizes the droplet's adhesion to that surface. Ultrahydrophobicity and self-cleaning properties are also found in other plants, such as Tropaeolum (nasturtium), Opuntia (prickly pear), Alchemilla, cane, and also on the wings of certain insects. The phenomenon of ultrahydrophobicity was first studied by Dettre and Johnson in 1964 using rough hydrophobic surfaces. Their work developed a theoretical model based on experiments with glass beads coated with paraffin or PTFE telomer. The self-cleaning property of ultrahydrophobic micro-nanostructured surfaces was studied by Wilhelm Barthlott and Ehler in 1977, who described such self-cleaning and ultrahydrophobic properties for the first time as the "lotus effect"; perfluoroalkyl and perfluoropolyether ultrahydrophobic materials were developed by Brown in 1986 for handling chemical and biological fluids. Other biotechnical applications have emerged since the 1990s. Functional principle The high surface tension of water causes droplets to assume a nearly spherical shape, since a sphere has minimal surface area, and this shape therefore minimizes the solid-liquid surface energy. On contact of liquid with a surface, adhesion forces result in wetting of the surface. Either complete or incomplete wetting may occur depending on the structure of the surface and the fluid tension of the droplet. The cause of self-cleaning properties is the hydrophobic water-repellent double structure of the surface. This enables the contact area and the adhesion force between surface and droplet to be significantly reduced, resulting in a self-cleaning process. This hierarchical double structure is formed out of a characteristic epidermis (its outermost layer called the cuticle) an
https://en.wikipedia.org/wiki/Sylvester%20matrix
In mathematics, a Sylvester matrix is a matrix associated to two univariate polynomials with coefficients in a field or a commutative ring. The entries of the Sylvester matrix of two polynomials are coefficients of the polynomials. The determinant of the Sylvester matrix of two polynomials is their resultant, which is zero when the two polynomials have a common root (in case of coefficients in a field) or a non-constant common divisor (in case of coefficients in an integral domain). Sylvester matrices are named after James Joseph Sylvester. Definition Formally, let p and q be two nonzero polynomials, respectively of degree m and n. Thus: $$p(z) = p_0 + p_1 z + p_2 z^2 + \cdots + p_m z^m, \qquad q(z) = q_0 + q_1 z + q_2 z^2 + \cdots + q_n z^n.$$ The Sylvester matrix associated to p and q is then the $(n+m) \times (n+m)$ matrix constructed as follows: if n > 0, the first row is $$(p_m \; p_{m-1} \; \cdots \; p_1 \; p_0 \; 0 \; \cdots \; 0);$$ the second row is the first row, shifted one column to the right; the first element of the row is zero. the following n − 2 rows are obtained the same way, shifting the coefficients one column to the right each time and setting the other entries in the row to be 0. if m > 0 the (n + 1)th row is $$(q_n \; q_{n-1} \; \cdots \; q_1 \; q_0 \; 0 \; \cdots \; 0);$$ the following rows are obtained the same way as before. Thus, if m = 4 and n = 3, the matrix is: $$S_{p,q} = \begin{pmatrix} p_4 & p_3 & p_2 & p_1 & p_0 & 0 & 0 \\ 0 & p_4 & p_3 & p_2 & p_1 & p_0 & 0 \\ 0 & 0 & p_4 & p_3 & p_2 & p_1 & p_0 \\ q_3 & q_2 & q_1 & q_0 & 0 & 0 & 0 \\ 0 & q_3 & q_2 & q_1 & q_0 & 0 & 0 \\ 0 & 0 & q_3 & q_2 & q_1 & q_0 & 0 \\ 0 & 0 & 0 & q_3 & q_2 & q_1 & q_0 \end{pmatrix}.$$ If one of the degrees is zero (that is, the corresponding polynomial is a nonzero constant polynomial), then there are zero rows consisting of coefficients of the other polynomial, and the Sylvester matrix is a diagonal matrix of dimension the degree of the non-constant polynomial, with all diagonal coefficients equal to the constant polynomial. If m = n = 0, then the Sylvester matrix is the empty matrix with zero rows and zero columns. A variant The above defined Sylvester matrix appears in a Sylvester paper of 1840. In a paper of 1853, Sylvester introduced the following matrix, which is, up to a permutation of the rows, the Sylvester matrix of p and q, which are both considered as having degree max(m, n). This is thus a $2\max(m,n) \times 2\max(m,n)$ matrix containing $\max(m,n)$ pairs of rows. Assuming $m \geq n$, it is obtained as follows: the first pair is:
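A quick computational sketch of the construction and its link to the resultant (my own illustration; coefficient lists run from the highest degree down, matching the rows above):

```python
# Build the Sylvester matrix of two polynomials and take its determinant
# as the resultant; a zero resultant signals a common root.
import numpy as np

def sylvester(p, q):
    m, n = len(p) - 1, len(q) - 1
    s = np.zeros((m + n, m + n))
    for i in range(n):                   # n rows of p's coefficients
        s[i, i:i + m + 1] = p
    for i in range(m):                   # m rows of q's coefficients
        s[n + i, i:i + n + 1] = q
    return s

p = [1, -3, 2]        # x^2 - 3x + 2 = (x - 1)(x - 2)
q = [1, -1]           # x - 1 shares the root x = 1

print(np.linalg.det(sylvester(p, q)))        # ~0: common root detected
print(np.linalg.det(sylvester(p, [1, -4])))  # 6.0: no common root
```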
https://en.wikipedia.org/wiki/Hashiwokakero
Hashiwokakero (橋をかけろ Hashi o kakero; lit. "build bridges!") is a type of logic puzzle published by Nikoli. It has also been published in English under the name Bridges or Chopsticks (based on a mistranslation: the hashi of the title, 橋, means bridge; hashi written with another character, 箸, means chopsticks). It has also appeared in The Times under the name Hashi. In France, Denmark, the Netherlands, and Belgium it is published under the name Ai-Ki-Ai. Rules Hashiwokakero is played on a rectangular grid with no standard size, although the grid itself is not usually drawn. Some cells start out with (usually encircled) numbers from 1 to 8 inclusive; these are the "islands". The rest of the cells are empty. The goal is to connect all of the islands by drawing a series of bridges between the islands. The bridges must follow certain criteria: They must begin and end at distinct islands, travelling a straight line in between. They must not cross any other bridges or islands. They may only run orthogonally (i.e. they may not run diagonally). At most two bridges connect a pair of islands. The number of bridges connected to each island must match the number on that island. The bridges must connect the islands into a single connected group. Solution methods Solving a Hashiwokakero puzzle is a matter of procedural force: having determined where a bridge must be placed, placing it there can eliminate other possible places for bridges, forcing the placement of another bridge, and so on. An island showing '3' in a corner, '5' along the outside edge, or '7' anywhere must have at least one bridge radiating from it in each valid direction, for if one direction did not have a bridge, even if all other directions sported two bridges, not enough would have been placed. A '4' in a corner, '6' along the border, or '8' anywhere must have two bridges in each direction. This can be generalized as added bridges obstruct routes: a '3' that can only be travelled from vertic
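A small sketch of the counting rule described above (the function name and direction encoding are my own): if an island's clue exceeds what its other open directions could supply, every open direction is forced to carry at least one bridge.

def forced_bridges(clue, open_directions):
    # clue: the island's number (1-8); open_directions: directions (out of 4)
    # in which a bridge could legally be placed. Each direction holds at most 2.
    d = len(open_directions)
    forced = {}
    for direction in open_directions:
        # Even with 2 bridges in every *other* direction, this many are still
        # needed in this direction:
        forced[direction] = max(0, clue - 2 * (d - 1))
    return forced

# A '3' in a corner (two open directions) forces one bridge each way;
# a '4' in a corner forces two bridges each way.
print(forced_bridges(3, ["right", "down"]))   # {'right': 1, 'down': 1}
print(forced_bridges(4, ["right", "down"]))   # {'right': 2, 'down': 2}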
https://en.wikipedia.org/wiki/EPPO%20Code
An EPPO code, formerly known as a Bayer code, is an encoded identifier that is used by the European and Mediterranean Plant Protection Organization (EPPO), in a system designed to uniquely identify organisms – namely plants, pests and pathogens – that are important to agriculture and crop protection. EPPO codes are a core component of a database of names, both scientific and vernacular. Although originally started by the Bayer Corporation, the official list of codes is now maintained by EPPO. EPPO code database All codes and their associated names are included in a database (EPPO Global Database). In total, there are over 93,500 species listed in the EPPO database, including: 55,000 species of plants (e.g. cultivated, wild plants and weeds) 27,000 species of animals (e.g. insects, mites, nematodes, rodents), biocontrol agents 11,500 microorganism species (e.g. bacteria, fungi, viruses, viroids and virus-like) Plants are identified by a five-letter code, other organisms by a six-letter one. In many cases the codes are mnemonic abbreviations of the scientific name of the organism, derived from the first three or four letters of the genus and the first two letters of the species. For example, corn, or maize (Zea mays), was assigned the code "ZEAMA"; the code for potato late blight (Phytophthora infestans) is "PHYTIN". The unique and constant code for each organism provides a shorthand method of recording species. The EPPO code avoids many of the problems caused by revisions to scientific names and taxonomy which often result in different synonyms being in use for the same species. When the taxonomy changes, the EPPO code stays the same. The EPPO system is used by governmental organizations, conservation agencies, and researchers. Example External links EPPO Global Database (lookup EPPO codes) EPPO Data Services (download EPPO codes)
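The mnemonic pattern behind the examples above can be illustrated with a rough sketch (my own, not an official EPPO algorithm; real codes are assigned and deduplicated by the EPPO database, which also handles exceptions):

def mnemonic_code(genus, species, genus_letters=3):
    # First three or four letters of the genus plus the first two of the species.
    return (genus[:genus_letters] + species[:2]).upper()

print(mnemonic_code("Zea", "mays"))                    # ZEAMA
print(mnemonic_code("Phytophthora", "infestans", 4))   # PHYTIN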
https://en.wikipedia.org/wiki/Attacking%20Faulty%20Reasoning
Attacking Faulty Reasoning is a textbook on logical fallacies by T. Edward Damer that has been used for many years in a number of college courses on logic, critical thinking, argumentation, and philosophy. It explains 60 of the most commonly committed fallacies. Each of the fallacies is concisely defined and illustrated with several relevant examples. For each fallacy, the text gives suggestions about how to address or to "attack" the fallacy when it is encountered. The organization of the fallacies comes from the author’s own fallacy theory, which defines a fallacy as a violation of one of the five criteria of a good argument: the argument must be structurally well-formed; the premises must be relevant; the premises must be acceptable; the premises must be sufficient in number, weight, and kind; there must be an effective rebuttal of challenges to the argument. Each fallacy falls into at least one of Damer's five fallacy categories, which derive from the above criteria. The five fallacy categories Fallacies that violate the structural criterion. The structural criterion requires that one who argues for or against a position should use an argument that meets the fundamental structural requirements of a well-formed argument, using premises that are compatible with one another, that do not contradict the conclusion, that do not assume the truth of the conclusion, and that are not involved in any faulty deductive inference. Fallacies such as begging the question, denying the antecedent, or undistributed middle violate this criterion. Fallacies that violate the relevance criterion. The relevance criterion requires that one who presents an argument for or against a position should attempt to set forth only reasons that are directly related to the merit of the position at issue. Fallacies such as appeal to tradition, appeal to force, or genetic fallacy fail to meet the argumentative demands of relevance. Fallacies that violate the acceptability criterion. The a
https://en.wikipedia.org/wiki/Refugium%20%28population%20biology%29
In biology, a refugium (plural: refugia) is a location which supports an isolated or relict population of a once more widespread species. This isolation (allopatry) can be due to climatic changes, geography, or human activities such as deforestation and overhunting. Present examples of refugial animal species are the mountain gorilla, isolated to specific mountains in central Africa, and the Australian sea lion, isolated to specific breeding beaches along the south-west coast of Australia, due to humans taking so many of their number as game. This resulting isolation, in many cases, can be seen as only a temporary state; however, some refugia may be longstanding, thereby having many endemic species, not found elsewhere, which survive as relict populations. The Indo-Pacific Warm Pool has been proposed to be a longstanding refugium, based on the discovery of the "living fossil" of a marine dinoflagellate called Dapsilidinium pastielsii, currently found in the Indo-Pacific Warm Pool only. For plants, anthropogenic climate change propels scientific interest in identifying refugial species that were isolated into small or disjunct ranges during glacial episodes of the Pleistocene, yet whose ability to expand their ranges during the warmth of interglacial periods (such as the Holocene) was apparently limited or precluded by topographic, streamflow, or habitat barriers—or by the extinction of coevolved animal dispersers. The concern is that ongoing warming trends will expose them to extirpation or extinction in the decades ahead. In anthropology, refugia often refers specifically to Last Glacial Maximum refugia, where some ancestral human populations may have been forced back to glacial refugia (similar small isolated pockets on the face of the continental ice sheets) during the last glacial period. Going from west to east, suggested examples include the Franco-Cantabrian region (in northern Iberia), the Italian and Balkan peninsulas, the Ukrainian LGM refuge, and the
https://en.wikipedia.org/wiki/Release%20engineering
Release engineering, frequently abbreviated as RE or as the clipped compound Releng, is a sub-discipline in software engineering concerned with the compilation, assembly, and delivery of source code into finished products or other software components. Associated with the software release life cycle, it was said by Boris Debic of Google Inc. that release engineering is to software engineering as manufacturing is to an industrial process: Release engineering is the difference between manufacturing software in small teams or startups and manufacturing software in an industrial way that is repeatable, gives predictable results, and scales well. These industrial style practices not only contribute to the growth of a company but also are key factors in enabling growth. The importance of release engineering in enabling growth of a technology company has been repeatedly argued by John O'Duinn and Bram Adams. While it is not the goal of release engineering to encumber software development with a process overlay, it is often seen as a sign of organizational and developmental maturity. Modern release engineering is concerned with several aspects of software production: Identifiability Being able to identify all of the source, tools, environment, and other components that make up a particular release. Reproducibility The ability to integrate source, third party components, data, and deployment externals of a software system in order to guarantee operational stability. Consistency The mission to provide a stable framework for development, deployment, audit and accountability for software components. Agility Ongoing research into the repercussions of modern software engineering practices on productivity in the software cycle, e.g. continuous integration and push on green initiatives. Release engineering is often the integration hub for more complex software development teams, sitting at the crossroads of development, product management, quality assurance
https://en.wikipedia.org/wiki/Method%20of%20moments%20%28statistics%29
In statistics, the method of moments is a method of estimation of population parameters. The same principle is used to derive higher moments like skewness and kurtosis. It starts by expressing the population moments (i.e., the expected values of powers of the random variable under consideration) as functions of the parameters of interest. Those expressions are then set equal to the sample moments. The number of such equations is the same as the number of parameters to be estimated. Those equations are then solved for the parameters of interest. The solutions are estimates of those parameters. The method of moments was introduced by Pafnuty Chebyshev in 1887 in the proof of the central limit theorem. The idea of matching empirical moments of a distribution to the population moments dates back at least to Pearson. Method Suppose that the problem is to estimate k unknown parameters θ_1, θ_2, ..., θ_k characterizing the distribution of the random variable W. Suppose the first k moments of the true distribution (the "population moments") can be expressed as functions of the θs: μ_j ≡ E[W^j] = g_j(θ_1, θ_2, ..., θ_k), for j = 1, ..., k. Suppose a sample of size n is drawn, resulting in the values w_1, ..., w_n. For j = 1, ..., k, let μ̂_j = (1/n) Σ_{i=1}^{n} w_i^j be the j-th sample moment, an estimate of μ_j. The method of moments estimator for θ_1, θ_2, ..., θ_k, denoted by θ̂_1, θ̂_2, ..., θ̂_k, is defined to be the solution (if one exists) to the equations: g_j(θ̂_1, θ̂_2, ..., θ̂_k) = μ̂_j, for j = 1, ..., k. The method described here for single random variables generalizes in an obvious manner to multiple random variables leading to multiple choices for moments to be used. Different choices generally lead to different solutions [5], [6]. Advantages and disadvantages The method of moments is fairly simple and yields consistent estimators (under very weak assumptions), though these estimators are often biased. It is an alternative to the method of maximum likelihood. However, in some cases the likelihood equations may be intractable without computers, whereas the method-of-moments estimators can be computed much more quickly and easily. Due to easy computability, method-of-momen
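A worked example, assuming NumPy: for a normal distribution, E[W] = μ and E[W^2] = μ^2 + σ^2, so equating population and sample moments and solving gives μ̂ = μ̂_1 and σ̂^2 = μ̂_2 − μ̂_1^2.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(loc=5.0, scale=2.0, size=10_000)   # sample from N(mu=5, sigma=2)

m1 = np.mean(w)        # first sample moment
m2 = np.mean(w**2)     # second sample moment

mu_hat = m1
sigma2_hat = m2 - m1**2   # note: this is the (biased) 1/n variance estimator

print(mu_hat, sigma2_hat)   # close to 5.0 and 4.0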
https://en.wikipedia.org/wiki/Glucomannan
Glucomannan is a water-soluble polysaccharide that is considered a dietary fiber. It is a hemicellulose component in the cell walls of some plant species. Glucomannan is a food additive used as an emulsifier and thickener. It is a major source of mannan oligosaccharide (MOS) found in nature, the other being galactomannan, which is insoluble. Products containing glucomannan, under a variety of brand names, are marketed as dietary supplements with claims they can relieve constipation and help lower cholesterol levels. Since 2010 they have been legally marketed in Europe as helping with weight loss for people who are overweight and eating a calorie-restricted diet, but there is no good evidence that glucomannan helps weight loss. Glucomannan lowers LDL cholesterol by about 10 percent. Supplements containing glucomannans pose a risk for choking and bowel obstruction if they are not taken with sufficient water. Other adverse effects include diarrhea, belching, and bloating; in one study, people taking glucomannans had higher triglyceride levels. Glucomannans are also used to supplement animal feed for farmed animals, so that the animals gain weight more quickly. Chemistry Glucomannan is mainly a straight-chain polymer, with a small amount of branching. The component sugars are β-(1→4)-linked D-mannose and D-glucose in a ratio of 1.6:1. The degree of branching is about 8% through β-(1→6)-glucosyl linkages. Glucomannan with α-(1→6)-linked galactose units in side branches is called galactoglucomannan. Biological function In the yeast cell wall, mannan oligosaccharides are present in complex molecules that are linked to the protein moiety. There are two main locations of mannan oligosaccharides in the surface area of Saccharomyces cerevisiae cell wall. They can be attached to the cell wall proteins as part of –O and –N glycosyl groups and also constitute elements of large α-D-mannanose polysaccharides (α-D-Mannans), which are built of α-(1,2)- and α-(1,3)- D-mannose b
https://en.wikipedia.org/wiki/List%20of%20U.S.%20cities%20with%20diacritics
This is a list of U.S. cities whose official names have diacritics.

Alaska
Utqiaġvik

American Samoa
Āfono
Ālega
'Āmanave
Āmouli
Aūa
Fagasā
Faleāsao
Lumā
Tāfuna

California
La Cañada Flintridge, Los Angeles County
Los Baños, Merced County
Piñon Hills, San Bernardino County
San José, Santa Clara County

Colorado
Cañon, Conejos County
Cañon City
Piñon, Montrose County
Piñon, Pueblo County
Piñon Acres, La Plata County

Guam
Hagåtña
Hagåtña Heights

Hawaii
City names in Hawaii often use the ʻokina, not to be confused with the apostrophe.
Āhuimanu
Āinaloa
Hanapēpē
Haikū-Pauwela
Hālawa
Hāliimaile
Hāmoa, Maui County
Hāna
Hāōū
Hāwī
Hīlea, Hawaii County
Hōlualoa
Hōnaunau-Nāpōopoo
Honokōhau, Maui County
Hoōpūloa, Hawaii County
Kāanapali
Kaimū
Kākio, Maui County
Kalāheo
Kamalō, Maui County
Kāneohe
Kaupō
Kaūpūlehu
Keālia
Kēōkea, Hawaii County
Kēōkea, Maui County
Kīhei
Kīholo, Hawaii County
Kīlauea
Kīpahulu
Kīpū, Maui County
Kōloa
Kūkaiau, Hawaii County
Kūkiʻo, Hawaii County
Lāie
Lānai City
Laupāhoehoe
Lāwai
Līhue
Māalaea
Māili
Mākaha
Mākaha Valley
Mākena
Mānā, Hawaii County
Mokulēia
Mōpua, Maui County
Mūolea, Maui County
Nāālehu
Nāhiku
Nānākuli
Nānāwale Estates
Nāpili-Honokōwai
Nīnole, Hāmākua District, Hawaii County
Nīnole, Kaū District, Hawaii County
Ōmao
Ōmaopio, Maui County
Ōōkala
Pāauhau
Pāhala
Pāhoa
Pāia
Pākalā Village
Pālehua, Honolulu County
Pāpā Bay Estates, Hawaii County
Pāpaaloa
Pāpaikou
Poipū
Puaākala, Hawaii County
Pūālaa, Hawaii County
Puakō
Pūkoo, Maui County
Pūlehu, Maui County
Pūpūkea
Puunēnē
Wahiawā
Wahīlauhue, Maui County
Waikāne
Waikapū
Waimānalo
Waimānalo Beach
Waiōhinu
Waipāhoehoe, Hawaii County
Welokā, Hawaii County

Louisiana
Pointe à la Hache
West Pointe à la Hache

Minnesota
Arnesén, Lake of the Woods County
Lindström

Missouri
O'Fallon, Missouri

New Mexico
Cañada de los Alamos
Cañon, Mora County
Cañon, Sandoval County
Cañonc
https://en.wikipedia.org/wiki/Laetiporus%20sulphureus
Laetiporus sulphureus is a species of bracket fungus (fungi that grow on trees) found in Europe and North America. Its common names are crab-of-the-woods, sulphur polypore, sulphur shelf, and chicken-of-the-woods. Its fruit bodies grow as striking golden-yellow shelf-like structures on tree trunks and branches. Old fruitbodies fade to pale beige or pale grey. The undersurface of the fruit body is made up of tubelike pores rather than gills. Laetiporus sulphureus is a saprophyte and occasionally a weak parasite, causing brown cubical rot in the heartwood of trees on which it grows. Unlike many bracket fungi, it is edible when young, although adverse reactions have been reported. Taxonomy and phylogenetics Laetiporus sulphureus was first described as Boletus sulphureus by French mycologist Pierre Bulliard in 1789. It has had many synonyms and was finally given its current name in 1920 by American mycologist William Murrill. Laetiporus means "with bright pores" and sulphureus means "the colour of sulphur". Investigations in North America have shown that there are several similar species within what has been considered L. sulphureus and that the true L. sulphureus may be restricted to regions east of the Rocky Mountains. Phylogenetic analyses of ITS and nuclear large subunit and mitochondrial small subunit rDNA sequences from North American collections have delineated five distinct clades within the core Laetiporus clade. Sulphureus clade I contains white-pored L. sulphureus isolates, while Sulphureus clade II contains yellow-pored L. sulphureus isolates. Description The fruiting body emerges directly from the trunk of a tree and is initially knob-shaped, but soon expands to fan-shaped shelves, typically growing in overlapping tiers. It is sulphur-yellow to bright orange in color and has a suedelike texture. Old fruitbodies fade to tan or whitish. Each shelf may be anywhere from across and up to thick. The fertile surface is sulphur-yellow with small pores or tub
https://en.wikipedia.org/wiki/American%20Institute%20of%20Physics
The American Institute of Physics (AIP) promotes science and the profession of physics, publishes physics journals, and produces publications for scientific and engineering societies. The AIP is made up of various member societies. Its corporate headquarters are at the American Center for Physics in College Park, Maryland, but the institute also has offices in Melville, New York, and Beijing. Historical overview The AIP was founded in 1931 as a response to lack of funding for the sciences during the Great Depression. It formally incorporated in 1932, consisting of five original "member societies" and a total of four thousand members. A new set of member societies was added beginning in the mid-1960s. As soon as the AIP was established, it began publishing scientific journals. Member societies Affiliated societies List of publications The AIP has a subsidiary called AIP Publishing (a wholly owned non-profit) dedicated to scholarly publishing by the AIP and its member societies, as well as on behalf of other partners. AIP Style Just as the American Chemical Society has its own style called ACS Style, AIP has its own citation style called AIP Style which is commonly used in physics. See also Institute of Physics PACS Science Writing Award SPIE Joan Warnow-Blewett
https://en.wikipedia.org/wiki/Calvatia%20gigantea
Calvatia gigantea, commonly known as the giant puffball, is a puffball mushroom commonly found in meadows, fields, and deciduous forests usually in late summer and autumn. It is found in temperate areas throughout the world. Description Most giant puffballs grow to be , sometimes to be in diameter; although occasionally some can reach diameters up to and weights of . The inside of mature giant puffballs is greenish brown, whereas the interior of immature puffballs is white. The fruiting body of a puffball mushroom will develop within the period of a few weeks and soon begin to decompose and rot, at which point it is dangerous to eat. Unlike most mushrooms, all the spores of the giant puffball are created inside the fruiting body; large specimens can easily contain several trillion spores. Spores are yellowish, smooth, and 3–5 μm in size. Similar fungi Giant puffballs resemble the earthball (Scleroderma citrinum). The latter are distinguished by a much firmer, elastic fruiting body, and having an interior that becomes dark purplish-black with white reticulation early in development. Scleroderma citrinum is poisonous and may cause mild intoxication. Taxonomy The classification of this species has been revised in recent years, as the formerly recognized class Gasteromycetes, which included all puffballs, has been found to be polyphyletic. Some authors place the giant puffball and other members of genus Calvatia in order Agaricales. Also, the species has in the past been placed in two other genera, Lycoperdon and Langermannia. However, the current view is that the giant puffball is Calvatia. Conservation status The giant puffball is widespread and common in the UK. It is protected in parts of Poland and is of conservation concern in Norway. Uses Cooking The large white mushrooms are edible when young, as are all true puffballs, but can cause digestive upset if the spores have begun to form—as indicated by the color of the flesh being not pure white (first ye
https://en.wikipedia.org/wiki/Interval%20arithmetic
Figure: tolerance function (turquoise) and interval-valued approximation (red). Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to mitigate rounding and measurement errors in mathematical computation by computing function bounds. Numerical methods involving interval arithmetic can guarantee relatively reliable and mathematically correct results. Instead of representing a value as a single number, interval arithmetic or interval mathematics represents each value as a range of possibilities. Mathematically, instead of working with an uncertain real-valued variable x, interval arithmetic works with an interval [a, b] that defines the range of values that x can have. In other words, any value of the variable x lies in the closed interval between a and b. A function f, when applied to x, produces an interval f([a, b]) which includes all the possible values for f(x) for all x in [a, b]. Interval arithmetic is suitable for a variety of purposes; the most common use is in scientific works, particularly when the calculations are handled by software, where it is used to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems. Introduction The main objective of interval arithmetic is to provide a simple way of calculating upper and lower bounds of a function's range in one or more variables. These endpoints are not necessarily the true supremum or infimum of a range since the precise calculation of those values can be difficult or impossible; the bounds only need to contain the function's range as a subset. This treatment is typically limite
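A minimal sketch of these ideas (my illustration; production libraries additionally round the endpoints outward so that floating-point error can never shrink an enclosure, which this sketch omits):

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product's range is bounded by the extreme endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)    # an uncertain measurement: 1 <= x <= 2
y = Interval(-1.0, 3.0)
print(x + y, x - y, x * y)   # [0.0, 5.0] [-2.0, 3.0] [-2.0, 6.0]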
https://en.wikipedia.org/wiki/Our%20World%20%281967%20TV%20program%29
Our World was the first live multinational multi-satellite television production. National broadcasters from fourteen countries around the world, coordinated by the European Broadcasting Union (EBU), participated in the program. The two-hour event, which was broadcast on Sunday 25 June 1967 in twenty-four countries, had an estimated audience of 400 to 700 million people, the largest television audience up to that date. Four communications satellites were used to provide worldwide coverage, which was a technological milestone in television broadcasting. Creative artists, including opera singer Heather Harper, film director Franco Zeffirelli, conductor Leonard Bernstein, sculptor Alexander Calder and painter Joan Miró were invited to perform or appear in separate live segments, each of them produced by one of the participant broadcasters. The most famous segment is one from the United Kingdom starring the Beatles performing their song "All You Need Is Love" for the first time. Planning The project was conceived by British Broadcasting Corporation (BBC) producer Aubrey Singer. Due to the magnitude of the production, its coordination was transferred to the European Broadcasting Union (EBU), with Singer as the project's head. Two communications satellites in geosynchronous orbit over the Atlantic Ocean (Intelsat I, known as "Early Bird", and Intelsat II F-3, "Canary Bird"), two over the Pacific Ocean (Intelsat II F-2, "Lani Bird", and NASA's ATS-1) and nine ground stations, in addition to EBU's Eurovision point-to-point communications network, all monitored by technical and production teams in forty-three control rooms, were used to link North America, Europe, Tunisia, Japan and Australia in real time. The master control room for the broadcast was the TC1 studio control room at the BBC Television Centre in London. Contributions from North America, Japan and Australia were routed to London by the CBS Switching Center in New York (which was rented for the purpose),
https://en.wikipedia.org/wiki/Dander
Dander is material shed from the body of humans and other animals that have fur, hair, or feathers. The term is similar to dandruff, when an excess of flakes becomes visible. Skin flakes that come off the main body of an animal are dander, while the flakes of skin called dandruff come from the scalp and are composed of epithelial skin cells. The surface layer of mammalian skin is called the stratum corneum, which is shed as part of normal skin replacement. Dander is microscopic, and can be transported through the air in house dust, where it forms the diet of the dust mites. Through the air, dander can enter the mucous membranes in the nose and lungs, causing allergies in susceptible individuals, largely through the mechanism of allergy to proteins in the bodies of the dust mites that live on dander. Dander builds up in carpets and in mattresses and pillows, so smooth surfaces make it easier to control levels of dander. More pet dander is sloughed off in older animals than in younger animals. Dander build-up can be a cause of allergies, such as allergic rhinitis, in humans. Dr. Paivi Salo, an allergy expert at the National Institutes of Health, states that "airborne allergies affect approximately 10-30% of adults and 40% of children." Damp dusting and vacuum cleaners with sealed bodies and fitted with HEPA filters reduce re-distribution of the dander dust, with associated dust mites, into the air. The word dander may be a dialect synonym of dandruff, possibly from Yorkshire in England. See also Allergy to cats Allergy to dogs Dandruff Powder down
https://en.wikipedia.org/wiki/B-tagging
b-tagging is a method of jet flavor tagging used in modern particle physics experiments. It is the identification (or "tagging") of jets originating from bottom quarks (or b quarks, hence the name). Importance b-tagging is important because: The physics of bottom quarks is quite interesting; in particular, it sheds light on CP violation. Some important high-mass particles (both recently discovered and hypothetical) decay into bottom quarks. Top quarks very nearly always do so, and the Higgs boson is expected to decay into bottom quarks more than any other particle given its mass has been observed to be about 125 GeV. Identifying bottom quarks helps to identify the decays of these particles. Methods The methods for b-tagging are based on the unique features of b-jets. These include: Hadrons containing bottom quarks have sufficient lifetime that they travel some distance before decaying. On the other hand, their lifetimes are not so high as those of light quark hadrons, so they decay inside the detector rather than escape. The advent of precision silicon detectors within particle detectors has made it possible to identify particles that originate from a place different to where the bottom quark was formed (e.g. the beam–beam collision point in a particle accelerator), and thus indicating the likely presence of a b-jet. The bottom quark is much more massive than anything it decays into. Thus its decay products tend to have higher transverse momentum (momentum perpendicular to the original direction of the bottom quark, and therefore of the b-jet). This causes b-jets to be wider, have higher multiplicities (numbers of constituent particles) and invariant masses, and also to contain low-energy leptons with momentum perpendicular to the jet. These two features can be measured, and jets that have them are more likely to be b-jets. Opposite-side algorithms have been used at the LHCb to tag the flavor in pairs of b quarks using the decay products of B-hadrons to in
https://en.wikipedia.org/wiki/Patch%20Tuesday
Patch Tuesday (also known as Update Tuesday) is an unofficial term used to refer to when Microsoft, Adobe, Oracle and others regularly release software patches for their software products. It is widely referred to in this way by the industry. Microsoft formalized Patch Tuesday in October 2003. Patch Tuesday is known within Microsoft also as the "B" release, to distinguish it from the "C" and "D" releases that occur in the third and fourth weeks of the month, respectively. Patch Tuesday occurs on the second Tuesday of each month in North America. Critical security updates are occasionally released outside of the normal Patch Tuesday cycle; these are known as "Out-of-band" releases. As far as the integrated Windows Update (WU) function is concerned, Patch Tuesday begins at 10:00 a.m. Pacific Time. Vulnerability information is immediately available in the Security Update Guide. The updates show up in Download Center before they are added to WU, and the KB articles are unlocked later. Daily updates consist of malware database refreshes for Microsoft Defender and Microsoft Security Essentials; these updates are not part of the normal Patch Tuesday release cycle. History Starting with Windows 98, Microsoft included Windows Update, which once installed and executed would check for patches to Windows and its components, which Microsoft would release intermittently. With the release of Microsoft Update, this system also checks for updates for other Microsoft products, such as Microsoft Office, Visual Studio and SQL Server. Earlier versions of Windows Update suffered from two problems: Less experienced users often remained unaware of Windows Update and did not install it. Microsoft countered this issue in Windows ME with the Automatic Updates component, which displayed availability of updates, with the option of automatic installation. Customers with multiple copies of Windows, such as corporate users, not only had to update every Windows deployment in the company but
https://en.wikipedia.org/wiki/Generalized%20processor%20sharing
Generalized processor sharing (GPS) is an ideal scheduling algorithm for process schedulers and network schedulers. It is related to the fair-queuing principle which groups packets into classes and shares the service capacity between them. GPS shares this capacity according to some fixed weights. In process scheduling, GPS is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness." Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes), and can be arbitrarily split. There are several service disciplines which track the performance of GPS quite closely such as weighted fair queuing (WFQ), also known as packet-by-packet generalized processor sharing (PGPS). Justification In a network such as the internet, different application types require different levels of performance. For example, email is a genuinely store and forward kind of application, but videoconferencing isn't since it requires low latency. When packets are queued up on one end of a congested link, the node usually has some freedom in deciding the order in which it should send the queued packets. One example ordering is simply first-come, first-served, which works fine if the sizes of the queues are small, but can result in problems if there are latency-sensitive packets being blocked by packets from bursty, higher bandwidth applications. Details In GPS, a scheduler handling N flows (also called "classes", or "sessions") is configured with one weight w_i for each flow. Then, the GPS ensures that, considering one flow i, and some time interval (s, t) such that the flow is continuously backlogged on this interval (i.e. the queue is never empty), then, for any other flow j, the following relation holds: O_i(s, t) / O_j(s, t) ≥ w_i / w_j, where O_k(s, t) denotes the amount of bits of the flow k made output on interval (s, t). Then, it can be proved that each flow i will receive at least a rate r_i = (w_i / (w_1 + ... + w_N)) R, where R is the rate of the server. Thi
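A small sketch of the guaranteed-rate formula above (function and flow names are my own): each flow is guaranteed at least its weight's share of the server rate, and backlogged flows receive more whenever other flows are idle.

def gps_guaranteed_rates(weights, server_rate):
    # r_i = w_i / (sum of all weights) * R for each flow i.
    total = sum(weights.values())
    return {flow: w / total * server_rate for flow, w in weights.items()}

# Three flows sharing a 1 Gbit/s link with weights 1, 2 and 5:
rates = gps_guaranteed_rates({"a": 1, "b": 2, "c": 5}, server_rate=1e9)
print(rates)   # a: 125 Mbit/s, b: 250 Mbit/s, c: 625 Mbit/s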
https://en.wikipedia.org/wiki/Domesticated%20hedgehog
The most common species of domesticated hedgehog is the four-toed hedgehog (Atelerix albiventris). The Algerian hedgehog (Atelerix algirus) is a separate species of hedgehog. The domesticated hedgehog kept as a pet is typically the African pygmy hedgehog (Atelerix albiventris). Other species kept as pets include the long-eared hedgehog (Hemiechinus auritus) and the Indian long-eared hedgehog (Hemiechinus collaris). Roman domesticated hedgehog The Romans domesticated a relative of the Algerian hedgehog in the 4th century BCE, to use for meat and quills as well as pets. The Romans also used the quill-covered hedgehog skins to clean their shawls, making them important to commerce, which resulted in the Roman senate regulating the trade in hedgehog skins. The quills were used in the training of other animals, such as keeping a calf from suckling after it had been weaned. Hedgehog quills remained in use for card paper and dissection pins long after the Romans had ceased to actively breed and raise hedgehogs. Modern domestication In the early 1980s, after a hiatus of about 1600 years, hedgehog domestication again became popular. Some U.S. states, however, ban them, or require a license to own one. Since domestication restarted, several new colors of hedgehogs have been cultivated or become common, including albino and pinto hedgehogs. "Pinto" is a color pattern, rather than a color: A total lack of color on the quills and skin beneath, in distinct patches. Currently, the species most common among domestic hedgehogs are African, from warm climates (above 22 °C, 72 °F). They do not hibernate in the wild, and if one of these African hedgehogs begins hibernation in response to lowered body temperature, the result can be its death. The process is easily reversed by warming, if caught within a few days of onset. Legality Because a hedgehog is commonly kept in a cage or similar enclosure, it is allowed in some residences where cats and dogs are not allowed. It is ille
https://en.wikipedia.org/wiki/The%20g%20Factor%3A%20The%20Science%20of%20Mental%20Ability
The g Factor: The Science of Mental Ability is a 1998 book by psychologist Arthur Jensen about the general factor of human mental ability, or g. Summary The book traces the origins of the idea of individual differences in general mental ability to 19th century researchers Herbert Spencer and Francis Galton. Charles Spearman is credited with inventing factor analysis in the early 20th century, which enabled statistical testing of the hypothesis that general mental ability is required in all mental efforts. Spearman gave the name g to the common factor underlying all mental tasks. He suggested that g reflected individual differences in "mental energy", and hoped that future research would uncover the biological basis of this energy. The book argues that because it is difficult to arrive at a consensual scientific definition of the term intelligence, scientists should dispense with the term and focus on specific abilities and their covariances. It also argues that mental abilities are best conceptualized as a three-level hierarchy, with a large number of narrow abilities at the base, a relatively small number of broad factors at the intermediate level, and a single general factor, g, at the apex. The g factor can be derived from a correlation matrix of mental ability tests by many different methods of factor analysis. A g factor always emerges provided that the test battery is sufficiently large and diverse. The only exception is when one uses orthogonal rotation, which precludes the appearance of a g factor. Jensen argues that orthogonal rotation is not appropriate for substantially positively correlated variables such as mental abilities. The g factor has been found to be largely invariant across different factor analytic methods and in different racial and cultural groups. Jensen argues that g is normally distributed in any population. He also contends that g cannot be described in terms of the information content or item characteristics of tests, and likens it to
https://en.wikipedia.org/wiki/King%27s%20Knight
King's Knight is a scrolling shooter video game developed and published by Square for the Nintendo Entertainment System and MSX. The game was released in Japan on September 18, 1986 and in North America in 1989. It was later re-released for the Wii's Virtual Console in Japan on November 27, 2007 and in North America on March 24, 2008. This was followed by a release on the Virtual Console in Japan on February 4, 2015, for the 3DS and on July 6, 2016, for the Wii U. The game became Square's first North American release under their Redmond subsidiary Squaresoft, and their first release as an independent company. The 1986 release's title screen credits Workss for programming. King's Knight saw a second release in 1987 on the NEC PC-8801mkII SR and the Sharp X1. These versions of the game were retitled King's Knight Special and released exclusively in Japan. It was the first game designed by Hironobu Sakaguchi for the Famicom. Nobuo Uematsu provided the musical score for King's Knight. It was Uematsu's third work of video game music composition. Plot King's Knight follows a basic storyline similar to many NES-era role-playing video games: Princess Claire of Olthea has been kidnapped in the Kingdom of Izander, and the player must choose one of the four heroes (the knight/warrior "Ray Jack", the wizard "Kaliva", the monster/gigant "Barusa" and the kid thief "Toby") to train and set forth to attack Gargatua Castle, defeat the evil dragon Tolfida and rescue the princess. Gameplay King's Knight is a vertically scrolling shooter, where the main objective is to dodge or destroy all onscreen enemies and obstacles. Various items, however, add depth to the game. As any character, the player can collect various power-ups to increase a character's level (maximum of twenty levels per character): as many as seven Jump Increases, seven Speed Increases, three Weapon Increases, and three Shield Increases. There are also Life Ups, which are collected to increase the character's life meter. There are
https://en.wikipedia.org/wiki/Folliculogenesis
In biology, folliculogenesis is the maturation of the ovarian follicle, a densely packed shell of somatic cells that contains an immature oocyte. Although the process is similar in many animals, this article deals exclusively with human folliculogenesis. Folliculogenesis describes the progression of a number of small primordial follicles into large preovulatory follicles that occurs in part during the menstrual cycle. Contrary to male spermatogenesis, which can last indefinitely, folliculogenesis ends when the remaining follicles in the ovaries are incapable of responding to the hormonal cues that previously recruited some follicles to mature. This depletion in follicle supply signals the beginning of menopause. Overview The primary role of the follicle is oocyte support. From the whole pool of follicles a woman is born with, only 0.1% of them will give rise to ovulation, whereas 99.9% will break down (in a process called follicular atresia). From birth, the ovaries of the human female contain a number of immature, primordial follicles. These follicles each contain a similarly immature primary oocyte. At puberty, clutches of follicles begin folliculogenesis, entering a growth pattern that ends in ovulation (the process where the oocyte leaves the follicle) or in atresia (death of the follicle's granulosa cells). During follicular development, primordial follicles undergo a series of critical changes in character, both histologically and hormonally. First they change into primary follicles and later into secondary follicles. The follicles then transition to tertiary, or antral, follicles. At this stage in development, they become dependent on hormones, particularly FSH, which causes a substantial increase in their growth rate. The late tertiary or pre-ovulatory follicle ruptures and discharges the oocyte (that has become a secondary oocyte), ending folliculogenesis. Follicle ‘selection’ is the process by which a single ‘dominant’ follicle is chosen from the recruited
https://en.wikipedia.org/wiki/Jet%20%28particle%20physics%29
A jet is a narrow cone of hadrons and other particles produced by the hadronization of a quark or gluon in a particle physics or heavy ion experiment. Particles carrying a color charge, such as quarks, cannot exist in free form because of quantum chromodynamics (QCD) confinement which only allows for colorless states. When an object containing color charge fragments, each fragment carries away some of the color charge. In order to obey confinement, these fragments create other colored objects around them to form colorless objects. The ensemble of these objects is called a jet, since the fragments all tend to travel in the same direction, forming a narrow "jet" of particles. Jets are measured in particle detectors and studied in order to determine the properties of the original quarks. A jet definition includes a jet algorithm and a recombination scheme. The former defines how some inputs, e.g. particles or detector objects, are grouped into jets, while the latter specifies how a momentum is assigned to a jet. For example, jets can be characterized by the thrust. The jet direction (jet axis) can be defined as the thrust axis. In particle physics experiments, jets are usually built from clusters of energy depositions in the detector calorimeter. When studying simulated processes, the calorimeter jets can be reconstructed based on a simulated detector response. However, in simulated samples, jets can also be reconstructed directly from stable particles emerging from fragmentation processes. Particle-level jets are often referred to as truth-jets. A good jet algorithm usually allows for obtaining similar sets of jets at different levels in the event evolution. Typical jet reconstruction algorithms are, e.g., the anti-kT algorithm, kT algorithm, cone algorithm. A typical recombination scheme is the E-scheme, or 4-vector scheme, in which the 4-vector of a jet is defined as the sum of 4-vectors of all its constituents. In relativistic heavy ion physics, jets are importan
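A minimal sketch of the E-scheme recombination just described (my own encoding of 4-vectors as (E, px, py, pz) tuples): the jet 4-vector is the sum of its constituents' 4-vectors, and the jet invariant mass then follows from m^2 = E^2 - |p|^2.

import math

def e_scheme(constituents):
    # Sum each component of the constituent 4-vectors.
    E, px, py, pz = (sum(c[i] for c in constituents) for i in range(4))
    return (E, px, py, pz)

def invariant_mass(p4):
    E, px, py, pz = p4
    return math.sqrt(max(0.0, E**2 - px**2 - py**2 - pz**2))

# Two massless constituents (e.g. photons or idealized hadrons):
jet = e_scheme([(10.0, 10.0, 0.0, 0.0), (10.0, 9.0, math.sqrt(19.0), 0.0)])
print(jet, invariant_mass(jet))   # jet mass ~4.47 from the opening angle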
https://en.wikipedia.org/wiki/Induced%20gamma%20emission
In physics, induced gamma emission (IGE) refers to the process of fluorescent emission of gamma rays from excited nuclei, usually involving a specific nuclear isomer. It is analogous to conventional fluorescence, which is defined as the emission of a photon (unit of light) by an excited electron in an atom or molecule. In the case of IGE, nuclear isomers can store significant amounts of excitation energy for times long enough for them to serve as nuclear fluorescent materials. There are over 800 known nuclear isomers but almost all are too intrinsically radioactive to be considered for applications. There were two proposed nuclear isomers that appeared to be physically capable of IGE fluorescence in safe arrangements: tantalum-180m and hafnium-178m2. History Induced gamma emission is an example of interdisciplinary research bordering on both nuclear physics and quantum electronics. Viewed as a nuclear reaction, it would belong to a class in which only photons were involved in creating and destroying states of nuclear excitation. It is a class usually overlooked in traditional discussions. In 1939 Pontecorvo and Lazard reported the first example of this type of reaction. Indium was the target and in modern terminology describing nuclear reactions it would be written 115In(γ,γ')115mIn. The product nuclide carries an "m" to denote that it has a long enough half life (4.5 h in this case) to qualify as being a nuclear isomer. That is what made the experiment possible in 1939 because the researchers had hours to remove the products from the irradiating environment and then to study them in a more appropriate location. With projectile photons, momentum and energy can be conserved only if the incident photon, X-ray or gamma, has precisely the energy corresponding to the difference in energy between the initial state of the target nucleus and some excited state that is not too different in terms of quantum properties such as spin. There is no threshold behavior and the inc
https://en.wikipedia.org/wiki/Straw%20chamber
A straw chamber is a type of gaseous ionization detector. It is a long tube with a wire down the center and a gas which becomes ionized when a particle passes through. A potential difference is maintained between the wire and the walls of the tube, so that once the gas is ionized, electrons move in one direction and ions in the other. This produces a current which indicates that a particle has passed through the chamber. Many straws together can be used to track particles in a straw tracker. A straw tracker is a type of particle detector which uses many straw chambers to track the path of a particle. The path of a particle is determined by the best fit to all the straws with hits. Since the time for a particular straw to produce a signal is proportional to the distance of the particle's closest approach to that chamber's wire, if a particle on a predictable path (e.g. a helix in a magnetic field) passes through many straws, the path of the particle can be determined more precisely than the size of any particular straw. Specific uses There are about 298,000 drift tubes (straws) in the Transition Radiation Tracker (TRT) of the ATLAS experiment at the Large Hadron Collider.
https://en.wikipedia.org/wiki/Particle%20identification
Particle identification is the process of using information left by a particle passing through a particle detector to identify the type of particle. Particle identification reduces backgrounds and improves measurement resolutions, and is essential to many analyses at particle detectors. Charged particles Charged particles have been identified using a variety of techniques. All methods rely on a measurement of the momentum in a tracking chamber combined with a measurement of the velocity to determine the charged particle mass, and therefore its identity. Specific ionization A charged particle loses energy in matter by ionization at a rate determined in part by its velocity. The energy loss per unit distance is typically called dE/dx. The energy loss is measured either in dedicated detectors or in tracking chambers designed to also measure energy loss. The energy lost in a thin layer of material is subject to large fluctuations, and therefore accurate dE/dx determination requires a large number of measurements. Individual measurements in the low and high energy tails are excluded. Time of flight Time of flight detectors determine charged particle velocity by measuring the time required to travel from the interaction point to the time of flight detector, or between two detectors. The ability to distinguish particle types diminishes as the particle velocity approaches its maximum allowed value, the speed of light, and thus the technique is efficient only for particles with a small Lorentz factor. Cherenkov detectors Cherenkov radiation is emitted by a charged particle when it passes through a material with a speed greater than c/n, where n is the index of refraction of the material. The angle of the photons with respect to the charged particle direction depends on velocity. A number of Cherenkov detector geometries have been used. Photons Photons are identified because they leave all their energy in a detector's electromagnetic calorimeter, but do not appear in the trackin
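A small worked example for the Cherenkov case (my illustration; the function name is my own): the emission angle satisfies cos(theta) = 1/(n*beta), with a threshold at beta = 1/n below which no light is emitted.

import math

def cherenkov_angle(beta, n):
    # Emission angle in radians, or None below the Cherenkov threshold.
    if beta * n <= 1.0:
        return None
    return math.acos(1.0 / (n * beta))

n = 1.33                          # index of refraction of water
print(1.0 / n)                    # threshold velocity, ~0.75 c
print(cherenkov_angle(0.99, n))   # ~0.71 rad (~41 degrees)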
https://en.wikipedia.org/wiki/Epitope%20mapping
In immunology, epitope mapping is the process of experimentally identifying the binding site, or epitope, of an antibody on its target antigen (usually, on a protein). Identification and characterization of antibody binding sites aid in the discovery and development of new therapeutics, vaccines, and diagnostics. Epitope characterization can also help elucidate the binding mechanism of an antibody and can strengthen intellectual property (patent) protection. Experimental epitope mapping data can be incorporated into robust algorithms to facilitate in silico prediction of B-cell epitopes based on sequence and/or structural data. Epitopes are generally divided into two classes: linear and conformational/discontinuous. Linear epitopes are formed by a continuous sequence of amino acids in a protein. Conformational epitopes are formed by amino acids that are nearby in the folded 3D structure but distant in the protein sequence. Note that conformational epitopes can include some linear segments. B-cell epitope mapping studies suggest that most interactions between antigens and antibodies, particularly autoantibodies and protective antibodies (e.g., in vaccines), rely on binding to discontinuous epitopes. Importance for antibody characterization By providing information on mechanism of action, epitope mapping is a critical component in therapeutic monoclonal antibody (mAb) development. Epitope mapping can reveal how a mAb exerts its functional effects - for instance, by blocking the binding of a ligand or by trapping a protein in a non-functional state. Many therapeutic mAbs target conformational epitopes that are only present when the protein is in its native (properly folded) state, which can make epitope mapping challenging. Epitope mapping has been crucial to the development of vaccines against prevalent or deadly viral pathogens, such as chikungunya, dengue, Ebola, and Zika viruses, by determining the antigenic elements (epitopes) that confer long-lasting
https://en.wikipedia.org/wiki/Cosmic%20dust
Cosmic dust (also called extraterrestrial dust, space dust, or star dust) is dust that occurs in outer space or has fallen onto Earth. Most cosmic dust particles measure between a few molecules and , such as micrometeoroids. Larger particles are called meteoroids. Cosmic dust can be further distinguished by its astronomical location: intergalactic dust, interstellar dust, interplanetary dust (as in the zodiacal cloud), and circumplanetary dust (as in a planetary ring). There are several methods of measuring space dust. In the Solar System, interplanetary dust causes the zodiacal light. Solar System dust includes comet dust, planetary dust (like from Mars), asteroidal dust, dust from the Kuiper belt, and interstellar dust passing through the Solar System. Thousands of tons of cosmic dust are estimated to reach Earth's surface every year, with most grains having a mass between 10^−16 kg (0.1 pg) and 10^−4 kg (0.1 g). The density of the dust cloud through which the Earth is traveling is approximately 10^−6 dust grains/m^3. Cosmic dust contains some complex organic compounds (amorphous organic solids with a mixed aromatic–aliphatic structure) that could be created naturally, and rapidly, by stars. A smaller fraction of dust in space is "stardust", consisting of larger refractory minerals that condensed as matter left by stars. Interstellar dust particles were collected by the Stardust spacecraft and samples were returned to Earth in 2006. Study and importance Cosmic dust was once solely an annoyance to astronomers, as it obscured objects they wished to observe. When infrared astronomy began, the dust particles were observed to be significant and vital components of astrophysical processes. Their analysis can reveal information about phenomena like the formation of the Solar System. For example, cosmic dust can drive the mass loss when a star is nearing the end of its life, play a part in the early stages of star formation, and form planets. In the Solar System,
https://en.wikipedia.org/wiki/Cabrito
Cabrito () is the name in both Spanish and Portuguese for roast goat kid in various Iberian and Latin American cuisines. Argentina Cabrito is also a regional specialty of Córdoba Province in Argentina, especially the town of Quilino, which has a festival in its honour. "Chivito" differs from "cabrito" in that chivito is a slightly older animal with less tender meat. The chivito has already begun to eat solid foods, whereas the cabrito is still a suckling. Mexico It is a regional specialty of the city of Monterrey, Mexico, and the surrounding state of Nuevo Leon, based on the Jewish cuisine of the founders of the city. In northern Mexico, cabrito is cooked in a variety of ways: Cabrito al pastor: The best-known and perhaps most popular form. The whole carcass is opened flat and impaled on a spit. The spit is then placed next to a bed of glowing embers and roasted slowly in the open air without seasonings other than the light scent it will absorb from the slow-burning charcoal. Cabrito al horno (oven-roasted cabrito): Toasted slowly in an oven at low temperatures. A number of variants of this preparation have emerged, including some very elaborate processes that involve applying seasonings and covering the cooking meat at specific times to produce a tasty and juicy treat. Cabrito en salsa (cabrito in sauce): The animal is cut into portions, browned in oil and braised in a tomato-based sauce with onions, garlic and green chilies, and other seasonings until tender. Cabrito en sangre (cabrito in blood), sometimes fritada de cabrito: A less common preparation in which the blood of the animal is collected when it is slaughtered and it becomes the basis for the sauce that the goat is braised in, along with the animal's liver, kidneys, and heart, and other seasonings. The end product is tender cabrito in a rich, very dark sauce. Portugal and Brazil In Portuguese, the name cabrito is used for a goat kid (not just roasted) in Northeast Region, Brazil, especially in th
https://en.wikipedia.org/wiki/Koopmans%27%20theorem
Koopmans' theorem states that in closed-shell Hartree–Fock theory (HF), the first ionization energy of a molecular system is equal to the negative of the orbital energy of the highest occupied molecular orbital (HOMO). This theorem is named after Tjalling Koopmans, who published this result in 1934. Koopmans' theorem is exact in the context of restricted Hartree–Fock theory if it is assumed that the orbitals of the ion are identical to those of the neutral molecule (the frozen orbital approximation). Ionization energies calculated this way are in qualitative agreement with experiment – the first ionization energy of small molecules is often calculated with an error of less than two electron volts. Therefore, the validity of Koopmans' theorem is intimately tied to the accuracy of the underlying Hartree–Fock wavefunction. The two main sources of error are orbital relaxation, which refers to the changes in the Fock operator and Hartree–Fock orbitals when changing the number of electrons in the system, and electron correlation, referring to the validity of representing the entire many-body wavefunction using the Hartree–Fock wavefunction, i.e. a single Slater determinant composed of orbitals that are the eigenfunctions of the corresponding self-consistent Fock operator. Empirical comparisons with experimental values and higher-quality ab initio calculations suggest that in many cases, but not all, the energetic corrections due to relaxation effects nearly cancel the corrections due to electron correlation. A similar theorem (Janak's theorem) exists in density functional theory (DFT) for relating the exact first vertical ionization energy and electron affinity to the HOMO and LUMO energies, although both the derivation and the precise statement differ from that of Koopmans' theorem. Ionization energies calculated from DFT orbital energies are usually poorer than those of Koopmans' theorem, with errors much larger than two electron volts possible depending on the excha
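A hedged sketch of this comparison, assuming the third-party PySCF package (the water geometry and basis set are arbitrary illustrative choices): the Koopmans estimate is the negative HOMO energy from a neutral restricted Hartree–Fock calculation, while the Delta-SCF value E(cation) − E(neutral) additionally includes orbital relaxation.

from pyscf import gto, scf

# Neutral molecule, restricted Hartree-Fock.
mol = gto.M(atom="O 0 0 0; H 0 0 0.96; H 0.93 0 -0.24", basis="cc-pvdz")
mf = scf.RHF(mol).run()

homo = mf.mo_energy[mf.mo_occ > 0][-1]   # highest occupied orbital energy
koopmans_ie = -homo                       # Koopmans' theorem estimate

# Cation with one electron removed, unrestricted Hartree-Fock.
cation = mol.copy()
cation.charge, cation.spin = 1, 1
cation.build()
mf_ion = scf.UHF(cation).run()
delta_scf_ie = mf_ion.e_tot - mf.e_tot    # includes orbital relaxation

print(koopmans_ie * 27.2114, delta_scf_ie * 27.2114)   # both in eV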
https://en.wikipedia.org/wiki/Conical%20intersection
In quantum chemistry, a conical intersection of two or more potential energy surfaces is the set of molecular geometry points where the potential energy surfaces are degenerate (intersect) and the non-adiabatic couplings between these states are non-vanishing. In the vicinity of conical intersections, the Born–Oppenheimer approximation breaks down and the coupling between electronic and nuclear motion becomes important, allowing non-adiabatic processes to take place. The location and characterization of conical intersections are therefore essential to the understanding of a wide range of important phenomena governed by non-adiabatic events, such as photoisomerization, photosynthesis, vision and the photostability of DNA. The conical intersection involving the ground electronic state potential energy surface of the C6H3F3+ molecular ion is discussed in connection with the Jahn–Teller effect in Section 13.4.2 on pages 380-388 of the textbook by Bunker and Jensen. Conical intersections are also called molecular funnels or diabolic points; they have become an established paradigm for understanding reaction mechanisms in photochemistry, as important as transition states are in thermal chemistry. This reflects the very important role they play in non-radiative de-excitation transitions from excited electronic states to the ground electronic state of molecules. For example, the stability of DNA with respect to UV irradiation is due to such a conical intersection. The molecular wave packet excited to some electronic excited state by the UV photon follows the slope of the potential energy surface and reaches the conical intersection from above. At this point the very large vibronic coupling induces a non-radiative transition (surface-hopping) which leads the molecule back to its electronic ground state. The singularity of the vibronic coupling at conical intersections is responsible for the existence of the geometric phase, which was discovered by Longuet-Higgins in this contex
https://en.wikipedia.org/wiki/Journal%20of%20Statistical%20Software
The Journal of Statistical Software is a peer-reviewed open-access scientific journal that publishes papers related to statistical software. The Journal of Statistical Software was founded in 1996 by Jan de Leeuw of the Department of Statistics at the University of California, Los Angeles. Its current editors-in-chief are Achim Zeileis, Bettina Grün, Edzer Pebesma, and Torsten Hothorn. It is published by the Foundation for Open Access Statistics. The journal charges no author fees or subscription fees. The journal publishes peer-reviewed articles about statistical software, together with the source code. It also publishes reviews of statistical software and books (by invitation only). Articles are licensed under the Creative Commons Attribution License, while the source codes distributed with articles are licensed under the GNU General Public License. Articles are often about free statistical software and coverage includes packages for the R programming language. Abstracting and indexing The Journal of Statistical Software is indexed in the Current Index to Statistics and the Science Citation Index Expanded. Its 2018 Impact Factor in Journal Citation Reports is 11.655. The journal was named a Rising Star by Science Watch in 2011.
https://en.wikipedia.org/wiki/Supersecondary%20structure
A supersecondary structure is a compact three-dimensional protein structure of several adjacent elements of a secondary structure that is smaller than a protein domain or a subunit. Supersecondary structures can act as nucleation sites in the process of protein folding. Examples Helix supersecondary structures Helix hairpin A helix hairpin, also known as an alpha-alpha hairpin, is composed of two antiparallel alpha helices connected by a loop of two or more residues. True to its name, it resembles a hairpin. A longer loop has a greater number of possible conformations. If short loops connect the helices, then the individual helices will pack together through their hydrophobic residues. The function of a helix hairpin is unknown; however, a four-helix bundle is composed of two helix hairpins, and such bundles carry important ligand binding sites. Helix corner A helix corner, also called an alpha-alpha corner, has two alpha helices almost at right angles to each other connected by a short 'loop'. This loop is formed from a hydrophobic residue. The function of a helix corner is unknown. Helix-loop-helix The helix-loop-helix structure has two helices connected by a 'loop'. These are fairly common and usually bind ligands. For example, calcium binds with the carboxyl groups of the side chains within the loop region between the helices. Helix-turn-helix The helix-turn-helix motif is important for DNA binding and is therefore found in many DNA-binding proteins. Beta sheet supersecondary structures Beta hairpin A beta hairpin is a common supersecondary motif composed of two anti-parallel beta strands connected by a loop. The structure resembles a hairpin and is often found in globular proteins. The loop between the beta strands can range anywhere from 2 to 16 residues. However, most loops contain fewer than seven residues. Residues in beta hairpins with loops of 2, 3, or 4 residues have distinct conformations. However, a wide range of conformations can be seen in longer loops,
https://en.wikipedia.org/wiki/Airport%20problem
In mathematics and especially game theory, the airport problem is a type of fair division problem in which it is decided how to distribute the cost of an airport runway among different players who need runways of different lengths. The problem was introduced by S. C. Littlechild and G. Owen in 1973. Their proposed solution is: Divide the cost of providing the minimum level of required facility for the smallest type of aircraft equally among the number of landings of all aircraft. Divide the incremental cost of providing the minimum level of required facility for the second smallest type of aircraft (above the cost of the smallest type) equally among the number of landings of all but the smallest type of aircraft. Continue thus until finally the incremental cost of the largest type of aircraft is divided equally among the number of landings made by the largest aircraft type. The authors note that the resulting set of landing charges is the Shapley value for an appropriately defined game. Introduction In an airport problem there is a finite population N and a nonnegative cost function C : N → R. For technical reasons it is assumed that the population is taken from the set of the natural numbers: players are identified with their 'ranking number'. The cost function satisfies the inequality C(i) < C(j) whenever i < j. It is typical for airport problems that the cost C(i) is assumed to be a part of the cost C(j) if i < j, i.e. a coalition S is confronted with costs c(S) := max_{i ∈ S} C(i). In this way an airport problem generates an airport game (N, c). As the value of each one-person coalition {i} equals C(i), the airport problem can be recovered from the airport game. Nash equilibrium Nash equilibrium, also known as non-cooperative game equilibrium, is an essential concept in game theory, described by John Nash in 1951. A strategy that is optimal for a player regardless of the opponents' strategy choices is called a dominant strategy. If any par
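The sequential rule above is short enough to state directly in code. A minimal sketch, under the simplifying assumption of one landing per player (the function name is illustrative); by the authors' observation, the resulting charges are the Shapley value of the induced airport game:

```python
def airport_shares(costs):
    """Littlechild-Owen sequential allocation for the airport problem.

    costs: runway cost required by each player (one landing per player,
    an illustrative simplification).  Returns each player's charge.
    """
    # Process players in order of increasing required cost, remembering
    # original positions so shares can be reported in input order.
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    shares = [0.0] * len(costs)
    prev_cost = 0.0
    for k, i in enumerate(order):
        # Incremental cost of extending the runway from the previous
        # requirement up to player i's requirement ...
        increment = costs[i] - prev_cost
        # ... split equally among everyone needing at least this much.
        payers = order[k:]
        for j in payers:
            shares[j] += increment / len(payers)
        prev_cost = costs[i]
    return shares

# Three aircraft needing runways costing 1, 2 and 3:
# shares are [1/3, 1/3 + 1/2, 1/3 + 1/2 + 1]; they sum to the total cost 3.
print(airport_shares([1, 2, 3]))
```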
https://en.wikipedia.org/wiki/Obstruction%20theory
In mathematics, obstruction theory is a name given to two different mathematical theories, both of which yield cohomological invariants. In the original work of Stiefel and Whitney, characteristic classes were defined as obstructions to the existence of certain fields of linearly independent vectors. Obstruction theory turns out to be an application of cohomology theory to the problem of constructing a cross-section of a bundle. In homotopy theory The older meaning for obstruction theory in homotopy theory relates to the procedure, inductive with respect to dimension, for extending a continuous mapping defined on a simplicial complex, or CW complex. It is traditionally called Eilenberg obstruction theory, after Samuel Eilenberg. It involves cohomology groups with coefficients in homotopy groups to define obstructions to extensions. For example, with a mapping from a simplicial complex X to another, Y, defined initially on the 0-skeleton of X (the vertices of X), an extension to the 1-skeleton will be possible whenever the image of the 0-skeleton will belong to the same path-connected component of Y. Extending from the 1-skeleton to the 2-skeleton means defining the mapping on each solid triangle from X, given the mapping already defined on its boundary edges. Likewise, then extending the mapping to the 3-skeleton involves extending the mapping to each solid 3-simplex of X, given the mapping already defined on its boundary. At some point, say extending the mapping from the (n−1)-skeleton of X to the n-skeleton of X, this procedure might be impossible. In that case, one can assign to each n-simplex the homotopy class of the mapping already defined on its boundary (at least one of which will be non-zero). These assignments define an n-cochain with coefficients in π_{n−1}(Y). Amazingly, this cochain turns out to be a cocycle and so defines a cohomology class in the nth cohomology group of X with coefficients in π_{n−1}(Y). When this cohomology class is equal to 0, it turns out that the
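In standard notation (the symbols were stripped from the extract above), the extension step can be summarized as follows; this is a sketch of the usual statement rather than a verbatim restoration:

```latex
% Obstruction to extending f from the (n-1)-skeleton over the n-skeleton:
% to each n-cell (or n-simplex) \sigma assign the class of f on its
% boundary sphere,
o_f(\sigma) = [\, f|_{\partial\sigma} \,] \in \pi_{n-1}(Y),
% giving an n-cochain that is in fact a cocycle, and so a class
[o_f] \in H^{n}\bigl(X;\, \pi_{n-1}(Y)\bigr).
% This class vanishes iff f can be redefined on the (n-1)-skeleton
% (rel the (n-2)-skeleton) so as to extend over the n-skeleton.
```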
https://en.wikipedia.org/wiki/Cluster%20impact%20fusion
Cluster impact fusion is a suggested method of producing practical fusion power using small clusters of heavy water molecules accelerated directly into a titanium-deuteride target. Calculations suggested that such a system enhanced the fusion cross section by many orders of magnitude. It is a particular implementation of the larger beam-target fusion concept. The idea was first reported by researchers at Brookhaven in 1989. Intrigued by recent reports of cold fusion, they attempted to study potential causes for the effect by accelerating tiny droplets of heavy water, about 25 to 1300 D2O molecules each, into a target at about 220 eV. To their surprise they immediately saw fusion effects, at a rate that was many times what any of them could explain via conventional theory. The experiment was fairly simple in concept but required an appropriate accelerator, so it was some time before other labs were able to repeat the experiments. One of the first was the University of Washington, which reported a null result in 1991. Further experiments and a review from MIT in 1992 solved the mystery: the fusion products were the result of contamination, which could be eliminated by filtering with a magnet. The Brookhaven experimenters tried this and the effect disappeared. Published references to cluster impact fusion end abruptly at that point. See also Impact fusion, which fires macrons (macroscopic particles) or other projectiles into fuel to compress and heat it
https://en.wikipedia.org/wiki/Otic%20ganglion
The otic ganglion is a small parasympathetic ganglion located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve. It is functionally associated with the glossopharyngeal nerve and innervates the parotid gland for salivation. It is one of four parasympathetic ganglia of the head and neck. The others are the ciliary ganglion, the submandibular ganglion and the pterygopalatine ganglion. Structure and relations The otic ganglion is a small (2–3 mm), oval shaped, flattened parasympathetic ganglion of a reddish-grey color, located immediately below the foramen ovale in the infratemporal fossa and on the medial surface of the mandibular nerve. It is in relation, laterally, with the trunk of the mandibular nerve at the point where the motor and sensory roots join; medially, with the cartilaginous part of the auditory tube, and the origin of the tensor veli palatini; posteriorly, with the middle meningeal artery. It surrounds the origin of the nerve to the medial pterygoid. Connections The preganglionic parasympathetic fibres originate in the inferior salivatory nucleus of the glossopharyngeal nerve. They leave the glossopharyngeal nerve by its tympanic branch and then pass via the tympanic plexus and the lesser petrosal nerve to the otic ganglion. Here, the fibers synapse and the postganglionic fibers pass by communicating branches to the auriculotemporal nerve, which conveys them to the parotid gland. They produce vasodilator and secretomotor effects. Its sympathetic root is derived from the plexus on the middle meningeal artery. It contains post-ganglionic fibers arising in the superior cervical ganglion. The fibers pass through the ganglion without relay and reach the parotid gland via the auriculotemporal nerve. They are vasomotor in function. The sensory root comes from the auriculotemporal nerve and is sensory to the parotid gland. The motor fibers supplying the medial pterygoid and the tensor ve
https://en.wikipedia.org/wiki/Pterygopalatine%20ganglion
The pterygopalatine ganglion (also known as Meckel's ganglion, the nasal ganglion, or the sphenopalatine ganglion) is a parasympathetic ganglion in the pterygopalatine fossa. It is one of four parasympathetic ganglia of the head and neck (the others being the submandibular, otic, and ciliary ganglia). It is largely innervated by the greater petrosal nerve (a branch of the facial nerve). Its postsynaptic axons project to the lacrimal glands and nasal mucosa. The flow of blood to the nasal mucosa, in particular the venous plexus of the conchae, is regulated by the pterygopalatine ganglion and heats or cools the air in the nose. Structure The pterygopalatine ganglion (of Meckel), the largest of the parasympathetic ganglia associated with the branches of the maxillary nerve, is deeply placed in the pterygopalatine fossa, close to the sphenopalatine foramen. It is triangular or heart-shaped, of a reddish-gray color, and is situated just below the maxillary nerve as it crosses the fossa. The pterygopalatine ganglion supplies the lacrimal gland, paranasal sinuses, glands of the mucosa of the nasal cavity and pharynx, the gingiva, and the mucous membrane and glands of the hard palate. It communicates anteriorly with the nasopalatine nerve. Roots It receives a sensory, a parasympathetic, and a sympathetic root. Sensory root Its sensory root is derived from two sphenopalatine branches of the maxillary nerve; their fibers, for the most part, pass directly into the palatine nerves; a few, however, enter the ganglion, constituting its sensory root. Parasympathetic root Its parasympathetic root is derived from the nervus intermedius (a part of the facial nerve) through the greater petrosal nerve. In the pterygopalatine ganglion, the preganglionic parasympathetic fibers from the greater petrosal branch of the facial nerve synapse with neurons whose postganglionic axons, vasodilator, and secretory fibers are distributed with the deep branches of the trigeminal nerve to the mucous membrane
https://en.wikipedia.org/wiki/World%20Wide%20Port%20Name
In computing, a World Wide Port Name (WWPN or WWpN) is a World Wide Name assigned to a port in a Fibre Channel fabric. Used on storage area networks (SANs), it performs a function equivalent to the MAC address in the Ethernet protocol, as it is supposed to be a unique identifier in the network. Each port on a storage device has a unique and persistent WWPN. A World Wide Node Name (WWNN or WWnN) is a World Wide Name assigned to a node (an endpoint, a device) in a Fibre Channel fabric. It is valid for the same WWNN to be seen on many different ports (different addresses) on the network, identifying the ports as multiple network interfaces of a single network node. External links Locating the WWPN for a Linux host Fibre Channel Identifiers
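Because a WWPN is simply a 64-bit World Wide Name, tooling often has to reconcile the colon-separated display form with bare hexadecimal. A small illustrative helper, not taken from the source (the example WWPN is invented):

```python
import re

def normalize_wwpn(raw: str) -> str:
    """Normalize a WWPN to the common aa:bb:cc:dd:ee:ff:00:11 form.

    A WWPN is a 64-bit World Wide Name, conventionally written as
    16 hexadecimal digits, with or without colon separators.
    Raises ValueError if the input is not a plausible WWN.
    """
    digits = re.sub(r"[:\s-]", "", raw).lower()
    if not re.fullmatch(r"[0-9a-f]{16}", digits):
        raise ValueError(f"not a 64-bit WWN: {raw!r}")
    return ":".join(digits[i:i + 2] for i in range(0, 16, 2))

print(normalize_wwpn("50060482D6F345A2"))   # -> 50:06:04:82:d6:f3:45:a2
```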
https://en.wikipedia.org/wiki/Chaining
Chaining is a type of intervention that aims to create associations between behaviors in a behavior chain. A behavior chain is a sequence of behaviors that happen in a particular order, where the outcome of the previous step in the chain serves as a signal to begin the next step in the chain. In terms of behavior analysis, a behavior chain begins with a discriminative stimulus (SD), which sets the occasion for a behavior; the outcome of that behavior serves as a reinforcer for completing the previous step and as another SD to complete the next step. This sequence repeats itself until the last step in the chain is completed and a terminal reinforcer (the outcome of a behavior chain; in brushing one's teeth, the terminal reinforcer is having clean teeth) is achieved. For example, the chain in brushing one's teeth starts with seeing the toothbrush; this sets the occasion to get toothpaste, which then leads to putting it on one's brush, brushing the sides and front of the mouth, spitting out the toothpaste, rinsing one's mouth, and finally putting away one's toothbrush. To outline behavior chains, as done in the example, a task analysis is used. Chaining is used to teach complex behaviors made of behavior chains that the current learner does not have in their repertoire. Various steps of the chain can be in the learner's repertoire, but the steps the learner does not know how to do must fall in the category of "can't do" rather than "won't do" (an issue of knowing the skill, not an issue of compliance). There are three different types of chaining that can be used: forward chaining, backward chaining, and total task chaining (not to be confused with a task analysis). Forward chaining Forward chaining is a procedure where a behavior chain is learned and completed by teaching the steps in chronological order using prompting and fading. The teacher teaches the first step by presenting a discriminative stimulus to the learner. Once they complete the first step in the
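The bookkeeping behind a task analysis and forward chaining can be sketched in a few lines of code; this is a toy model with illustrative names, using the tooth-brushing steps from the example above:

```python
# Task analysis for tooth brushing, following the example above.
TASK_ANALYSIS = [
    "get toothpaste",
    "put toothpaste on brush",
    "brush sides and front of mouth",
    "spit out toothpaste",
    "rinse mouth",
    "put toothbrush away",
]

def next_target_step(task_analysis, mastered):
    """Next step to teach under forward chaining.

    Steps are taught in chronological order: a step becomes the teaching
    target only when every earlier step is already in the learner's
    repertoire (`mastered`, a set of step names).
    """
    for step in task_analysis:
        if step not in mastered:
            return step
    return None  # chain complete: the terminal reinforcer is reached

print(next_target_step(TASK_ANALYSIS, {"get toothpaste"}))
# -> 'put toothpaste on brush'
```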
https://en.wikipedia.org/wiki/Rubbermaid
Rubbermaid is an American manufacturer and distributor of household items. A subsidiary of Newell Brands, it is best known for producing food storage containers and trash cans. It also produces sheds, step stools, closets and shelving, laundry baskets, bins, air fresheners and other household items. History Rubbermaid was founded in 1920 in Wooster, Ohio as the Wooster Rubber Company by nine businessmen. Originally, Wooster Rubber Company manufactured toy balloons. In 1933, James R. Caldwell and his wife received a patent for their blue rubber dustpan. They called their line of rubber kitchen products Rubbermaid. In 1934 Horatio Ebert saw Rubbermaid products at a New England department store, and believed such products could help his struggling Wooster Rubber. He engineered a merger of the two enterprises in July 1934. Still named the Wooster Rubber Company, the new group began to produce rubber household products under the Rubbermaid brand name. In 1984, Rubbermaid acquired Little Tikes, a toy maker. In 1985, Rubbermaid acquired competitor Gott Corporation. In 1996, Rubbermaid acquired Graco baby products. In 1999, Rubbermaid was purchased by Newell for $6 billion, and Newell changed its name to Newell Rubbermaid. Newell Rubbermaid changed its name again to the present-day Newell Brands in 2016 as part of its takeover of Jarden. In 2003, the company announced its move out of Wooster to Atlanta, Georgia; 850 manufacturing and warehouse jobs would be eliminated, and 409 office jobs would move to other locations. A Rubbermaid distribution center remained at the former headquarters for some time, until it was purchased by GOJO Industries, Inc. On November 16, 2004, Rubbermaid was used as a prime example in the PBS Frontline documentary "Is Walmart Good for America?" Timeline 1920 Wooster Rubber is launched. 1927 Horatio Ebert and Errett Grable took over managing the company from the original nine founders. 1933 Rubbermaid is launched. 1933 Firs
https://en.wikipedia.org/wiki/Fabric%20OS
In storage area networking, Fabric OS is the firmware for Brocade Communications Systems' Fibre Channel switches and Fibre Channel directors. It is also known as FOS. First generation The first generation of Fabric OS was developed on top of a VxWorks kernel and was mainly used in the Brocade Silkworm 2000 and first 3000 series on Intel i960. Even today, many production environments are still running the older generation Silkworm models. Second generation The second generation of Fabric OS was developed on a PowerPC platform, and uses MontaVista Linux, a Linux derivative with real-time performance enhancements. With the move to MontaVista, switches and directors gained hot firmware activation (firmware can be updated without downtime for the Fibre Channel fabric) and many useful diagnostic commands. In accordance with the terms of the applicable free-software licenses, Brocade provides access to the source code of the distributed free software on which Fabric OS and other Brocade software products are based. Additional licensed products Additional products for Fabric OS are offered by Brocade for a one-time fee. They are licensed for use in a single specific switch (the license key is coupled to the device's serial number). Those include: Integrated Routing Adaptive Networking: Quality of service, Ingress Rate Limiting Brocade Advanced Zoning (Free with rel 6.1.x) ISL trunking Ports on Demand Extended Fabrics (more than 10 km of switched fabric connectivity, up to 3000 km) Advanced Performance Monitoring (APM) Fabric Watch Secure Fabric OS (obsolete) VMWare VSPEX integration Versions Fabric OS 9.x 9.2: 9.1: Root Access Removal, NTP Server authentication 9.0: Traffic optimizer, Fabric congestion notification, New Web Tools (graphical UI switched from Java to Web) Fabric OS 8.x 8.2: NVMe capable + REST API 8.1: 8.0: Contains many new software features and enhancements as well as issue resolutions Fabric OS 7.x 7.4: Switch to Linux 3.10 kernel 7.3: 7.2: 7.1: 7.0: Fabric OS 6.x 6.4: 6.3: Fill
https://en.wikipedia.org/wiki/Oechsle%20scale
The Oechsle scale is a hydrometer scale measuring the density of grape must, which is an indication of grape ripeness and sugar content used in wine-making. It is named for Ferdinand Oechsle (1774–1852) and it is widely used in the German, Swiss and Luxembourgish wine-making industries. On the Oechsle scale, one degree Oechsle (°Oe) corresponds to one gram of the difference between the mass of one litre of must at 20 °C and 1 kg (the mass of 1 litre of water). For example, must with a specific mass of 1084 grams per litre has 84 °Oe. Overview The mass difference between equivalent volumes of must and water is almost entirely due to the dissolved sugar in the must. Since the alcohol in wine is produced by fermentation of the sugar, the Oechsle scale is used to predict the maximal possible alcohol content of the finished wine. This measure is commonly used to select when to harvest grapes. In the vineyard, the must density is usually measured with a refractometer: a few grapes are crushed between the fingers and the must is allowed to drip onto the glass prism of the refractometer. In countries using the Oechsle scale, the refractometer will be calibrated in Oechsle degrees, but this is an indirect reading, as the refractometer actually measures the refractive index of the grape must and translates it into Oechsle or other wine-must scales, based on their relationship with refractive index. Wine classification The Oechsle scale forms the basis of most of the German wine classification. In the highest quality category, Prädikatswein (formerly known as Qualitätswein mit Prädikat, QmP), the wine is assigned a Prädikat based on the Oechsle reading of the must. The regulations set out minimum Oechsle readings for each Prädikat, which depend on wine-growing regions and grape variety: Kabinett – 70–85 °Oe Spätlese – 76–95 °Oe Auslese – 83–105 °Oe Beerenauslese and Eiswein – 110–128 °Oe (Eiswein is made by late harvesting grapes after they have frozen on the vine and no
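The scale's arithmetic is simple enough to state as code (a minimal sketch; the function names are illustrative):

```python
def oechsle_from_density(must_density_g_per_l: float) -> float:
    """Degrees Oechsle: grams by which one litre of must at 20 degrees C
    outweighs one litre of water (1000 g)."""
    return must_density_g_per_l - 1000.0

def oechsle_from_specific_gravity(sg: float) -> float:
    """Same scale from specific gravity, e.g. 1.084 -> 84 degrees Oe."""
    return (sg - 1.0) * 1000.0

print(oechsle_from_density(1084.0))          # -> 84.0
print(oechsle_from_specific_gravity(1.084))  # -> 84.0 (up to float rounding)
```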
https://en.wikipedia.org/wiki/List%20of%20national%20animals
This is a list of countries that have officially designated one or more animals as their national animals. National animal See also State animal List of national birds Animals as heraldic charges Floral emblem National personification
https://en.wikipedia.org/wiki/Cut%20%28graph%20theory%29
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions. In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and its cut-set only consists of edges going from the source's side to the sink's side. The capacity of an s–t cut is defined as the sum of the capacity of each edge in the cut-set. Definition A cut C = (S, T) is a partition of the vertex set V of a graph G = (V, E) into two subsets S and T. The cut-set of a cut C = (S, T) is the set {(u, v) ∈ E | u ∈ S, v ∈ T} of edges that have one endpoint in S and the other endpoint in T. If s and t are specified vertices of the graph G, then an s–t cut is a cut in which s belongs to the set S and t belongs to the set T. In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut. In a weighted graph, the value or weight is defined by the sum of the weights of the edges crossing the cut. A bond is a cut-set that does not have any other cut-set as a proper subset. Minimum cut A cut is minimum if the size or weight of the cut is not larger than the size of any other cut. The illustration on the right shows a minimum cut: the size of this cut is 2, and there is no cut of size 1 because the graph is bridgeless. The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm. Maximum cut A cut is maximum if the size of the cut is not smaller than the size of any other cut. The illustration on the right shows a maximum cut: the size of the cut is equal to 5, and there is no cu
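As noted above, a minimum s–t cut can be found in polynomial time via maximum flow. The sketch below uses the Edmonds–Karp algorithm (shortest augmenting paths found by breadth-first search) and then reads the cut off the residual graph; it is an illustrative implementation, not code from the source:

```python
from collections import deque

def min_st_cut(n, edges, s, t):
    """Minimum s-t cut of a directed graph via Edmonds-Karp max flow.

    n: number of vertices (0..n-1); edges: list of (u, v, capacity).
    Returns (cut_value, S) where S is the source side of the cut.
    """
    # Residual capacities, stored as nested dicts.
    cap = [dict() for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)          # reverse residual edge

    def bfs_path():
        """Shortest augmenting path in the residual graph, as a parent map."""
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    flow = 0
    while (parent := bfs_path()) is not None:
        # Find the bottleneck along the path, then push flow along it.
        bottleneck = float("inf")
        v = t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

    # Vertices still reachable from s in the residual graph form S;
    # by the max-flow min-cut theorem, `flow` equals this cut's capacity.
    S, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for v, c in cap[u].items():
            if c > 0 and v not in S:
                S.add(v)
                stack.append(v)
    return flow, S

# Example: two edge-disjoint s-t paths of capacity 1 -> min cut value 2.
print(min_st_cut(4, [(0, 1, 1), (0, 2, 1), (1, 3, 1), (2, 3, 1)], 0, 3))
```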
https://en.wikipedia.org/wiki/List%20of%20Fibre%20Channel%20standards
Fibre Channel 2005 FC-SATA (under development) FC-PI-2 INCITS 404 2004 FC-SP ANSI INCITS 1570-D FC-GS-4 (Fibre Channel Generic Services)ANSI INCITS 387. Includes the following standards: FC-GS-2 ANSI INCITS 288 (1999) FC-GS-3 ANSI INCITS 348 (2001) FC-SW-3 INCITS 384. Includes the following standards: FC-SW INCITS 321 (1998) FC-SW-2 INCITS 355 (2001) FC-DA INCITS TR-36. Includes the following standards: FC-FLA INCITS TR-20 (1998) FC-PLDA INCITS TR-19 (1998) 2003 FC-FS INCITS 373. Includes the following standards: FC-PH ANSI X3.230 (1994) FC-PH-2 ANSI X3.297 (1997) FC-PH-3 ANSI X3.303 (1998) FC-BB-2 INCITS 372 FC-SB-3 INCITS 374. Replaces: FC-SB ANSI X3.271 (1996) FC-SB-2 INCITS 374 (2001) 2002 FC-VI INCITS 357 FC-MI INCITS/TR-30 FC-PI INCITS 352 2001 FC-SB-2 INCITS 374. Replaced by: FC-SB-3 INCITS 374 (2003) FC-SW-2 INCITS 355. Replaced by: FC-SW-3 INCITS 384 (2004) FC-GS-3 ANSI INCITS 348. Replaced by: FC-GS-4 ANSI INCITS 387 (2004) 1999 FC-AL-2 INCITS 332 FC-TAPE INCITS TR-24 FC-GS-2 ANSI INCITS 288 (1999). Replaced by: FC-GS-4 ANSI INCITS 387 (2004) 1998 FC-PH-3 ANSI X3.303. Replaced by: FC-FS INCITS 373 (2003) FC-FLA INCITS TR-20. Replaced by: FC-DA INCITS TR-36 (2004) FC-PLDA INCITS TR-19. Replaced by: FC-DA INCITS TR-36 (2004) FC-SW INCITS 321. Replaced by: FC-SW-3 INCITS 384 (2004) 1997 FC-PH-2 ANSI X3.297. Replaced by: FC-FS INCITS 373 1996 FC-SB ANSI X3.271. Replaced by: FC-SB-3 INCITS 374 FC-AL ANSI X3.272 1994 FC-PH ANSI X3.230. Replaced by: FC-FS INCITS 373 (2003) Others: FC-LS: Fibre Channel Link Services FC-HBA API for Fibre Channel HBA management FC-GS-3 CT Fibre Channel Global Services Common Transport RFCs - Transmission of IPv6, IPv4, and Address Resolution Protocol (ARP) Packets over Fibre Channel, 2006 - Transmission of IPv6 Packets over Fibre Channel (Obsoleted by: RFC 4338) - IP and ARP over Fibre Channel (Obsoleted by: RFC 4338) - Securing Block Storage Protocols over IP SNMP-related specificatio
https://en.wikipedia.org/wiki/Cricothyroid%20ligament
The cricothyroid ligament (also known as the cricothyroid membrane or cricovocal membrane) is a ligament in the neck. It connects the cricoid cartilage to the thyroid cartilage. It prevents these cartilages from moving too far apart. It is cut during an emergency cricothyrotomy to treat upper airway obstruction. Structure The cricothyroid ligament is composed of two parts: the median cricothyroid ligament along the midline (a thickening of the cricothyroid membrane). It is a flat band of white connective tissue that connects the front parts of the contiguous margins of the cricoid and thyroid cartilages. It is a thick and strong ligament, narrow above and broad below. the lateral cricothyroid ligaments on each side (these are also called the conus elasticus). Each is overlapped on either side by laryngeal muscles. The conus elasticus (which means elastic cone in Latin) is the lateral portion of the cricothyroid ligament. The lateral portions are thinner and lie close under the mucous membrane of the larynx; they extend from the upper border of the cricoid cartilage to the lower margin of the vocal ligaments, with which they are continuous. The vocal ligaments may therefore be regarded as the free borders of each conus elasticus. They extend from the vocal processes of the arytenoid cartilages to the angle of the thyroid cartilage about midway between its upper and lower borders. Relations The prelaryngeal lymph node (also known as the Delphian lymph node) sits anterior to the median cricothyroid ligament. Function The cricothyroid ligament prevents the cricoid cartilage and the thyroid cartilage from moving too far apart. Clinical significance The cricothyroid ligament is cut during an emergency cricothyrotomy. This kind of surgical intervention is necessary during airway obstruction above the level of the vocal folds. History The cricothyroid ligament is named after the two structures it connects: the cric
https://en.wikipedia.org/wiki/Donaldson%27s%20theorem
In mathematics, and especially differential topology and gauge theory, Donaldson's theorem states that a definite intersection form of a compact, oriented, smooth manifold of dimension 4 is diagonalisable. If the intersection form is positive (negative) definite, it can be diagonalized to the identity matrix (negative identity matrix) over the integers. The original version of the theorem required the manifold to be simply connected, but it was later improved to apply to 4-manifolds with any fundamental group. History The theorem was proved by Simon Donaldson. This was a contribution cited for his Fields Medal in 1986. Idea of proof Donaldson's proof utilizes the moduli space M of solutions to the anti-self-duality equations on a principal SU(2)-bundle over the four-manifold X. By the Atiyah–Singer index theorem, the dimension of the moduli space is given by dim M = 8k − 3(1 − b_1(X) + b_+(X)), where k is the second Chern number of the bundle, b_1(X) is the first Betti number of X, and b_+(X) is the dimension of the positive-definite subspace of H^2(X; R) with respect to the intersection form. When X is simply-connected with definite intersection form, possibly after changing orientation, one always has b_1(X) = 0 and b_+(X) = 0. Thus taking any principal SU(2)-bundle with k = 1, one obtains a moduli space of dimension five. This moduli space is non-compact and generically smooth, with singularities occurring only at the points corresponding to reducible connections, of which there are only finitely many. Results of Clifford Taubes and Karen Uhlenbeck show that whilst M is non-compact, its structure at infinity can be readily described. Namely, there is an open subset of M, say M_ε, such that for sufficiently small choices of parameter ε, there is a diffeomorphism M_ε ≅ X × (0, ε). The work of Taubes and Uhlenbeck essentially concerns constructing sequences of ASD connections on the four-manifold X with curvature becoming infinitely concentrated at any given single point x ∈ X. For each such point, in the limit one obtains a unique singular ASD connection, which becomes a well-defined smooth ASD connection at that point using Uhlenbec
https://en.wikipedia.org/wiki/Qubit%20field%20theory
A qubit field theory is a quantum field theory in which the canonical commutation relations involved in the quantisation of pairs of observables are relaxed. Specifically, it is a quantum field theory in which, unlike most other quantum field theories, the pair of observables is not required to always commute. Theory In many ordinary quantum field theories, constraining one observable to a fixed value results in the uncertainty of the other observable being infinite (c.f. uncertainty principle), and as a consequence there is potentially an infinite amount of information involved. In the situation of the standard position-momentum commutation (where the uncertainty principle is most commonly cited), this implies that a fixed, finite, volume of space has an infinite capacity to store information. However, Bekenstein's bound hints that the information storage capacity ought to be finite. Qubit field theory seeks to resolve this issue by removing the commutation restriction, allowing the capacity to store information to be finite; hence the name qubit, which derives from quantum-bit or quantised-bit. David Deutsch has presented a group of qubit field theories which, despite not requiring commutation of certain observables, still presents the same observable results as ordinary quantum field theory. J. Hruby has presented a supersymmetric extension.
https://en.wikipedia.org/wiki/Link%20Control%20Protocol
In computer networking, the Link Control Protocol (LCP) forms part of the Point-to-Point Protocol (PPP), within the family of Internet protocols. In setting up PPP communications, both the sending and receiving devices send out LCP packets to determine the standards of the ensuing data transmission. The protocol: checks the identity of the linked device and either accepts or rejects the device; determines the acceptable packet size for transmission; searches for errors in configuration; can terminate the link if requirements exceed the parameters. Devices cannot use PPP to transmit data over a network until the LCP packet determines the acceptability of the link, but LCP packets are embedded into PPP packets, and therefore a basic PPP connection has to be established before LCP can reconfigure it. LCP packets are carried in PPP frames with protocol code 0xC021, and their information field contains the LCP packet, which has four fields (Code, ID, Length and Data). Code: Operation requested: configure link, terminate link, and acknowledge and deny codes. Data: Parameters for the operation. External links RFC 1570: PPP LCP Extensions RFC 1661: The Point-to-Point Protocol (PPP) RFC 1663: PPP Reliable Transmission
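The four-field layout (Code, ID, Length, Data) is straightforward to parse. A minimal sketch, assuming the big-endian Length field and code numbers of RFC 1661 (the sample packet is invented):

```python
import struct

# LCP code numbers from RFC 1661.
LCP_CODES = {
    1: "Configure-Request", 2: "Configure-Ack", 3: "Configure-Nak",
    4: "Configure-Reject", 5: "Terminate-Request", 6: "Terminate-Ack",
    7: "Code-Reject", 8: "Protocol-Reject",
    9: "Echo-Request", 10: "Echo-Reply", 11: "Discard-Request",
}

def parse_lcp(packet: bytes):
    """Split an LCP packet into its fields (Code, ID, Data).

    The Length field is big-endian and covers the whole packet,
    header included.
    """
    code, ident, length = struct.unpack("!BBH", packet[:4])
    if length > len(packet):
        raise ValueError("truncated LCP packet")
    return LCP_CODES.get(code, f"code {code}"), ident, packet[4:length]

# An Echo-Request (code 9, id 1): 4-byte header, 4-byte magic number, data.
print(parse_lcp(bytes([9, 1, 0, 12, 0, 0, 0, 0, 0xDE, 0xAD, 0xBE, 0xEF])))
```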
https://en.wikipedia.org/wiki/Tarski%27s%20axioms
Tarski's axioms, due to Alfred Tarski, are an axiom set for the substantial fragment of Euclidean geometry that is formulable in first-order logic with identity and requires no set theory (i.e., that part of Euclidean geometry that is formulable as an elementary theory). Other modern axiomatizations of Euclidean geometry are Hilbert's axioms and Birkhoff's axioms. Overview Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point: From Enriques, Tarski learned of the work of Mario Pieri, an Italian geometer who was strongly influenced by Peano. Tarski preferred Pieri's system [of his Point and Sphere memoir], where the logical structure and the complexity of the axioms were more transparent. Givant then says that "with typical thoroughness" Tarski devised his system: What was different about Tarski's approach to geometry? First of all, the axiom system was much simpler than any of the axiom systems that existed up to that time. In fact the length of all of Tarski's axioms together is not much more than just one of Pieri's 24 axioms. It was the first system of Euclidean geometry that was simple enough for all axioms to be expressed in terms of the primitive notions only, without the help of defined notions. Of even greater importance, for the first time a clear distinction was made between full geometry and its elementary — that is, its first order — part. Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences, whose construction respects formal syntactical rules, and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's, Tarski's axiomatization has no primitive objects other than points, so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tars
https://en.wikipedia.org/wiki/X-ray%20microtomography
In radiography, X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. It is similar to tomography and X-ray computed tomography. The prefix micro- (symbol: µ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in creation of its synonyms high-resolution X-ray tomography, micro-computed tomography (micro-CT or µCT), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography. Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired. The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers. Working principle Imaging system Fan beam reconstruction The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. Typically used in human computed tomography systems. Cone beam reconstruction The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that late
https://en.wikipedia.org/wiki/Beam%20%28music%29
In musical notation, a beam is a horizontal or diagonal line used to connect multiple consecutive notes (and occasionally rests) to indicate rhythmic grouping. Only eighth notes (quavers) or shorter can be beamed. The number of beams is equal to the number of flags that would be present on an unbeamed note. Beaming refers to the conventions and use of beams. A primary beam connects a note group unbroken, while a secondary beam is interrupted or partially broken. Grouping Beam spans indicate rhythmic groupings, usually determined by the time signature. Therefore, beams do not usually cross bar lines or major subdivisions of bars. A single eighth note, or any faster note, is always stemmed with flags, while two or more are typically beamed in groups. In modern practice, beams may span across rests in order to make rhythmic groups clearer. In vocal music, beams were traditionally used only to connect notes sung to the same syllable. In modern practice it is more common to use standard beaming rules, while indicating multi-note syllables with slurs. Positioning Notes joined by a beam usually have all the stems pointing in the same direction (up or down). The average pitch of the notes is used to determine the direction – if the average pitch is below the middle staff-line, the stems and beams usually go above the notehead, otherwise they go below. The direction of beams usually follows the general direction of the notes it groups, slanting down if the notes go down, slanting up if the notes go up, and level if the first and last notes are the same. Feathered beaming Feathered beaming shows a gradual change in the speed of notes. It is shown with a primary straight beam and other diagonal secondary beams (that together resemble a feather, hence the name). These secondary beams suggest a gradual acceleration or deceleration from the first note value within the feathered beam to the last. (A beam getting wider from left to right shows acceleration.) The longest va
https://en.wikipedia.org/wiki/Jonathan%20Bowen
Jonathan P. Bowen FBCS FRSA (born 1956) is a British computer scientist and an Emeritus Professor at London South Bank University, where he headed the Centre for Applied Formal Methods. Prof. Bowen is also the Chairman of Museophile Limited and has been a Professor of Computer Science at Birmingham City University, Visiting Professor at the Pratt Institute (New York City), University of Westminster and King's College London, and a visiting academic at University College London. Early life and education Bowen was born in Oxford, the son of Humphry Bowen, and was educated at the Dragon School, Bryanston School, prior to his matriculation at University College, Oxford (Oxford University) where he received the MA degree in Engineering Science. Career Bowen later worked at Imperial College, London, the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science), the University of Reading, and London South Bank University. His early work was on formal methods in general, and later the Z notation in particular. He was Chair of the Z User Group from the early 1990s until 2011. In 2002, Bowen was elected Chair of the British Computer Society FACS Specialist Group on Formal Aspects of Computing Science. Since 2005, Bowen has been an Associate Editor-in-Chief of the journal Innovations in Systems and Software Engineering. He is also an associate editor on the editorial board for the ACM Computing Surveys journal, covering software engineering and formal methods. From 2008–9, he was an Associate at Praxis High Integrity Systems, working on a large industrial project using the Z notation. Bowen's other major interest is the area of online museums. In 1994, he founded the Virtual Library museums pages (VLmp), an online museums directory that was soon adopted by the International Council of Museums (ICOM). In the same year he also started the Virtual Museum of Computing. In 2002, he founded Museophile Limited to help museums, especially onl
https://en.wikipedia.org/wiki/Guatemalan%20Forensic%20Anthropology%20Foundation
The Guatemalan Forensic Anthropology Foundation (Fundación de Antropología Forense de Guatemala, or FAFG) is an autonomous, non-profit, technical and scientific non-governmental organisation. Its aim is to strengthen the administration of justice and respect for human rights by investigating, documenting, and raising awareness about past instances of human rights violations, particularly unresolved murders, that occurred during Guatemala's 36-year-long Civil War. Its main tool in pursuing this goal is the application of forensic anthropology techniques in exhumations of clandestine mass graves. Its endeavours in this regard allow the relatives of the disappeared to recover the remains of their missing family members and to proceed with burials in accordance with their beliefs, and enable criminal prosecutions to be brought against the perpetrators. History In 1990 and 1991, various groups of survivors began to report to the authorities the existence of clandestine graves in their communities, most of which contained the bodies of Maya campesinos massacred during the "scorched earth" policy pursued by the government in the early 1980s. The forensic services of the Guatemalan judiciary began to investigate some of these cases, but they failed to pursue them to their conclusion. Consequently, in 1991, the survivors' groups contacted Dr. Clyde Snow, a renowned U.S. forensic anthropologist who had previously overseen exhumations in Argentina in the wake of that country's Dirty War and had helped found the Argentine Forensic Anthropology Team. Snow arrived in Guatemala, accompanied by forensic anthropologists from Argentina and Chile, and began the dual task of conducting the first exhumations and training the future members of the Guatemalan Forensic Anthropology Team (Equipo de Antropología Forense de Guatemala). The Team was supported in its early years by a donation from the American Association for the Advancement of Science of the United States, and its first director was Stefan Schmitt, who has sinc
https://en.wikipedia.org/wiki/Chorda%20tympani
Chorda tympani is a branch of the facial nerve that carries gustatory (taste) sensory innervation from the front of the tongue and parasympathetic (secretomotor) innervation to the submandibular and sublingual salivary glands. Chorda tympani has a complex course from the brainstem, through the temporal bone and middle ear, into the infratemporal fossa, and ending in the oral cavity. Structure Chorda tympani fibers emerge from the pons of the brainstem as part of the intermediate nerve of the facial nerve. The facial nerve exits the cranial cavity through the internal acoustic meatus and enters the facial canal. Within the facial canal, chorda tympani branches off the facial nerve and enters the lateral wall of the tympanic cavity within the middle ear, where it runs across the tympanic membrane (from posterior to anterior) and medial to the neck of the malleus. Chorda tympani then exits the skull by descending through the petrotympanic fissure into the infratemporal fossa. Here it joins the lingual nerve, a branch of the mandibular nerve (CN V3). Traveling with the lingual nerve, the fibers of chorda tympani enter the sublingual space to reach the anterior 2/3 of the tongue and submandibular ganglion. The special sensory fibers originate from the taste buds in the anterior 2/3 of the tongue and carry taste information to the nucleus of solitary tract of the brainstem, where taste information from facial, glossopharyngeal, and vagus nerves is integrated. The preganglionic parasympathetic fibers originate in the superior salivary nucleus of the brainstem and project to the submandibular ganglion to synapse with postganglionic fibers which go on to innervate the submandibular and sublingual salivary glands. Function The chorda tympani carries two types of nerve fibers from their origin from the facial nerve to the lingual nerve that carries them to their destinations: Special sensory fibers providing taste sensation from the anterior two-thirds of the tongue. Pr
https://en.wikipedia.org/wiki/Fourier-transform%20ion%20cyclotron%20resonance
Fourier-transform ion cyclotron resonance mass spectrometry is a type of mass analyzer (or mass spectrometer) for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap (a magnetic field with electric trapping plates), where they are excited (at their resonant cyclotron frequencies) to a larger cyclotron radius by an oscillating electric field orthogonal to the magnetic field. After the excitation field is removed, the ions are rotating at their cyclotron frequency in phase (as a "packet" of ions). These ions induce a charge (detected as an image current) on a pair of electrodes as the packets of ions pass close to them. The resulting signal is called a free induction decay (FID), transient or interferogram that consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum. History FT-ICR was invented by Melvin B. Comisarow and Alan G. Marshall at the University of British Columbia. The first paper appeared in Chemical Physics Letters in 1974. The inspiration was earlier developments in conventional ICR and Fourier-transform nuclear magnetic resonance (FT-NMR) spectrometry. Marshall has continued to develop the technique at The Ohio State University and Florida State University. Theory The physics of FTICR is similar to that of a cyclotron at least in the first approximation. In the simplest idealized form, the relationship between the cyclotron frequency and the mass-to-charge ratio is given by f = qB / (2πm), where f = cyclotron frequency, q = ion charge, B = magnetic field strength and m = ion mass. This is more often represented in angular frequency: ω = qB / m, where ω is the angular cyclotron frequency, which is related to frequency by the definition ω = 2πf. Because of the quadrupolar electrical field used to trap the ions in the axial direction, this relationship is only approximate. The axial ele
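Inverting the ideal-trap relation converts an observed frequency into a mass-to-charge ratio. A minimal sketch assuming the idealized f = qB/(2πm) above and ignoring the trapping-field correction just mentioned (the example frequency is illustrative):

```python
import math

AMU = 1.66053906660e-27      # kg, unified atomic mass unit
E_CHARGE = 1.602176634e-19   # C, elementary charge

def mass_to_charge_kg_per_c(frequency_hz: float, b_tesla: float) -> float:
    """Invert the ideal relation f = qB / (2*pi*m) to obtain m/q."""
    return b_tesla / (2.0 * math.pi * frequency_hz)

# A singly charged ion observed at ~107.5 kHz in a 7 T magnet,
# converted from kg/C to thomson (daltons per elementary charge):
mz = mass_to_charge_kg_per_c(107.5e3, 7.0) / (AMU / E_CHARGE)
print(f"{mz:.1f} m/z")   # -> about 1000 m/z
```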
https://en.wikipedia.org/wiki/Reid%20W.%20Barton
Reid William Barton (born May 6, 1983) is a mathematician and also one of the most successful performers in the International Science Olympiads. Biography Barton is the son of two environmental engineers. Barton took part-time classes at Tufts University in chemistry (5th grade), physics (6th grade), and subsequently Swedish, Finnish, French, and Chinese. Since eighth grade he worked part-time with MIT computer scientist Charles E. Leiserson on CilkChess, a computer chess program. Subsequently, he worked at Akamai Technologies with computer scientist Ramesh Sitaraman to build one of the earliest video performance measurement systems that have since become a standard in industry. After Akamai, Barton went to grad school at Harvard to pursue a Ph.D. in mathematics, which he completed in 2019 under the supervision of Michael J. Hopkins. Afterwards, he did research as a post-doctoral fellow at Pittsburgh. As of November 2021 he sits on the committee for the Mathematical and programming competitions Barton was the first student to win four gold medals at the International Mathematical Olympiad, culminating in full marks at the 2001 Olympiad held in Washington, D.C., shared with Gabriel Carroll, Xiao Liang and Zhang Zhiqiang. Barton is one of seven people to have placed among the five top ranked competitors (who are themselves not ranked against each other) in the William Lowell Putnam Competition four times (2001–2004). Barton was a member of the MIT team which finished second in 2001 and first in 2003 and 2004. Barton has won two gold medals at the International Olympiad in Informatics. In 2001 he finished first with 580 points out of 600, 55 ahead of his nearest competitor, the largest margin in IOI history at the time. Barton was a member of the 2nd and 5th place MIT team at the ACM International Collegiate Programming Contest, and reached the finals in the Topcoder Open (2004), semi-finals (2003, 2006), the TopCoder Collegiate Challenge (2004), semi-finals (2
https://en.wikipedia.org/wiki/Startle%20response
In animals, including humans, the startle response is a largely unconscious defensive response to sudden or threatening stimuli, such as sudden noise or sharp movement, and is associated with negative affect. Usually the onset of the startle response is a startle reflex reaction. The startle reflex is a reflex mediated by the brainstem that serves to protect vulnerable parts, such as the back of the neck (whole-body startle) and the eyes (eyeblink), and facilitates escape from sudden stimuli. It is found across many different species, throughout all stages of life. A variety of responses may occur depending on the affected individual's emotional state, body posture, preparation for execution of a motor task, or other activities. The startle response is implicated in the formation of specific phobias. Startle reflex Neurophysiology A startle reflex can occur in the body through a combination of actions. A reflex from hearing a sudden loud noise will happen in the primary acoustic startle reflex pathway consisting of three main central synapses, or signals that travel through the brain. First, there is a synapse from the auditory nerve fibers in the ear to the cochlear root neurons (CRN). These are the first acoustic neurons of the central nervous system. Studies have shown that the decrease in startle correlates directly with the number of CRNs that were killed. Second, there is a synapse from the CRN axons to the cells in the nucleus reticularis pontis caudalis (PnC) of the brain. These are neurons that are located in the pons of the brainstem. A study that disrupted this portion of the pathway by injecting inhibitory chemicals into the PnC showed a dramatic decrease in startle, of about 80 to 90 percent. Third, a synapse occurs from the PnC axons to the motor neurons in the facial motor nucleus or the spinal cord that will directly or indirectly control the movement of muscles. The activation of the facial motor nucleus causes a j
https://en.wikipedia.org/wiki/Aluminide
An aluminide is a compound that has aluminium with other elements. Since aluminium is near the nonmetals on the periodic table, it can bond with metals differently from other metals. The properties of an aluminide are between those of a metal alloy and those of an ionic compound. Examples Magnesium aluminide, MgAl Titanium aluminide, TiAl Iron aluminides, including Fe3Al and FeAl Nickel aluminide, Ni3Al See category for a list.
https://en.wikipedia.org/wiki/Spaceland%20%28novel%29
Spaceland is a science fiction novel by American mathematician and computer scientist Rudy Rucker, published in 2002 by Tor Books. In a tribute to Edwin Abbott's Flatland, a classic mathematical fantasy about a 2-dimensional being (A. Square) who receives a surprise visit from a higher-dimensional sphere, Rudy Rucker's Spaceland describes the life of Joe Cube, an average, modern-day Silicon Valley hotshot who one day discovers the fourth dimension from an unexpected visitation. Plot summary Joe Cube is a high tech executive waiting for his company's IPO. On the New Year's Eve before the new millennium, trying to impress his wife Jena, he brings home a prototype of his company's new product (a TV screen that turns standard television broadcasting into a 3D image). It brings no warmth to their cooling marriage, but it does attract the attention of somebody else. Joe is suddenly contacted by Momo, a woman from the fourth dimension, which she calls the All, and of which our entire world (which she calls Spaceland) is like nothing but the thin surface of a rug. Momo has a business proposition for Joe that she won't let him refuse. She is bent on making him start a company that will create a specific product that she will supply. The upside potential becomes much clearer for Joe once Momo "augments" him, by helping him grow a new eye on a 4D stalk, giving him the power to see in four-dimensional directions, as well as the ability to see into our dimension using a four-dimension perspective. Reception Strange Horizons felt that Joe's adventures were "thought-provoking", and compared the book positively to Ian Stewart's Flatterland, but faulted it for lacking in mathematical rigor. The A.V. Club considered it "fun yet thoughtful" and "unusually sedate", but criticized Rucker for his characterization. Publishers Weekly called it "a hilarious tribute" to Flatland; Kirkus Reviews, however, found it to be "not funny, not fascinating" and "for fans only", and the Notices of the
https://en.wikipedia.org/wiki/DEC%20RADIX%2050
RADIX 50 or RAD50 (also referred to as RADIX50, RADIX-50 or RAD-50), is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers. RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15), or three characters into one 16-bit word (PDP-11, VAX). The actual encoding differs between the 36-bit and 16-bit systems. 36-bit systems In 36-bit DEC systems RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol. For its similarities to the SQUOZE encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze, however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word. RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word. 18-bit systems RADIX 50 (also called Radix 50₈ format) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits"). 16-bit systems Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40). In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position. For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a wor
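The 16-bit packing rule can be sketched as follows. The character set shown matches common PDP-11 documentation, but the assignment of slot 29 varies between sources and should be treated as an assumption:

```python
# PDP-11 style RADIX 50: 40-character alphabet, three characters per
# 16-bit word, first character in the most significant position.
# Slot 29 differs between DEC documents (unused or '%'); '%' is assumed.
RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_encode(text: str) -> list[int]:
    """Pack an uppercase string into 16-bit words, three chars per word."""
    # Pad with spaces (code 0) up to a multiple of three characters.
    text = text.upper().ljust(-(-len(text) // 3) * 3)
    words = []
    for i in range(0, len(text), 3):
        a, b, c = (RAD50_CHARS.index(ch) for ch in text[i:i + 3])
        words.append(a * 40 * 40 + b * 40 + c)
    return words

# "ABCDEF" has character values 1..6, giving the two words
# 1*1600 + 2*40 + 3 and 4*1600 + 5*40 + 6:
print([oct(w) for w in rad50_encode("ABCDEF")])   # ['0o3223', '0o14716']
```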
https://en.wikipedia.org/wiki/Sleep%20mode
Sleep mode (or suspend to RAM) is a low power mode for electronic devices such as computers, televisions, and remote controlled devices. These modes save significantly on electrical consumption compared to leaving a device fully on and, upon resume, allow the user to avoid having to reissue instructions or wait for a machine to boot. Many devices signify this power mode with a pulsing or red-colored LED power light.

Computers
In computers, entering a sleep state is roughly equivalent to "pausing" the state of the machine. When restored, the operation continues from the same point, with the same applications and files open.

Sleep
Sleep mode has gone by various names, including Stand By, Suspend and Suspend to RAM. Machine state is held in RAM; when placed in sleep mode, the computer cuts power to unneeded subsystems and places the RAM into a minimum power state, just sufficient to retain its data. Because of the large power saving, most laptops automatically enter this mode when the computer is running on batteries and the lid is closed. If undesired, the behavior can be altered in the operating system settings of the computer. A computer must consume some energy while sleeping in order to power the RAM and to be able to respond to a wake-up event. A sleeping PC is on standby power, which is covered by regulations in many countries; in the United States, for example, such power has been limited under the One Watt Initiative since 2010. In addition to a wake-up press of the power button, PCs can also respond to other wake cues, such as input from a keyboard or mouse, an incoming telephone call on a modem, or a local area network signal.

Hibernation
Hibernation, also called Suspend to Disk on Linux, saves all computer operational data to the fixed disk before turning the computer off completely. On switching the computer back on, the computer is restored to its state prior to hibernation, with all programs and files open and unsaved data intact. In contrast with standby mode, hibernation requires no power to maintain the saved state, since it is written to non-volatile storage.
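On Linux, the sleep states described above are exposed through the kernel's sysfs interface: reading /sys/power/state lists the supported states, and writing one of them (as root) enters it. A minimal Python sketch, assuming a standard sysfs layout:

    def supported_sleep_states():
        # Returns e.g. ['freeze', 'mem', 'disk'];
        # 'mem' is suspend-to-RAM, 'disk' is hibernation.
        with open("/sys/power/state") as f:
            return f.read().split()

    def suspend_to_ram():
        if "mem" not in supported_sleep_states():
            raise RuntimeError("suspend-to-RAM not supported on this machine")
        with open("/sys/power/state", "w") as f:  # requires root privileges
            f.write("mem")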
https://en.wikipedia.org/wiki/Push%20Access%20Protocol
Push Access Protocol (or PAP) is a protocol defined in WAP-164 of the Wireless Application Protocol (WAP) suite from the Open Mobile Alliance. PAP is used for communicating with the Push Proxy Gateway, which is usually part of a WAP Gateway. PAP is intended for use in delivering content from Push Initiators to Push Proxy Gateways for subsequent delivery to narrow-band devices, including mobile phones and pagers. Example messages include news, stock quotes, weather, traffic reports, and notification of events such as email arrival. With Push functionality, users are able to receive information without having to request it; in many cases it is important for the user to get the information as soon as it is available. The Push Access Protocol is not intended for use over the air. PAP is designed to be independent of the underlying transport protocol. PAP specifies the following possible operations between the Push Initiator and the Push Proxy Gateway:

Submit a Push
Cancel a Push
Query for status of a Push
Query for wireless device capabilities
Result notification

The interaction between the Push Initiators and the Push Proxy Gateways is in the form of XML messages.

Operations

Push Submission
The purpose of the Push Submission is to deliver a Push message from a Push Initiator to a PPG, which should then deliver the message to a user agent in a device on the wireless network. The Push message contains a control entity and a content entity, and MAY contain a capabilities entity. The control entity is an XML document that contains control information (push-message) for the PPG to use in processing the message for delivery. The content entity represents content to be sent to the wireless device. The capabilities entity contains client capabilities assumed by the Push Initiator and is in the RDF [RDF] format as defined in the User Agent Profile [UAPROF]. The PPG MAY use the capabilities information to validate that the message is appropriate for the client.
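As an illustration of the XML interaction described above, the following sketch builds a control entity for a push submission. The element and attribute names (pap, push-message, address, push-id, address-value) follow the PAP DTD as commonly documented, but the push identifier, recipient address and deadline here are hypothetical; in a real submission this document travels as one part of a multipart HTTP POST to the PPG, alongside the content entity.

    import xml.etree.ElementTree as ET

    pap = ET.Element("pap")
    msg = ET.SubElement(pap, "push-message", {
        "push-id": "example-push-0001",                      # hypothetical identifier
        "deliver-before-timestamp": "2025-01-01T00:00:00Z",  # hypothetical deadline
    })
    ET.SubElement(msg, "address", {
        # Hypothetical recipient in the WAPPUSH addressing format.
        "address-value": "WAPPUSH=+15551234567/TYPE=PLMN@ppg.example.com",
    })

    control_entity = ET.tostring(pap, encoding="unicode")
    print(control_entity)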
https://en.wikipedia.org/wiki/Conditional%20access
Conditional access (CA) is a term commonly used in relation to software and to digital television systems. Conditional access is a "just-in-time" evaluation ensuring that the person seeking access to content is authorized to access that content. In other words, conditional access is a type of access management: access is managed by requiring certain criteria to be met before it is granted.

In software
Conditional access is a function that lets an organization manage people's access to software such as email, applications, and documents. It is usually offered as SaaS (Software-as-a-Service) and deployed in organizations to keep company data safe. By setting conditions on access to this data, the organization has more control over who accesses the data, and where and in what way the information is accessed. When setting up conditional access, access can be limited or prevented based on the policy defined by the system administrator. For example, a policy might allow access only from certain networks, or block access when a specific web browser is requesting it. A minimal sketch of such a policy check follows the next section.

In digital television
Under the Digital Video Broadcasting (DVB) standard, conditional access system (CAS) standards are defined in the specification documents for DVB-CA (conditional access), DVB-CSA (the common scrambling algorithm) and DVB-CI (the Common Interface). These standards define a method by which one can obfuscate a digital-television stream, with access provided only to those with valid decryption smart cards. The DVB specifications for conditional access are available from the standards page on the DVB website. This is achieved by a combination of scrambling and encryption. The data stream is scrambled with a 48-bit secret key, called the control word. Knowing the value of the control word at a given moment is of relatively little value, as under normal conditions content providers will change the control word several times per minute.
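The software form of conditional access amounts to evaluating administrator-defined conditions at request time, as described in the "In software" section above. A minimal Python sketch, with hypothetical policy fields (an allowed corporate network and a blocked browser name):

    from ipaddress import ip_address, ip_network

    POLICY = {
        "allowed_networks": [ip_network("10.0.0.0/8")],  # hypothetical corporate network
        "blocked_browsers": {"LegacyBrowser"},           # hypothetical browser name
    }

    def is_access_granted(source_ip, browser):
        # Grant access only from an allowed network and a non-blocked browser.
        on_allowed_network = any(ip_address(source_ip) in net
                                 for net in POLICY["allowed_networks"])
        return on_allowed_network and browser not in POLICY["blocked_browsers"]

    print(is_access_granted("10.1.2.3", "Firefox"))     # True: inside the network
    print(is_access_granted("203.0.113.7", "Firefox"))  # False: outside the network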
https://en.wikipedia.org/wiki/Josh%20Fisher
Joseph A. "Josh" Fisher is an American and Spanish computer scientist noted for his work on VLIW architectures, compiling, and instruction-level parallelism, and for founding Multiflow Computer. He is a Hewlett-Packard Senior Fellow (Emeritus).

Biography
Fisher holds a BA (1968) in mathematics (with honors) from New York University and obtained a master's degree and a PhD (1979) in computer science from the Courant Institute of Mathematical Sciences of New York University. Fisher joined the Yale University Department of Computer Science in 1979 as an assistant professor and was promoted to associate professor in 1983. In 1984 Fisher left Yale to found Multiflow Computer with Yale colleagues John O'Donnell and John Ruttenberg. Fisher joined HP Labs upon the closing of Multiflow in 1990. He directed HP Labs in Cambridge, Massachusetts, from its founding in 1994, and became an HP Fellow (2000) and then Senior Fellow (2002) upon the inception of those titles at Hewlett-Packard. Fisher retired from HP Labs in 2006. Fisher is married (1967) to Elizabeth Fisher; they have a son, David Fisher, and a daughter, Dora Fisher. He holds Spanish citizenship due to his Sephardic heritage.

Work

Trace Scheduling
In his PhD dissertation, Fisher created the trace scheduling compiler algorithm and coined the term instruction-level parallelism to characterize VLIW, superscalar, dataflow and other architecture styles that involve fine-grained parallelism among simple machine-level instructions. Trace scheduling was the first practical algorithm to find large amounts of parallelism between instructions that occupied different basic blocks. This greatly increased the potential speed-up for instruction-level parallel architectures.

The VLIW architecture style
Because of the difficulty of applying trace scheduling to idiosyncratic systems (such as 1970s-era DSPs) that in theory should have been suitable targets for a trace scheduling compiler, Fisher put forward the VLIW architectural style. VLIW processors encode several independent operations in a single, very long instruction word, leaving it to the compiler rather than the hardware to find and schedule the parallelism.
https://en.wikipedia.org/wiki/Boiling-point%20elevation
Boiling-point elevation describes the phenomenon that the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than the pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope.

Explanation
The boiling-point elevation is a colligative property, which means that it is dependent on the presence of dissolved particles and their number, but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling-point elevation happens both when the solute is an electrolyte, such as various salts, and when it is a nonelectrolyte. In thermodynamic terms, the origin of the boiling-point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter the gas phase (except at extremely high temperatures).

Put in vapor-pressure terms, a liquid boils at the temperature at which its vapor pressure equals the surrounding pressure. For the solvent, the presence of the solute decreases its vapor pressure by dilution. A nonvolatile solute has a vapor pressure of zero, so the vapor pressure of the solution is less than the vapor pressure of the pure solvent. Thus, a higher temperature is needed for the vapor pressure to reach the surrounding pressure, and the boiling point is elevated.

Put in chemical-potential terms, at the boiling point the liquid phase and the gas (or vapor) phase have the same chemical potential (or vapor pressure), meaning that they are energetically equivalent. The chemical potential is dependent on the temperature.
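Quantitatively, the effect is usually written as ΔTb = i · Kb · b, where Kb is the solvent's ebullioscopic constant, b the molality of the solute, and i the van 't Hoff factor (the number of dissolved particles each formula unit produces). A short worked example in Python for water, whose ebullioscopic constant is approximately 0.512 K·kg/mol:

    KB_WATER = 0.512  # ebullioscopic constant of water, K*kg/mol

    def boiling_point_elevation(molality, van_t_hoff=1, kb=KB_WATER):
        # dTb = i * Kb * b
        return van_t_hoff * kb * molality

    # 0.5 mol/kg NaCl in water; NaCl dissociates into two ions, so i is about 2.
    print(boiling_point_elevation(0.5, van_t_hoff=2))  # ~0.51 K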
https://en.wikipedia.org/wiki/Storage%20Management%20Initiative%20%E2%80%93%20Specification
The Storage Management Initiative Specification, commonly called SMI-S, is a computer data storage management standard developed and maintained by the Storage Networking Industry Association (SNIA). It has also been ratified as an ISO standard. SMI-S is based upon the Common Information Model and the Web-Based Enterprise Management standards defined by the Distributed Management Task Force, which define management functionality via HTTP. The most recent approved version of SMI-S is available on the SNIA website. The main objective of SMI-S is to enable broad, interoperable management of heterogeneous storage vendor systems. The current version is SMI-S 1.8.0 Rev 5. Over 1,350 storage products are certified as conformant to SMI-S.

Basic concepts
SMI-S defines CIM management profiles for storage systems. The entire SMI Specification is categorized into profiles and subprofiles. A profile describes the behavioral aspects of an autonomous, self-contained management domain. SMI-S includes profiles for Arrays, Switches, Storage Virtualizers, Volume Management and several other management domains. In DMTF parlance, an SMI-S provider is an implementation of a specific profile or set of profiles. A subprofile describes a part of a management domain and can be a common part of more than one profile. At a very basic level, SMI-S entities are divided into two categories:

Clients are management software applications that can reside virtually anywhere within a network, provided they have a communications link (either within the data path or outside the data path) to providers.
Servers are the devices under management. Servers can be disk arrays, virtualization engines, host bus adapters, switches, tape drives, etc.

SMI-S timeline
2000 – A collection of computer storage industry leaders, led by Roger Reich, begins building an interoperable management backbone for storage and storage networks (named Bluefin) in a small consortium called the Partner Development Process.
2002 – Bluefin is turned over to the SNIA, where it becomes the basis of SMI-S.
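In practice, an SMI-S client talks CIM-XML over HTTP(S) to a provider. A minimal sketch using pywbem, a widely used Python WBEM client library; the provider host, credentials and namespace below are hypothetical, though SMI-S providers conventionally listen on ports 5988 (HTTP) or 5989 (HTTPS) and expose an "interop" namespace:

    import pywbem

    # Connect to a (hypothetical) SMI-S provider.
    conn = pywbem.WBEMConnection(
        "https://smi-provider.example.com:5989",
        creds=("admin", "password"),          # hypothetical credentials
        default_namespace="interop",
    )

    # Enumerate the managed systems the provider exposes.
    for system in conn.EnumerateInstances("CIM_ComputerSystem"):
        print(system["Name"])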