https://en.wikipedia.org/wiki/Basal%20%28phylogenetics%29
In phylogenetics, basal is the direction of the base (or root) of a rooted phylogenetic tree or cladogram. The term may be more strictly applied only to nodes adjacent to the root, or more loosely applied to nodes regarded as being close to the root. Note that extant taxa that lie on branches connecting directly to the root are not more closely related to the root than any other extant taxa. While there must always be two or more equally "basal" clades sprouting from the root of every cladogram, those clades may differ widely in taxonomic rank, species diversity, or both. If C is a basal clade within D that has the lowest rank of all basal clades within D, C may be described as the basal taxon of that rank within D. The concept of a 'key innovation' implies some degree of correlation between evolutionary innovation and diversification. However, such a correlation does not make a given case predictable, so ancestral characters should not be imputed to the members of a less species-rich basal clade without additional evidence. In general, clade A is more basal than clade B if B is a subgroup of the sister group of A or of A itself. Within large groups, "basal" may be used loosely to mean 'closer to the root than the great majority of', and in this context terminology such as "very basal" may arise. A 'core clade' is a clade representing all but the basal clade(s) of lowest rank within a larger clade; e.g., core eudicots. Of course, no extant taxon is closer to the root than any other, by definition. Usage A basal group in the stricter sense forms a sister group to the rest of the larger clade, as in the following case: While it is easy to identify a basal clade in such a cladogram, the appropriateness of such an identification is dependent on the accuracy and completeness of the diagram. It is often assumed in this example that the terminal branches of the cladogram depict all the extant taxa of a given rank within the clade; this is one reason the term basal is hi
https://en.wikipedia.org/wiki/Purine%20metabolism
Purine metabolism refers to the metabolic pathways to synthesize and break down purines that are present in many organisms. Biosynthesis Purines are biologically synthesized as nucleotides and in particular as ribotides, i.e. bases attached to ribose 5-phosphate. Both adenine and guanine are derived from the nucleotide inosine monophosphate (IMP), which is the first compound in the pathway to have a completely formed purine ring system. IMP Inosine monophosphate is synthesized on a pre-existing ribose-phosphate through a complex pathway. The carbon and nitrogen atoms of the purine ring, 5 and 4 respectively, come from multiple sources. The amino acid glycine contributes all its carbon (2) and nitrogen (1) atoms, with additional nitrogen atoms from glutamine (2) and aspartic acid (1), and additional carbon atoms from formyl groups (2), which are transferred from the coenzyme tetrahydrofolate as 10-formyltetrahydrofolate, and a carbon atom from bicarbonate (1). Formyl groups build carbon-2 and carbon-8 in the purine ring system, which are the ones acting as bridges between two nitrogen atoms. A key regulatory step is the production of 5-phospho-α-D-ribosyl 1-pyrophosphate (PRPP) by ribose phosphate pyrophosphokinase, which is activated by inorganic phosphate and inactivated by purine ribonucleotides. It is not the committed step to purine synthesis because PRPP is also used in pyrimidine synthesis and salvage pathways. The first committed step is the reaction of PRPP, glutamine and water to 5'-phosphoribosylamine (PRA), glutamate, and pyrophosphate - catalyzed by amidophosphoribosyltransferase, which is activated by PRPP and inhibited by AMP, GMP and IMP. PRPP + L-Glutamine + H2O → PRA + L-Glutamate + PPi In the second step, PRA, glycine and ATP react to create GAR, ADP, and pyrophosphate - catalyzed by phosphoribosylamine—glycine ligase (GAR synthetase). Due to the chemical lability of PRA, which has a half-li
https://en.wikipedia.org/wiki/Xanthosine%20monophosphate
Xanthosine monophosphate, also called xanthylate, is an intermediate in purine metabolism. It is a ribonucleoside monophosphate. It is formed from IMP via the action of IMP dehydrogenase, and it forms GMP via the action of GMP synthase. XMP can also be released from XTP by the enzyme deoxyribonucleoside triphosphate pyrophosphohydrolase, which has (d)XTPase activity. It is abbreviated XMP. See also Xanthosine
https://en.wikipedia.org/wiki/FITkit
FITkit is an immunological test for measuring natural rubber latex (NRL) allergens from a variety of rubber products, such as gloves. Description FITkit is a method for quantification of the major natural rubber latex (NRL) specific allergens: Hev b 1, Hev b 3, Hev b 5 and Hev b 6.02. The sum of four major allergens shows the allergenic potential of NRL products like gloves, condoms, teats, balloons, etc. These tests are based on the enzyme immunometric assay technique and use specific monoclonal antibodies developed against the clinically relevant latex allergens present in NRL products. FITkit is also known under the scientific names EIA (enzyme immunoassay) and IEMA (immuno-enzymometric assay). The main value of FITkit technology is the focus only on those NRL allergens that are responsible for the majority of NRL sensitivity and allergy cases. Based on FITkit results, the allergenicity potential of the tested product can be easily assessed. FITkit technology is compliant with the ASTM International standard D7427-08. FITkit is a trademark of Icosagen AS (formerly Quattromed Ltd).
https://en.wikipedia.org/wiki/Adenylosuccinate
Adenylosuccinate is an intermediate in the interconversion of purine nucleotides inosine monophosphate (IMP) and adenosine monophosphate (AMP). The enzyme adenylosuccinate synthase carries out the reaction by the addition of aspartate to IMP and requires the input of energy from a phosphoanhydride bond in the form of guanosine triphosphate (GTP). GTP is used instead of adenosine triphosphate (ATP), so the reaction is not dependent on its products. See also Adenylosuccinate lyase deficiency Purine nucleotide cycle
https://en.wikipedia.org/wiki/Software%20product%20line
Software product lines (SPLs), or software product line development, refers to software engineering methods, tools and techniques for creating a collection of similar software systems from a shared set of software assets using a common means of production. The Carnegie Mellon Software Engineering Institute defines a software product line as "a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way." Description Manufacturers have long employed analogous engineering techniques to create a product line of similar products using a common factory that assembles and configures parts designed to be reused across the product line. For example, automotive manufacturers can create unique variations of one car model using a single pool of carefully designed parts and a factory specifically designed to configure and assemble those parts. The characteristic that distinguishes software product lines from previous efforts is predictive versus opportunistic software reuse. Rather than put general software components into a library in the hope that opportunities for reuse will arise, software product lines only call for software artifacts to be created when reuse is predicted in one or more products in a well defined product line. Recent advances in the software product line field have demonstrated that narrow and strategic application of these concepts can yield order of magnitude improvements in software engineering capability. The result is often a discontinuous jump in competitive business advantage, similar to that seen when manufacturers adopt mass production and mass customization paradigms. Development While early software product line methods at the genesis of the field provided the best software engineering improvement metrics seen in four decades, the latest generation of software product line me
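The "predictive reuse" idea can be illustrated with a minimal Python sketch (purely illustrative; the asset, feature and product names are hypothetical and not tied to any particular SPL tooling): product variants are assembled from a managed set of core assets plus features that were planned for reuse in advance.

    # Hypothetical illustration of predictive reuse in a software product line:
    # shared core assets plus per-product selections from planned features.
    CORE_ASSETS = {"engine": "shared rendering engine", "net": "shared network stack"}
    PLANNED_FEATURES = {
        "telemetry": "opt-in usage reporting module",
        "offline_mode": "local cache and sync module",
    }

    def build_product(name, selected_features):
        """Assemble a product variant from core assets and planned features."""
        unknown = set(selected_features) - set(PLANNED_FEATURES)
        if unknown:
            raise ValueError(f"features not planned for this product line: {unknown}")
        return {"name": name, "assets": dict(CORE_ASSETS),
                "features": {f: PLANNED_FEATURES[f] for f in selected_features}}

    basic = build_product("Viewer Basic", ["offline_mode"])
    pro = build_product("Viewer Pro", ["offline_mode", "telemetry"])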
https://en.wikipedia.org/wiki/Lesser%20Antillean%20macaw
The Lesser Antillean macaw or Guadeloupe macaw (Ara guadeloupensis) is a hypothetical extinct species of macaw that is thought to have been endemic to the Lesser Antillean island region of Guadeloupe. In spite of the absence of conserved specimens, many details about the Lesser Antillean macaw are known from several contemporary accounts, and the bird is the subject of some illustrations. Austin Hobart Clark described the species on the basis of these accounts in 1905. Due to the lack of physical remains, and the possibility that sightings were of macaws from the South American mainland, doubts have been raised about the existence of this species. A phalanx bone from the island of Marie-Galante confirmed the existence of a similar-sized macaw inhabiting the region prior to the arrival of humans and was correlated with the Lesser Antillean macaw in 2015. Later that year, historical sources distinguishing between the red macaws of Guadeloupe and the scarlet macaw (A. macao) of the mainland were identified, further supporting its validity. According to contemporary descriptions, the body of the Lesser Antillean macaw was red and the wings were red, blue and yellow. The tail feathers were between 38 and 51 cm (15 and 20 in) long. Apart from the smaller size and the all-red coloration of the tail feathers, it resembled the scarlet macaw and may, therefore, have been a close relative of that species. The bird ate fruit, including the poisonous manchineel, was monogamous, nested in trees, and laid two eggs once or twice a year. Early writers described it as being abundant in Guadeloupe, but it was becoming rare by 1760, and only survived in uninhabited areas. Disease and hunting by humans are thought to have eradicated it shortly afterward. The Lesser Antillean macaw is one of 13 extinct macaw species that have been proposed to have lived in the Caribbean islands. Many of these species are now considered dubious because only three are known from physical remains, and there a
https://en.wikipedia.org/wiki/Laurence%20Dwight%20Smith
Laurence Dwight Smith (1895-1952) was an American author specializing in crime fiction and cryptography. Early life and education Smith was born in Detroit, Michigan on January 24, 1895. After completing preparatory school at the Phillips Academy in 1914, he attended Yale College and, after graduating, took a job with the Winchester Repeating Arms Company in New Haven, Connecticut as a machinist. He married Kathryn Marsh of New York City in August 1917, and in January 1918 he enlisted in the U.S. Army. Having been promoted to sergeant in the Corps of Intelligence Police during World War I, he was discharged after the war in September 1919 at the age of 24. Works Fiction Death is thy neighbour, 1938 Girl Hunt Red Arrow Books, 1939 The G-Men Smash the Professor's Gang, Illustrated by Robb Beebe, 1936, Grosset & Dunlap The G Men in Jeopardy. Illustrated by Milton Marx. 1938. Grosset & Dunlap. The G-Men trap the Spy Ring Illustrated by Paul Laune. 1939. Grosset & Dunlap Mystery of the Yellow Tie, 1939 Hiram and other Cats, Grosset & Dunlap, 1941 Adirondack Adventure, 1945 Reunion, Samuel Curl Inc, 1946 Non-fiction Cryptography - the science of secret writing, W. W. Norton & Company, New York, 1943 (US); George Allen and Unwin, London, 1944 (UK) Hooked - Narcotics: America's Peril, with Rafael de Soto, 1953 Cryptography W.W Norton and Co, 1971 Counterfeiting - crime against the people, 1944
https://en.wikipedia.org/wiki/PERMIS
PERMIS (PrivilEge and Role Management Infrastructure Standards) is a sophisticated policy-based authorization system that implements an enhanced version of the U.S. National Institute of Standards and Technology (NIST) standard Role-Based Access Control (RBAC) model. PERMIS supports the distributed assignment of both roles and attributes to users by multiple distributed attribute authorities, unlike the NIST model which assumes the centralised assignment of roles to users. PERMIS provides a cryptographically secure privilege management infrastructure (PMI) using public key encryption technologies and X.509 Attribute certificates to maintain users' attributes. PERMIS does not provide any authentication mechanism, but leaves it up to the application to determine what to use. PERMIS's strength comes from its ability to be integrated into virtually any application and any authentication scheme like Shibboleth (Internet2), Kerberos, username/passwords, Grid proxy certificates and Public Key Infrastructure (PKI). As a standard RBAC system, PERMIS's main entities are an authorisation policy, a set of users, a set of administrators (attribute authorities) who assign roles/attributes to users, a set of resources that are to be protected, a set of actions on resources, a set of access control rules, and optional obligations and constraints. The PERMIS policy is eXtensible Markup Language (XML)-based and has rules for user-role assignments and role-privilege assignments, the latter containing optional obligations that are returned to the application when a user is granted access to a resource. A PERMIS policy can be stored as either a simple text XML file, or as an attribute within a signed X.509 attribute certificate to provide integrity protection and tampering detection. User roles and attributes may be held in secure signed X.509 attributes certificates, and stored in Lightweight Directory Access Protocol (LDAP) directories or Web-based Distributed Authorin
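As a rough illustration of the RBAC entities described above (this is a sketch of the policy structure, not PERMIS's actual API; all user, role and resource names are hypothetical), the following Python snippet shows user-role assignments and role-privilege assignments being used to decide an access request.

    # Illustrative RBAC check loosely modelled on the entities described above.
    user_roles = {"alice": {"project_manager"}, "bob": {"developer"}}
    role_privileges = {
        "project_manager": {("report", "read"), ("report", "approve")},
        "developer": {("report", "read")},
    }

    def is_permitted(user, resource, action):
        """Grant access if any of the user's roles carries the privilege."""
        return any((resource, action) in role_privileges.get(role, set())
                   for role in user_roles.get(user, set()))

    print(is_permitted("alice", "report", "approve"))  # True
    print(is_permitted("bob", "report", "approve"))    # False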
https://en.wikipedia.org/wiki/Ventral%20cochlear%20nucleus
Auditory nerve fibers enter the brain via the nerve root in the ventral cochlear nucleus (VCN). The ventral cochlear nucleus is divided into the anterior ventral (anteroventral) cochlear nucleus (AVCN) and the posterior ventral (posteroventral) cochlear nucleus (PVCN). In the VCN, auditory nerve fibers bifurcate: the ascending branch innervates the AVCN, and the descending branch innervates the PVCN and then continues to the dorsal cochlear nucleus. The orderly innervation by auditory nerve fibers gives the AVCN a tonotopic organization along the dorsoventral axis. Fibers that carry information from the apex of the cochlea that are tuned to low frequencies contact neurons in the ventral part of the AVCN; those that carry information from the base of the cochlea that are tuned to high frequencies contact neurons in the dorsal part of the AVCN. Several populations of neurons populate the AVCN. Bushy cells receive input from auditory nerve fibers through particularly large endings called end bulbs of Held. They contact stellate cells through more conventional boutons. Cell types The anterior cochlear nucleus contains several cell types, which correspond fairly well with different physiological unit types. Additionally, these cell types generally have specific projection patterns. Bushy cells Named for the branching, tree-like nature of their dendritic fields, visible using Golgi's method, they receive large end bulbs of Held from auditory nerve fibers. Bushy cells are of three subtypes that project to different target nuclei in the superior olivary complex. Globular Globular bushy cells project large axons to the contralateral medial nucleus of the trapezoid body (MNTB), in the superior olivary complex where they synapse onto principal cells via a single calyx of Held, and several smaller collaterals synapse ipsilaterally in the posterior (PPO) and dorsolateral periolivary (DLPO) nuclei, lateral superior olive (LSO), and lateral nucleus of the tra
https://en.wikipedia.org/wiki/Intel%208255
The Intel 8255 (or i8255) Programmable Peripheral Interface (PPI) chip was developed and manufactured by Intel in the first half of the 1970s for the Intel 8080 microprocessor. The 8255 provides 24 parallel input/output lines with a variety of programmable operating modes. The 8255 is a member of the MCS-85 family of chips, designed by Intel for use with their 8085 and 8086 microprocessors and their descendants. It was first available in a 40-pin DIP package and later in a 44-pin PLCC package. It found wide applicability in digital processing systems and was later cloned by other manufacturers. The 82C55 is a CMOS version for higher speed and lower current consumption. The functionality of the 8255 is now mostly embedded in larger VLSI processing chips as a sub-function. A CMOS version of the 8255 is still being made by Renesas but mostly used to expand the I/O of microcontrollers. Similar chips The 8255 has a similar function to the Motorola 6820 PIA (Peripheral Interface Adapter) from the Motorola 6800 family, also originally packaged as 40-pin DIL. The 8255 provides 24 I/O pins with four programmable direction bits: one for Port A(7:0) (i.e., all pins in the port), one for Port B(7:0), one for Port C(3:0) and one for Port C(7:4). By contrast, the Motorola and MOS chips provide only 16 I/O pins plus 4 control pins, but the Motorola/MOS chips allow the direction (input or output) of all I/O pins to be individually programmed. Both have configurations that will do a certain amount of automatic handshaking and interrupt generation. Other comparable microprocessor I/O chips are the 2655 Programmable Peripheral Interface from the Signetics 2650 family, the Z80 PIO, the Western Design Center WDC 65C21 (equivalent to the Motorola 6820/6821), and the MOS Technology 6522 VIA and 6526 CIA which had considerable additional functionality such as timers and shift registers. Variants The industrial grade version of Intel ID8255A was available for US$17.55 in quantities of 100 and
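As a rough sketch of how those four direction bits are typically programmed, the following Python snippet assembles a mode-0 control word for the 8255; the bit layout used here is the commonly documented one (bit 7 mode-set flag, bits 6-5 and 2 group modes, bits 4, 3, 1, 0 the direction bits), and should be treated as an assumption to verify against the datasheet.

    def control_word_mode0(a_input, b_input, c_upper_input, c_lower_input):
        """Build an 8255 mode-0 control word; True = input, False = output.

        Assumed layout (verify against the datasheet):
        bit 7 = mode-set flag, bits 6-5 = group A mode, bit 4 = Port A direction,
        bit 3 = Port C(7:4) direction, bit 2 = group B mode,
        bit 1 = Port B direction, bit 0 = Port C(3:0) direction.
        """
        word = 0x80                        # mode-set flag, both groups in mode 0
        word |= (1 << 4) if a_input else 0
        word |= (1 << 3) if c_upper_input else 0
        word |= (1 << 1) if b_input else 0
        word |= (1 << 0) if c_lower_input else 0
        return word

    # Port A input, Port B output, both halves of Port C output -> 0x90
    print(hex(control_word_mode0(True, False, False, False)))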
https://en.wikipedia.org/wiki/Aquarius%20%28SAC-D%20instrument%29
Aquarius was a NASA instrument aboard the Argentine SAC-D spacecraft. Its mission was to measure global sea surface salinity to better predict future climate conditions. Aquarius was shipped to Argentina on June 1, 2009 to be mounted in the INVAP-built SAC-D satellite. It came back to Vandenberg Air Force Base on March 31, 2011. For the joint mission, Argentina provided the SAC-D spacecraft and additional science instruments, while NASA provided the Aquarius salinity sensor and the rocket launch platform. The National Aeronautics and Space Administration (NASA)'s Jet Propulsion Laboratory in Pasadena, California, managed the Aquarius Mission development for NASA's Earth Science Enterprise based in Washington, D.C., and NASA's Goddard Space Flight Center in Greenbelt, Maryland, is managing the mission after launch. The observatory was successfully launched from Vandenberg Air Force Base on June 10, 2011. After its launch aboard a Delta II from Vandenberg Air Force Base in California, SAC-D was carried into a 657 km (408 mi) Sun-synchronous orbit to begin its 3-year mission. On June 7, 2015, the SAC-D satellite carrying Aquarius suffered a power supply failure, ending the mission. Background and instrumentation The spacecraft's mission is a joint program between the National Aeronautics and Space Administration (NASA) and Argentina's space agency, Comisión Nacional de Actividades Espaciales (CONAE). The Aquarius sensors are flown on the (now inoperative) Satélite de Aplicaciones Científicas (SAC)-D spacecraft 657 kilometers (408 miles) above Earth in a sun-synchronous, polar orbit that repeats itself once a week. Its instrument resolution was 150 kilometers (93 miles). Aquarius's objective was to provide insight into the effect of salt on the Earth's weather and climate systems by making the first space-based observations of variations in salinity and creating global ocean salinity distribution maps. Data from the instrument will be able to show changes in the oc
https://en.wikipedia.org/wiki/Stoddard%20engine
Elliott J. Stoddard invented and patented two versions of the Stoddard engine, the first in 1919 and the second in 1933. The general engine classification is an external combustion engine with valves and single-phase gaseous working fluid (i.e. a "hot air engine"). The internal working fluid was originally air, although in modern versions, other gases such as helium or hydrogen may be used. One potential thermodynamic advantage of using valves is to minimize the adverse effects of "unswept volume" in the heat exchangers (sometimes called "dead volume"), which is known to reduce engine efficiency and power output in the valveless Stirling engine. The 1919 Stoddard engine The generalized thermodynamic processes of the 1919 Stoddard cycle are: Adiabatic compression Isobaric heat-addition Adiabatic expansion Isobaric heat-removal The engine design in the patent used a scotch yoke. The 1933 Stoddard engine In the 1933 design, Stoddard reduced the internal volume of the heat exchangers while maintaining the same generalized thermodynamic processes as the 1919 cycle.
https://en.wikipedia.org/wiki/Vascular%20lacuna
The vascular lacuna (Latin: lacuna vasorum (retroinguinalis)) is the medial compartment beneath the inguinal ligament. It is separated from the lateral muscular lacuna by the iliopectineal arch. It gives passage to the femoral vessels, lymph vessels and lymph nodes. The lacunar ligament can be a site of entrapment for femoral hernias. Anatomy Its boundaries are the iliopectineal arch, the inguinal ligament, the lacunar ligament, and the superior border of the pubis. Contents The structures found in the vascular lacuna, from medial to lateral, are: Cloquet's node; Femoral vein; Femoral artery; and Femoral branch of the genitofemoral nerve
https://en.wikipedia.org/wiki/Bond%20order%20potential
Bond order potential is a class of empirical (analytical) interatomic potentials which is used in molecular dynamics and molecular statics simulations. Examples include the Tersoff potential, the EDIP potential, the Brenner potential, the Finnis–Sinclair potentials, ReaxFF, and the second-moment tight-binding potentials. They have the advantage over conventional molecular mechanics force fields in that they can, with the same parameters, describe several different bonding states of an atom, and thus to some extent may be able to describe chemical reactions correctly. The potentials were developed partly independently of each other, but share the common idea that the strength of a chemical bond depends on the bonding environment, including the number of bonds and possibly also angles and bond lengths. It is based on the Linus Pauling bond order concept and can be written in the form V_{ij}(r_{ij}) = V_{repulsive}(r_{ij}) + b_{ijk} V_{attractive}(r_{ij}). This means that the potential is written as a simple pair potential depending on the distance r_{ij} between two atoms i and j, but the strength of this bond is modified by the environment of atom i via the bond order b_{ijk}. b_{ijk} is a function that in Tersoff-type potentials depends inversely on the number of bonds to the atom i, the bond angles \theta_{ijk} between sets of three atoms i, j, k, and optionally on the relative bond lengths r_{ij}, r_{ik}. In the case of only one atomic bond (like in a diatomic molecule), b_{ijk} = 1, which corresponds to the strongest and shortest possible bond. In the other limiting case, b_{ijk} \to 0 for an increasingly large number of bonds within some interaction range, and the potential turns completely repulsive. Alternatively, the potential energy can be written in the embedded atom model form E = \sum_i F(\rho_i) + \tfrac{1}{2} \sum_{i \neq j} V_{pair}(r_{ij}), where \rho_i = \sum_{j \neq i} \rho(r_{ij}) is the electron density at the location of atom i. These two forms for the energy can be shown to be equivalent (in the special case that the bond-order function contains no angular dependence). A more detailed summary of how the bond order concept can be motivated by the second-moment ap
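To make the functional form concrete, here is a small illustrative Python sketch of a Tersoff-style bond-order energy with a deliberately simplified bond-order term; the parameters and the form of b are made up for illustration and do not correspond to any fitted potential.

    import math

    # Illustrative Tersoff-style pair energy with a simplified bond-order term.
    # Parameters are arbitrary; a real potential uses fitted values and cutoffs.
    A, B, LAM1, LAM2 = 1000.0, 200.0, 3.0, 1.5

    def bond_order(num_neighbors):
        """Toy bond order: 1 for a lone bond, decreasing with coordination."""
        return 1.0 / math.sqrt(1.0 + max(num_neighbors - 1, 0))

    def pair_energy(r_ij, num_neighbors):
        """E_ij = A*exp(-lam1*r) - b_ij * B*exp(-lam2*r)."""
        b_ij = bond_order(num_neighbors)
        return A * math.exp(-LAM1 * r_ij) - b_ij * B * math.exp(-LAM2 * r_ij)

    # A dimer bond (one neighbor) is more strongly bound than a bond in a
    # highly coordinated environment at the same distance.
    print(pair_energy(2.0, 1), pair_energy(2.0, 12))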
https://en.wikipedia.org/wiki/Cloud%20storage
Cloud storage is a model of computer data storage in which the digital data is stored in logical pools, said to be on "the cloud". The physical storage spans multiple servers (sometimes in multiple locations), and the physical environment is typically owned and managed by a hosting company. These cloud storage providers are responsible for keeping the data available and accessible, and the physical environment secured, protected, and running. People and organizations buy or lease storage capacity from the providers to store user, organization, or application data. Cloud storage services may be accessed through a colocated cloud computing service, a web service application programming interface (API) or by applications that use the API, such as cloud desktop storage, a cloud storage gateway or Web-based content management systems. History Cloud computing is believed to have been invented by J. C. R. Licklider in the 1960s with his work on ARPANET to connect people and data from anywhere at any time. In 1983, CompuServe offered its consumer users a small amount of disk space that could be used to store any files they chose to upload. In 1994, AT&T launched PersonaLink Services, an online platform for personal and business communication and entrepreneurship. The storage was one of the first to be all web-based, and was referenced in their commercials as, "you can think of our electronic meeting place as the cloud." Amazon Web Services introduced its cloud storage service Amazon S3 in 2006, which has gained widespread recognition and adoption as the storage supplier to popular services such as SmugMug, Dropbox, and Pinterest. In 2005, Box announced an online file sharing and personal cloud content management service for businesses. Architecture Cloud storage is based on highly virtualized infrastructure and is like broader cloud computing in terms of interfaces, near-instant elasticity and scalability, multi-tenancy, and metered resources. Cloud storage services can b
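As a minimal sketch of API-based access (assuming the boto3 library and an S3-compatible service; the bucket and key names below are hypothetical, and credentials are taken from the environment or local configuration):

    import boto3

    # Store and retrieve an object through a cloud storage API (the S3 API via boto3).
    s3 = boto3.client("s3")

    s3.put_object(Bucket="example-bucket", Key="notes/hello.txt", Body=b"hello, cloud")

    response = s3.get_object(Bucket="example-bucket", Key="notes/hello.txt")
    print(response["Body"].read().decode())   # -> "hello, cloud"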
https://en.wikipedia.org/wiki/Leslie%20Fox
Leslie Fox (30 September 1918 – 1 August 1992) was a British mathematician noted for his contribution to numerical analysis. Overview Fox studied mathematics as a scholar of Christ Church, Oxford, graduating with a first in 1939, and continued to undertake research in the engineering department. While working on his D.Phil. in computational and engineering mathematics under the supervision of Sir Richard Southwell, he was also engaged in highly secret war work. He worked on the numerical solution of partial differential equations at a time when numerical linear algebra was performed on a desk calculator. Computational efficiency and accuracy were thus even more important than in the days of electronic computers. Some of this work was published after the end of the Second World War jointly with his supervisor Richard Southwell. On gaining his doctorate in 1942, Fox joined the Admiralty Computing Service. Following World War II, in 1945 he went to work in the mathematics division of the National Physical Laboratory. He left the National Physical Laboratory in 1956 and spent a year at the University of California. In 1957 Fox took up an appointment at Oxford University where he set up the Oxford University Computing Laboratory. In 1963, Fox was appointed as Professor of Numerical Analysis at Oxford and Fellow of Balliol College, Oxford. Fox's laboratory at Oxford was one of the founding organisations of the Numerical Algorithms Group (NAG), and Fox was also a dedicated supporter of the Institute of Mathematics and its Applications (IMA). The Leslie Fox Prize for Numerical Analysis of the IMA is named in his honour. Mathematical work A detailed description of Fox's mathematical research can be found in obituaries and is summarised here. His early work with Southwell was concerned with the numerical solution of partial differential equations arising in engineering problems that, due to the complexity of their geometry, did not have analytical solutions. Southwell's gro
https://en.wikipedia.org/wiki/Tente%20%28toy%29
Tente is a line of construction toys created in 1972 by EXIN-LINES BROS S.A., a plastics and toy company based in Barcelona, Spain, which ceased operation in 1993. The toys consist of multi-colored interlocking plastic bricks in multiple scales and an accompanying array of wheels, minifigures, and various accessories. After EXIN ceased operation, the trademark and patents were acquired by EDUCA BORRAS; its later series were no longer compatible with the old system, although some models remained compatible, and the toy line was eventually discontinued. In October 2021, the toy line was relaunched by a different company, iUnits (Intelligent Units), founded by a group of enthusiasts, which commercializes compatible versions of classic pieces in old and new colors, as well as new designs, including some pieces compatible with both Tente and Lego anchoring systems. Unlike the more popular Lego line of interlocking brick toys, which was a primary competitor to Tente, the Tente line emphasizes commercial and military vehicles of a variety of scales, less confined to the "minifig" scale that dominates Lego building sets. The primary physical difference with Lego bricks is that Tente bricks' studs have a small central hole that allows an alternative connection method to accessory pieces. Additionally, although modeled on Lego with nearly identical brick and plate outer dimensions (including the fact that three stacked plates is equivalent in height to one brick), the studs of Tente pieces have a larger diameter than Lego pieces, resulting in them being incompatible. Hasbro marketed the toys in the United States and Japan. Some of these models are different from those offered in Europe because Exin authorized the creation of new models adapted to the tastes of the alternative markets. In the United States, Tente sets were typically found in specialty toy and model/hobby shops and not major toy retail stores. There is a small but signifi
https://en.wikipedia.org/wiki/Meusnier%27s%20theorem
In differential geometry, Meusnier's theorem states that all curves on a surface passing through a given point p and having the same tangent line at p also have the same normal curvature at p and their osculating circles form a sphere. The theorem was first announced by Jean Baptiste Meusnier in 1776, but not published until 1785. At least prior to 1912, several writers in English were in the habit of calling the result Meunier's theorem, although there is no evidence that Meusnier himself ever spelt his name in this way. This alternative spelling of Meusnier's name also appears on the Arc de Triomphe in Paris.
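In symbols (a standard statement of the theorem, supplied here for reference rather than taken from the excerpt): if a curve on the surface through p with the given tangent has curvature \kappa, and its principal normal makes an angle \theta with the surface normal, then the normal curvature satisfies \kappa_n = \kappa \cos\theta. All such curves therefore share the same \kappa_n, and their osculating circles (each of radius \cos\theta / \kappa_n) lie on the sphere of radius 1/|\kappa_n| centered at the center of curvature of the normal section, tangent to the surface at p.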
https://en.wikipedia.org/wiki/On-premises%20software
On-premises software (abbreviated to on-prem, and incorrectly referred to as on-premise) is installed and runs on computers on the premises of the person or organization using the software, rather than at a remote facility such as a server farm or cloud. On-premises software is sometimes referred to as "shrinkwrap" software, and off-premises software is commonly called "software as a service" ("SaaS") or "cloud computing". The software typically consists of a database and modules that are combined to serve the unique needs of large organizations regarding the automation of corporate-wide business systems and their functions. Comparison between on-premises and cloud (SaaS) Location On-premises software is established within the organisation's internal system along with the hardware and other infrastructure necessary for the software to function. Cloud-based software is usually served via the internet and can be accessed by users online regardless of time and location. Unlike on-premises software, cloud-based software users only need to install an application or a web browser in order to access its services. Costs needed for access to services For on-premises software, several costs are expected to be incurred before the software and its services are fully available for use. First of all, the construction of on-premises software within the organisation requires high initial costs, including costs incurred for the purchase of hardware and other infrastructure, as well as costs required for software installation and testing. In addition, the entity must purchase the license particular to the software, which involves costs and time for preparation and the required procedures. Furthermore, maintaining the software's functionality requires ongoing maintenance and operations, and the entity bears these costs as well. On the other hand, in general, the initial costs required for
https://en.wikipedia.org/wiki/Brocard%27s%20problem
Brocard's problem is a problem in mathematics that seeks integer values of n such that n! + 1 is a perfect square, where n! is the factorial of n. Only three values of n are known — 4, 5, 7 — and it is not known whether there are any more. More formally, it seeks pairs of integers n and m such that n! + 1 = m^2. The problem was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers Pairs of the numbers (n, m) that solve Brocard's problem were named Brown numbers by Clifford A. Pickover in his 1995 book Keys to Infinity, after learning of the problem from Kevin S. Brown. As of October 2022, there are only three known pairs of Brown numbers: (4, 5), (5, 11), and (7, 71), based on the equalities 4! + 1 = 25 = 5^2, 5! + 1 = 121 = 11^2, and 7! + 1 = 5041 = 71^2. Paul Erdős conjectured that no other solutions exist. Computational searches up to one quadrillion have found no further solutions. Connection to the abc conjecture It would follow from the abc conjecture that there are only finitely many Brown numbers. More generally, it would also follow from the abc conjecture that n! + A = k^2 has only finitely many solutions, for any given integer A, and that n! = P(x) has only finitely many integer solutions, for any given polynomial P(x) of degree at least 2 with integer coefficients.
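A direct computational check of the defining equation is straightforward; the following Python sketch searches small n for pairs with n! + 1 a perfect square (the search bound is arbitrary):

    import math

    def brown_pairs(max_n=200):
        """Return pairs (n, m) with n! + 1 == m**2 for n up to max_n."""
        pairs = []
        factorial = 1
        for n in range(1, max_n + 1):
            factorial *= n          # n!
            target = factorial + 1  # n! + 1
            m = math.isqrt(target)
            if m * m == target:
                pairs.append((n, m))
        return pairs

    print(brown_pairs())   # expected: [(4, 5), (5, 11), (7, 71)]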
https://en.wikipedia.org/wiki/Evidential%20reasoning%20approach
In decision theory, the evidential reasoning approach (ER) is a generic evidence-based multi-criteria decision analysis (MCDA) approach for dealing with problems having both quantitative and qualitative criteria under various uncertainties including ignorance and randomness. It has been used to support various decision analysis, assessment and evaluation activities such as environmental impact assessment and organizational self-assessment based on a range of quality models. Overview The evidential reasoning approach has recently been developed on the basis of decision theory (in particular utility theory), artificial intelligence (in particular the theory of evidence), statistical analysis and computer technology. It uses a belief structure to model an assessment with uncertainty, a belief decision matrix to represent an MCDA problem under uncertainty, evidential reasoning algorithms to aggregate criteria for generating distributed assessments, and the concepts of the belief and plausibility functions to generate a utility interval for measuring the degree of ignorance. A conventional decision matrix used for modeling an MCDA problem is a special case of a belief decision matrix. The use of belief decision matrices for MCDA problem modelling in the ER approach results in the following features: An assessment of an option can be more reliably and realistically represented by a belief decision matrix than by a conventional decision matrix. It accepts data of different formats with various types of uncertainties as inputs, such as single numerical values, probability distributions, and subjective judgments with belief degrees. It allows all available information embedded in different data formats, including qualitative and incomplete data, to be maximally incorporated in assessment and decision making processes. It allows assessment outcomes to be represented more informatively. See also Decision-making software Ordinal Priority Approach (OPA)
https://en.wikipedia.org/wiki/Personal%20NetWare
NetWare Lite and Personal NetWare are a series of discontinued peer-to-peer local area networks developed by Novell for DOS- and Windows-based personal computers aimed at personal users and small businesses in the 1990s. NetWare Lite In 1991, Novell introduced a radically different and cheaper product from their central server-based NetWare product, NetWare Lite 1.0 (NWL), codenamed "Slurpee", in answer to Artisoft's similar LANtastic. Both were peer-to-peer systems, where no dedicated server was required, but instead all PCs on the network could share their resources. Netware Lite contained a unique serial number in the EXE files that prevented running the same copy on multiple nodes within a single network. This basic copy protection was easily circumvented by comparing files from different licenses and accordingly editing the serial number bytes. The product was upgraded to NetWare Lite 1.1 and also came bundled with DR DOS 6.0. Some components of NetWare Lite were used in Novell's NetWare PalmDOS 1.0 in 1992. A Japanese version of NetWare Lite named "NetWare Lite 1.1J" existed in 1992 for four platforms (DOS/V, Fujitsu FM-R, NEC PC-98/Epson PC and Toshiba J-3100) and was supported up to 1997. Updates were distributed by Novell as DOSV6.EXE, DOSV.EXE, TSBODI.LZH. NetWare Lite 1.1 came bundled with NLSNIPES, a newer implementation of Novell's Snipes game. Personal NetWare Significantly reworked, the product line, codenamed "Smirnoff", became Personal NetWare 1.0 (PNW) in 1994. The ODI/VLM 16-bit DOS client portion of the drivers now supported individually loadable Virtual Loadable Modules (VLMs) for an improved flexibility and customizability, whereas the server portion could utilize Novell's DOS Protected Mode Services (DPMS), if loaded, to reduce its conventional memory footprint and run in extended memory and protected mode. The NetWare Lite disk cache NLCACHE had been reworked into NWCACHE, which was easier to set up and could utilize DPMS as well, ther
https://en.wikipedia.org/wiki/1%3A285%20scale
1:285 scale or 6 mm figure size is a US Army scale introduced in the late 1960s, and used for wargames and some scale model dioramas. It is used in miniature wargaming to depict large battles in a relatively small gaming area. 1:300 scale (5 mm scale) is an almost identical NATO standard scale. Both figure scales are based on the 1 mm = 1 ft calculation that reduces the average 1.72 m height of a human male to a 5.7 mm tall figure. "6 mm" is therefore used as a rounded-up reference to the scale. In 1:285 scale, a typical 20 mm base can mount approximately 3-5 infantry figures; or three strips of four figures in rank-and-file formation. 1:285/1:300 is a popular scale for micro armour games, while modern games emphasizing tanks and other vehicles have been catered to by specialist figure manufacturers such as GHQ, Heroics and Ros and Baccus Miniatures. Sci-fi and fantasy games that use these scales include BattleTech, Ogre miniatures and Epic. Other genres, such as historical periods (ancient, medieval and later periods) and medieval fantasy have miniatures made by Heroics and Ros, Baccus Miniatures and Irregular Miniatures. There are many sites of landscape creations and miniatures
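A quick worked check of these figures (supplied for illustration, not from the source): 1.72 m is about 5.64 ft, so the 1 mm = 1 ft rule gives a figure roughly 5.6 to 5.7 mm tall; dividing directly by the nominal ratios gives 1720 mm / 285 ≈ 6.0 mm and 1720 mm / 300 ≈ 5.7 mm, which is why "6 mm" serves as the rounded-up label for both scales.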
https://en.wikipedia.org/wiki/Amanita%20subjunquillea
Amanita subjunquillea, also known as the East Asian death cap, is a mushroom of the large genus Amanita, which occurs in East and Southeast Asia. Potentially deadly if ingested, it is closely related to the death cap A. phalloides. Initially little reported, the toxicity of A. subjunquillea has been well established; a study in Korea revealed it to have similar effects to A. phalloides, namely delayed gastrointestinal symptoms, hepatotoxicity, and a 12.5% mortality rate. The species killed five of the six people who ingested it in Hebei, China, in 1994. An all-white variety, Amanita subjunquillea var. alba, is known from southwestern China, Japan, and Northern India. See also List of Amanita species List of deadly fungus species
https://en.wikipedia.org/wiki/Risk%20factors%20of%20schizophrenia
Risk factors of schizophrenia include many genetic and environmental phenomena. The prevailing model of schizophrenia is that of a special neurodevelopmental disorder with no precise boundary or single cause (i.e. arises from multiple mechanisms). Schizophrenia is thought to develop from very complex gene–environment interactions with vulnerability factors. The interactions of these risk factors are intricate, as numerous and diverse medical insults from conception to adulthood can be involved. The combination of genetic and environmental factors leads to deficits in the neural circuits that affect sensory input and cognitive functions. Historically, this theory has been broadly accepted but impossible to prove given ethical limitations. The first definitive proof that schizophrenia arises from multiple biological changes in the brain was recently established in human tissue grown from patient stem cells, where the complexity of disease was found to be "even more complex than currently accepted" due to cell-by-cell encoding of schizophrenia-related neuropathology. A genetic predisposition on its own, without superimposed environmental risk factors, generally does not give rise to schizophrenia. Environmental risk factors are many, and include pregnancy complications, prenatal stress and nutrition, and adverse childhood experiences. An environmental risk factor may act alone or in combination with others. Schizophrenia typically develops between the ages of 16–30 (generally males aged 16–25 years and females 25–30 years); about 75 percent of people living with the illness developed it in these age-ranges. Childhood schizophrenia (very early onset schizophrenia) develops before the age of 13 years and is quite rare (frequency is 1 in 40,000). On average there is a somewhat earlier onset for men than women, with the possible influence of the female sex hormone estrogen being one hypothesis and socio-cultural influences another. Estrogen seems to have a dampening effe
https://en.wikipedia.org/wiki/Gliding%20flight
Gliding flight is heavier-than-air flight without the use of thrust; the term volplaning also refers to this mode of flight in animals. It is employed by gliding animals and by aircraft such as gliders. This mode of flight involves flying a significant distance horizontally compared to its descent and therefore can be distinguished from a mostly straight downward descent like a round parachute. Although the human application of gliding flight usually refers to aircraft designed for this purpose, most powered aircraft are capable of gliding without engine power. As with sustained flight, gliding generally requires the application of an airfoil, such as the wings on aircraft or birds, or the gliding membrane of a gliding possum. However, gliding can be achieved with a flat (uncambered) wing, as with a simple paper plane, or even with card-throwing. However, some aircraft with lifting bodies and animals such as the flying snake can achieve gliding flight without any wings by creating a flattened surface underneath. Aircraft ("gliders") Most winged aircraft can glide to some extent, but there are several types of aircraft designed to glide: Glider, also known as a sailplane Hang glider Paraglider Speed glider Ram-air parachute Rotor kite, if untethered, known as a rotary glider, or gyroglider. Military glider Paper aeroplane Radio-controlled glider Rocket glider Wingsuit The main human application is currently recreational, though during the Second World War military gliders were used for carrying troops and equipment into battle. The types of aircraft that are used for sport and recreation are classified as gliders (sailplanes), hang gliders and paragliders. These two latter types are often foot-launched. The design of all three types enables them to repeatedly climb using rising air and then to glide before finding the next source of lift. When done in gliders (sailplanes), the sport is known as gliding and sometimes as soaring. For foot-launched airc
https://en.wikipedia.org/wiki/Magen%20Tzedek
Magen Tzedek, originally known as Hekhsher Tzedek, ( English translation Shield of Justice or Justice Certification, with variant English spellings) is a complementary certification for kosher food produced in the United States in a way that meets Jewish Halakhic (legal) standards for workers, consumers, animals, and the environment, as understood by Conservative Judaism. Magen Tzedek certification is not a kashrut certification which certifies that food is kosher in that it meets certain requirements regarding ingredients of food and technical methods of animal slaughter, but an ethical certification complementary to conventional kosher certification. Magen Tzedek was initiated by Conservative Rabbi Morris Allen and was launched in 2011. It is sponsored by the Rabbinical Assembly, the American association of Conservative rabbis, the United Synagogue of Conservative Judaism, the Central Conference of Reform Rabbis, and the Union for Reform Judaism. Magen Tzedek has met with harsh criticism from Orthodox Jewish rabbis and organizations. As of May 2013, no product bore the Magen Tzedek seal. Creation Magen Tzedek certification was initiated by Conservative Rabbi Morris Allen of Beth Jacob Congregation in Mendota Heights, Minnesota in 2007 following investigative reporting by Nathaniel Popper in The Forward regarding working conditions at the Agriprocessors kosher meat plant in Postville, Iowa. After a five-member rabbinic and lay commission visited the plant over two days and spoke with owners, senior managers and about 60 current or former workers and had reviewed reports from the Minnesota Department of Labor and Industry, Allen stated, “We weren’t able to verify everything Popper wrote, but what we did find was equally painful and filled with indignities”. In 2008, a commission was formed to develop and apply “a set of standards that would certify that kosher food manufacturers in the US operate according to Jewish ethics and social values”. On January 31, 20
https://en.wikipedia.org/wiki/Vertex%20%28geometry%29
In geometry, a vertex (plural: vertices or vertexes) is a point where two or more curves, lines, or edges meet or intersect. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices. Definition Of an angle The vertex of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that result in two straight "sides" meeting at one place. Of a polytope A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object. In a polygon, a vertex is called "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°, two right angles); otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex, if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and is concave otherwise. Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve, there will be a point of extreme curvature near each polygon vertex. However, a smooth curve approximation to a polygon will also have additional vert
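As a small illustration of the convex/reflex test for polygon vertices, here is a Python sketch that assumes the polygon's vertices are listed in counterclockwise order (a common convention, not stated in the excerpt):

    def is_convex_vertex(prev_pt, vertex, next_pt):
        """For a counterclockwise polygon, the interior angle at `vertex` is less
        than pi exactly when the cross product of the incoming and outgoing
        edge vectors is positive."""
        ax, ay = vertex[0] - prev_pt[0], vertex[1] - prev_pt[1]
        bx, by = next_pt[0] - vertex[0], next_pt[1] - vertex[1]
        return ax * by - ay * bx > 0

    # A left turn, as at the corner of a counterclockwise square: convex.
    print(is_convex_vertex((0, 0), (1, 0), (1, 1)))   # True
    # A right turn, as at a notch in a counterclockwise polygon: reflex.
    print(is_convex_vertex((0, 0), (1, 1), (2, 0)))   # False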
https://en.wikipedia.org/wiki/Belief%20structure
A belief structure is a distributed assessment with beliefs. Evidential reasoning A belief structure is used in the evidential reasoning (ER) approach for multiple-criteria decision analysis (MCDA) to represent the performance of an alternative option on a criterion. In the ER approach, an MCDA problem is modelled by a belief decision matrix instead of a conventional decision matrix. The difference between the two is that, in the former, each element is a belief structure; in the latter, conversely, each element is a single value (either numerical or textual). Application For example, the quality of a car engine may be assessed to be “excellent” with a high degree of belief (e.g. 0.6) due to its low fuel consumption, low vibration and high responsiveness. At the same time, the quality may be assessed to be only “good” with a lower degree of belief (e.g. 0.4 or less) because its quietness and starting can still be improved. Such an assessment can be modeled by a belief structure: Si(engine)={(excellent, 0.6), (good, 0.4)}, where Si stands for the assessment of engine on the ith criterion (quality). In the belief structure, “excellent” and “good” are assessment standards, whilst “0.6” and “0.4” are degrees of belief.
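A minimal Python sketch of this belief structure, together with one simple way to summarize it by mapping grades to utilities (the utility values below are made-up illustration numbers, not part of the source):

    # The engine-quality assessment from the example, as a belief structure:
    # a distribution of belief degrees over assessment grades.
    S_engine = {"excellent": 0.6, "good": 0.4}

    # Hypothetical utilities for the grades (illustrative values only).
    grade_utility = {"excellent": 1.0, "good": 0.7}

    expected_utility = sum(belief * grade_utility[grade]
                           for grade, belief in S_engine.items())
    print(expected_utility)   # 0.6*1.0 + 0.4*0.7 = 0.88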
https://en.wikipedia.org/wiki/List%20of%20participants%20in%20the%20Evolving%20Genes%20and%20Proteins%20symposium
This is a list of scientists who participated in the 1964 Evolving Genes and Proteins symposium, a landmark event in the history of molecular evolution. The symposium, supported by the National Science Foundation, took place on September 17 and September 18, 1964 at the Institute of Microbiology of Rutgers University. A summary of the proceedings was published in Science, and the full proceedings were edited by Vernon Bryson and Henry J. Vogel and published in 1965.
https://en.wikipedia.org/wiki/Third-brush%20dynamo
A third-brush dynamo was a type of dynamo, an electrical generator, formerly used for battery charging on motor vehicles. It was superseded, first by a two-brush dynamo equipped with an external voltage regulator, and later by an alternator. Construction As the name implies, the machine had three brushes in contact with the commutator. One was earthed to the frame of the vehicle and another was connected (through a reverse-current cut-out) to the live terminal of the vehicle's battery. The third was connected to the field winding of the dynamo. The other end of the field winding was connected to a switch which could be adjusted (by inserting or removing resistance) to give "low" or "high" charge. This switch was sometimes combined with the vehicle's light switch so that switching on the headlights simultaneously put the dynamo in high charge mode. Disadvantages The third-brush dynamo had the advantage of simplicity but, by modern standards, it gave poor voltage regulation. This led to short battery life as a result of over-charging or under-charging. See also Amplidyne Metadyne
https://en.wikipedia.org/wiki/Instant%20soup
Instant soup is a type of soup designed for fast and simple preparation. Some are homemade, and some are mass-produced on an industrial scale and treated in various ways to preserve them. A wide variety of types, styles and flavors of instant soups exist. Commercial instant soups are usually dried or dehydrated, canned, or treated by freezing. Types Commercial instant soups are manufactured in several types. Some consist of a packet of dry soup stock. These do not contain water, and are prepared by adding water and then heating the product for a short time, or by adding hot water directly to the dry soup mix. Instant soup can also be produced in a dry powder form, such as Unilever's Cup-a-Soup. Canned (tinned) instant soups contain liquid soup that is prepared by heating their contents. Some canned soups are condensed, and require additional water to bring them to their intended strength, while others are canned in a ready-to-eat, single-strength form. Dr. John T. Dorrance, an employee with the Campbell Soup Company, invented condensed soup in 1897. Consumers sometimes use condensed soups (without diluting them) as a sauce base. Some instant liquid soups are manufactured in microwaveable containers. Additionally, some instant soups, such as Knorr's Erbswurst, are prepared in a concentrated paste form. Knorr ceased production of Erbswurst on December 31, 2018. Instant noodle soups such as Cup Noodles contain dried instant ramen noodles, dehydrated vegetable and meat products, and seasonings, and are prepared by adding hot water. Packaged instant ramen noodle soup is typically formed as a cake, and often includes a seasoning packet that is added to the noodles and water during preparation. Some also include separate packets of oil and garnishes used to season the product. Momofuku Ando, the founder of Nissin Foods, developed packaged ramen noodle soup in 1958. Varieties A multitude of instant soup varieties exist. For example, there are several Lipton and Knorr-b
https://en.wikipedia.org/wiki/Bit%20numbering
In computing, bit numbering is the convention used to identify the bit positions in a binary number. Bit significance and indexing In computing, the least significant bit (LSb) is the bit position in a binary integer representing the binary 1s place of the integer. Similarly, the most significant bit (MSb) represents the highest-order place of the binary integer. The LSb is sometimes referred to as the low-order bit or right-most bit, due to the convention in positional notation of writing less significant digits further to the right. The MSb is similarly referred to as the high-order bit or left-most bit. In both cases, the LSb and MSb correlate directly to the least significant digit and most significant digit of a decimal integer. Bit indexing correlates to the positional notation of the value in base 2. For this reason, bit index is not affected by how the value is stored on the device, such as the value's byte order. Rather, it is a property of the numeric value in binary itself. This is often utilized in programming via bit shifting: A value of 1 << n corresponds to the nth bit of a binary integer (with a value of 2^n). Least significant bit in digital steganography In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. The user may later recover this information by extracting the least significant bits of the manipulated pixels to recover the original message. This allows the storage or transfer of digital information to remain concealed. Unsigned integer example As an example, consider the decimal value 149, which is 10010101 in binary; the position of unit value (decimal 1 or 0) is located in bit position 0 (n = 0), the least significant bit. MSb stands for most significant bit, while LSb stands for least significant bit. Most- vs least-significant bit first The expressions most significant bit first and least significant bit at last ar
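A short Python illustration of bit indexing and of the least-significant-bit manipulation mentioned above (the pixel value is a made-up example):

    value = 149                      # binary 10010101

    lsb = value & 1                  # bit 0 (the 1s place) -> 1
    bit4 = (value >> 4) & 1          # bit 4 (the 16s place) -> 1
    flag = 1 << 3                    # a value with only bit 3 set, i.e. 2**3 = 8

    # LSB steganography: hide one message bit in a pixel's least significant bit.
    pixel, message_bit = 200, 1
    stego_pixel = (pixel & ~1) | message_bit   # 201; visually indistinguishable
    recovered = stego_pixel & 1                # -> 1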
https://en.wikipedia.org/wiki/Marsaglia%20polar%20method
The Marsaglia polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables. Standard normal random variables are frequently used in computer science, computational statistics, and in particular, in applications of the Monte Carlo method. The polar method works by choosing random points (x, y) in the square −1 < x < 1, −1 < y < 1 until 0 < s = x^2 + y^2 < 1, and then returning the required pair of normal random variables as x\sqrt{-2\ln(s)/s} and y\sqrt{-2\ln(s)/s}, or, equivalently, \cos\theta\sqrt{-2\ln(s)} and \sin\theta\sqrt{-2\ln(s)}, where \cos\theta = x/\sqrt{s} and \sin\theta = y/\sqrt{s} represent the cosine and sine of the angle that the vector (x, y) makes with the x axis. Theoretical basis The underlying theory may be summarized as follows: If u is uniformly distributed in the interval 0 ≤ u < 1, then the point (cos(2πu), sin(2πu)) is uniformly distributed on the unit circumference x^2 + y^2 = 1, and multiplying that point by an independent random variable ρ whose distribution is Pr(ρ ≤ r) = 1 − e^{−r^2/2} will produce a point whose coordinates are jointly distributed as two independent standard normal random variables. History This idea dates back to Laplace, whom Gauss credits with finding the above by taking the square root of \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy = 2\pi. The transformation to polar coordinates makes evident that θ is uniformly distributed (constant density) from 0 to 2π, and that the radial distance r has density r e^{−r^2/2} (r^2 has the appropriate chi square distribution.) This method of producing a pair of independent standard normal variates by radially projecting a random point on the unit circumference to a distance given by the square root of a chi-square-2 variate is called the polar method for generating a pair of normal random variables. Practical considerations A direct application of this idea is called the Box–Muller transform, in which the chi variate is usually generated as \sqrt{-2\ln(x_1)}, but that transform requires logarithm, square root, sine and cosine functions. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based m
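A direct Python implementation of the rejection loop described above (a straightforward sketch using only the standard library):

    import random
    import math

    def marsaglia_polar():
        """Return a pair of independent standard normal variates."""
        while True:
            x = random.uniform(-1.0, 1.0)
            y = random.uniform(-1.0, 1.0)
            s = x * x + y * y
            if 0.0 < s < 1.0:                       # accept points inside the unit disk
                factor = math.sqrt(-2.0 * math.log(s) / s)
                return x * factor, y * factor

    print(marsaglia_polar())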
https://en.wikipedia.org/wiki/HMMT
HMMT is an annual high school math competition that started in 1998. The location of the tournament, in Cambridge, Massachusetts, alternates between Harvard University (November tournament) and MIT (February tournament). The contest is written and staffed almost entirely by Harvard and MIT students. Naming HMMT was initially started as the Harvard-MIT Mathematics Tournament in 1998, and is frequently still referred to as such by much of the math community. In 2019, HMMT rebranded as just HMMT to meet requirements set forth by Harvard and MIT, making it an orphan initialism. Tournament format HMMT February is attended by teams of eight students each. Teams can represent a single school, or a regional math team as large as a state. In recent years, teams have represented over 20 states, as well as Africa, Asia, Europe, and South America. HMMT February consists of three rounds: the Individual Round, the Team Round, and the Guts Round. No calculator or computational aids of any kind are allowed during the contest. Individual Round The Individual Round comprises the Algebra, Geometry, and Combinatorics exams for February, and the General and Theme exams for November. Each of the exams is 50 minutes in length and contains 10 questions. The exams are short-answer, meaning that the answers can be any real number or even an algebraic expression. Before 2012, competitors had the option to choose between a General exam or two exams in Algebra, Geometry, Combinatorics, or Calculus for the February tournament. Team Round For the February Team Round, the eight-person teams compete together on a 60-minute-long test. The Team Round is a collaborative event with proof-style problems, sometimes arranged into groups of several problems on the same theme. Thorough justifications are required for full credit. The Team Round is worth a total of 400 points, and problems are weighted according to difficulty. The event is similar to an ARML Power Round, but the problems are muc
https://en.wikipedia.org/wiki/A%20Foggy%20Day
"A Foggy Day" is a popular song composed by George Gershwin, with lyrics by Ira Gershwin. The song was introduced by Fred Astaire in the 1937 film A Damsel in Distress. It was originally titled "A Foggy Day (In London Town)" in reference to the pollution-induced pea soup fogs that were common in London during that period, and is often still referred to by the full title. The commercial recording by Astaire for Brunswick was very popular in 1937. Other recordings Frank Sinatra – Songs for Young Lovers (1953) Ella Fitzgerald on her Ella Fitzgerald Sings the George and Ira Gershwin Song Book from Verve Records, 1959. Charles Mingus – Pithecanthropus Erectus (1956) Louis Armstrong with Ella Fitzgerald – Ella and Louis (1956) Billie Holiday – Songs for Distingué Lovers (1957) Red Garland – Red Garland at the Prelude (1959) Frank Sinatra — Ring-a-Ding-Ding! (1961) Judy Garland — Judy at Carnegie Hall (1961) George Benson – It's Uptown (1965) Sarah Vaughan – Live in Japan (1973) Lyn Collins – Check Me Out if You Don't Know Me By Now (1975) Wynton Marsalis – Marsalis Standard Time, Volume 1 (1987) Tony Bennett – MTV Unplugged (1994) David Bowie – Red Hot + Rhapsody: The Gershwin Groove (1998) Michael Bublé – It's Time (2005) Willie Nelson – My Way (2018) See also List of 1930s jazz standards
https://en.wikipedia.org/wiki/Decision%20matrix
A decision matrix is a list of values in rows and columns that allows an analyst to systematically identify, analyze, and rate the performance of relationships between sets of values and information. Elements of a decision matrix show decisions based on certain decision criteria. The matrix is useful for looking at large masses of decision factors and assessing each factor's relative significance by weighting them by importance. Multiple-criteria decision analysis The term decision matrix is used to describe a multiple-criteria decision analysis (MCDA) problem. An MCDA problem, where there are M alternative options and each needs to be assessed on N criteria, can be described by the decision matrix which has N rows and M columns, or M × N elements, as shown in the following table. Each element, such as Xij, is either a single numerical value or a single grade, representing the performance of alternative i on criterion j. For example, if alternative i is "car i", criterion j is "engine quality" assessed by five grades {Exceptional, Good, Average, Below Average, Poor}, and "Car i" is assessed to be "Good" on "engine quality", then Xij = "Good". These assessments may be replaced by scores, from 1 to 5. Sums of scores may then be compared and ranked, to show the winning proposal. {| class="wikitable" |+ Example of Comparison |- ! ! Alternative 1 ! Alternative 2 ! ... ! Alternative M |- |Criterion 1 | x11 | x12 | ... | x1M |- |Criterion 2 | x21 | x22 | ... | x2M |- |... |... |... |Xij = Good |... |- |Criterion N | xN1 | xN2 | ... | xNM |- ! ! ! ! ! |- ! Sum | | | | |- ! Rank | | | | |- ! Status | | No | | No |} Belief decision matrix Similar to a decision matrix, a belief decision matrix is used to describe a multiple criteria decision analysis (MCDA) problem in the Evidential Reasoning Approach. Instead of being a single numerical value or a single grade as in a decision matrix, each element in a belief decision matrix is a belief distribution. For example, s
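As an illustration of replacing grades by scores and ranking alternatives by weighted sums, the following Python sketch uses hypothetical alternatives, criteria, weights and a grade-to-score mapping (none of these values come from the article):

    # Weighted-sum scoring of a small decision matrix (illustrative values only).
    grade_score = {"Exceptional": 5, "Good": 4, "Average": 3, "Below Average": 2, "Poor": 1}
    weights = {"engine quality": 0.5, "price": 0.3, "safety": 0.2}
    matrix = {
        "car 1": {"engine quality": "Good", "price": "Average", "safety": "Exceptional"},
        "car 2": {"engine quality": "Exceptional", "price": "Poor", "safety": "Good"},
    }

    totals = {
        alt: sum(weights[c] * grade_score[g] for c, g in grades.items())
        for alt, grades in matrix.items()
    }
    # Rank the alternatives from best to worst by total weighted score.
    for alt, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(alt, round(total, 2))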
https://en.wikipedia.org/wiki/Digital%20polymerase%20chain%20reaction
Digital polymerase chain reaction (digital PCR, DigitalPCR, dPCR, or dePCR) is a biotechnological refinement of conventional polymerase chain reaction methods that can be used to directly quantify and clonally amplify nucleic acids strands including DNA, cDNA, or RNA. The key difference between dPCR and traditional PCR lies in the method of measuring nucleic acids amounts, with the former being a more precise method than PCR, though also more prone to error in the hands of inexperienced users. A "digital" measurement quantitatively and discretely measures a certain variable, whereas an “analog” measurement extrapolates certain measurements based on measured patterns. PCR carries out one reaction per single sample. dPCR also carries out a single reaction within a sample, however the sample is separated into a large number of partitions and the reaction is carried out in each partition individually. This separation allows a more reliable collection and sensitive measurement of nucleic acid amounts. The method has been demonstrated as useful for studying variations in gene sequences — such as copy number variants and point mutations — and it is routinely used for clonal amplification of samples for next-generation sequencing. Principles The polymerase chain reaction method is used to quantify nucleic acids by amplifying a nucleic acid molecule with the enzyme DNA polymerase. Conventional PCR is based on the theory that amplification is exponential. Therefore, nucleic acids may be quantified by comparing the number of amplification cycles and amount of PCR end-product to those of a reference sample. However, many factors complicate this calculation, creating uncertainties and inaccuracies. These factors include the following: initial amplification cycles may not be exponential; PCR amplification eventually plateaus after an uncertain number of cycles; and low initial concentrations of target nucleic acid molecules may not amplify to detectable levels. However, the mo
https://en.wikipedia.org/wiki/Bottle%20dynamo
A bottle dynamo or sidewall dynamo is a small electrical generator for bicycles employed to power a bicycle's lights. The traditional bottle dynamo (pictured) is not actually a dynamo at all (which creates DC power), but a low-power magneto that generates AC. Newer models can include a rectifier to create DC output to charge batteries for electronic devices including cellphones or GPS receivers. Named after their resemblance to bottles, these generators are also called sidewall dynamos because they operate using a roller placed on the sidewall of a bicycle tire. When the bicycle is in motion and the dynamo roller is engaged, electricity is generated as the tire spins the roller. Two other dynamo systems used on bicycles are hub dynamos and bottom bracket dynamos. Advantages over hub dynamos No extra resistance when disengaged When engaged, a dynamo requires the bicycle rider to exert more effort to maintain a given speed than would otherwise be necessary when the dynamo is not present or disengaged. Bottle dynamos can be completely disengaged when they are not in use, whereas a hub dynamo will always have added drag (though it may be so low as to be irrelevant or unnoticeable to the rider, and it is reduced significantly when lights are not being powered by the hub). Easy retrofitting A bottle dynamo may be more feasible than a hub dynamo to add to an existing bicycle, as it does not require a replacement or rebuilt wheel. Price A bottle dynamo is generally cheaper than a hub dynamo, but not always. Disadvantages over hub dynamos Slippage In wet conditions, the roller on a bottle dynamo can slip against the surface of a tire, which interrupts or reduces the amount of electricity generated. This can cause the lights to go out completely or intermittently. Hub dynamos do not need traction and are sealed from the elements. Increased resistance Bottle dynamos typically create more drag than hub dynamos. However, when they are properly adjusted, the drag may be
https://en.wikipedia.org/wiki/Prandial
Prandial relates to a meal. Postprandial (from post prandium) means after eating a meal, while preprandial is before a meal. Usages of postprandial The term postprandial is used in many contexts. Gastronomic or social Refers to activities performed after a meal, such as drinking cocktails or smoking. Medical A common use is in relation to blood sugar (or blood glucose) levels, which are normally measured 2 hours after and before eating in a postprandial glucose test. This is because blood glucose levels usually rise after a meal. The American Diabetes Association recommends a postprandial glucose level under 180 mg/dl and a preprandial plasma glucose between 70 and 130 mg/dl. Other uses of postprandial include: Postprandial dip is a mild decrease in blood sugar after eating a big meal. Postprandial hyperglycemia (PPHG) is high blood sugar following a meal. It can be evaluated in a postprandial glucose test. Postprandial hypotension is a drastic decline in blood pressure which happens after eating a meal. Postprandial regurgitation is a unique symptom of rumination syndrome. Postprandial thermogenesis is heat production due to metabolism after a meal, temporarily increasing the metabolic rate. Postprandial abdominal distension usually refers to bloating of the abdomen following a meal, especially a large one. It is generally harmless, but tends to be uncomfortable. Instances of its sudden onset or prolonged duration can, however, be symptoms of certain severe adverse gastro-intestinal conditions such as irritable bowel disease.Postprandial abdominal distension is also a documented side effect of some medications. Processes In the postprandium, there is digestion of food in the gastrointestinal tract, followed by uptake and various metabolic processes, mainly anabolic ones (building organic molecules from smaller units). The postprandium is characterized by an increased activity of the parasympathetic nervous system, putting the body in a state of "rest a
https://en.wikipedia.org/wiki/Dimension%20function
In mathematics, the notion of an (exact) dimension function (also known as a gauge function) is a tool in the study of fractals and other subsets of metric spaces. Dimension functions are a generalisation of the simple "diameter to the dimension" power law used in the construction of s-dimensional Hausdorff measure. Motivation: s-dimensional Hausdorff measure Consider a metric space (X, d) and a subset E of X. Given a number s ≥ 0, the s-dimensional Hausdorff measure of E, denoted μs(E), is defined by where μδs(E) can be thought of as an approximation to the "true" s-dimensional area/volume of E given by calculating the minimal s-dimensional area/volume of a covering of E by sets of diameter at most δ. As a function of increasing s, μs(E) is non-increasing. In fact, for all values of s, except possibly one, Hs(E) is either 0 or +∞; this exceptional value is called the Hausdorff dimension of E, here denoted dimH(E). Intuitively speaking, μs(E) = +∞ for s < dimH(E) for the same reason as the 1-dimensional linear length of a 2-dimensional disc in the Euclidean plane is +∞; likewise, μs(E) = 0 for s > dimH(E) for the same reason as the 3-dimensional volume of a disc in the Euclidean plane is zero. The idea of a dimension function is to use different functions of diameter than just diam(C)s for some s, and to look for the same property of the Hausdorff measure being finite and non-zero. Definition Let (X, d) be a metric space and E ⊆ X. Let h : [0, +∞) → [0, +∞] be a function. Define μh(E) by where Then h is called an (exact) dimension function (or gauge function) for E if μh(E) is finite and strictly positive. There are many conventions as to the properties that h should have: Rogers (1998), for example, requires that h should be monotonically increasing for t ≥ 0, strictly positive for t > 0, and continuous on the right for all t ≥ 0. Packing dimension Packing dimension is constructed in a very similar way to Hausdorff dimension, except that one "packs" E f
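The displayed formulas lost from the passage above are standardly written as follows; this is a reconstruction consistent with the surrounding definitions, not a quotation of the original article:

\[
\mu^s_\delta(E) = \inf\Bigl\{\, \sum_{i=1}^{\infty} (\operatorname{diam} C_i)^s : E \subseteq \bigcup_{i=1}^{\infty} C_i,\ \operatorname{diam} C_i \le \delta \,\Bigr\}, \qquad \mu^s(E) = \lim_{\delta \to 0^{+}} \mu^s_\delta(E),
\]

and, for a candidate dimension function h,

\[
\mu^h_\delta(E) = \inf\Bigl\{\, \sum_{i=1}^{\infty} h(\operatorname{diam} C_i) : E \subseteq \bigcup_{i=1}^{\infty} C_i,\ \operatorname{diam} C_i \le \delta \,\Bigr\}, \qquad \mu^h(E) = \lim_{\delta \to 0^{+}} \mu^h_\delta(E).
\]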
https://en.wikipedia.org/wiki/Extensively%20drug-resistant%20tuberculosis
Extensively drug-resistant tuberculosis (XDR-TB) is a form of tuberculosis caused by bacteria that are resistant to some of the most effective anti-TB drugs. XDR-TB strains have arisen after the mismanagement of individuals with multidrug-resistant TB (MDR-TB). Almost one in four people in the world is infected with TB bacteria. Only when the bacteria become active do people become ill with TB. Bacteria become active as a result of anything that can reduce the person's immunity, such as HIV, advancing age, or some medical conditions. TB can usually be treated with a course of four standard, or first-line, anti-TB drugs (i.e., isoniazid, rifampin and any fluoroquinolone). If these drugs are misused or mismanaged, multidrug-resistant TB (MDR-TB) can develop. MDR-TB takes longer to treat with second-line drugs (i.e., amikacin, kanamycin, or capreomycin), which are more expensive and have more side-effects. XDR-TB can develop when these second-line drugs are also misused or mismanaged and become ineffective. The World Health Organization (WHO) defines XDR-TB as MDR-TB that is resistant to at least one fluoroquinolone and a second-line injectable drug (amikacin, capreomycin, or kanamycin). XDR-TB raises concerns of a future TB epidemic with restricted treatment options, and jeopardizes the major gains made in TB control and progress on reducing TB deaths among people living with HIV/AIDS. It is therefore vital that TB control be managed properly and new tools developed to prevent, treat and diagnose the disease. The true scale of XDR-TB is unknown as many countries lack the necessary equipment and capacity to accurately diagnose it. By June 2008, 49 countries had confirmed cases of XDR-TB. By the end of 2017, 127 WHO Member States reported a total of 10,800 cases of XDR-TB, and 8.5% of cases of MDR-TB in 2017 were estimated to have been XDR-TB. In August 2019, the Food and Drug Administration (FDA) approved the use of pretomanid in combination with bedaquiline and li
https://en.wikipedia.org/wiki/Multidrug-resistant%20tuberculosis
Multidrug-resistant tuberculosis (MDR-TB) is a form of tuberculosis (TB) infection caused by bacteria that are resistant to treatment with at least two of the most powerful first-line anti-TB medications (drugs): isoniazid and rifampin. Some forms of TB are also resistant to second-line medications, and are called extensively drug-resistant TB (XDR-TB). Tuberculosis is caused by infection with the bacterium Mycobacterium tuberculosis. Almost one in four people in the world are infected with TB bacteria. Only when the bacteria become active do people become ill with TB. Bacteria become active as a result of anything that can reduce the person's immunity, such as HIV, advancing age, diabetes or other immunocompromising illnesses. TB can usually be treated with a course of four standard, or first-line, anti-TB drugs (i.e., isoniazid, rifampin, pyrazinamide and ethambutol). However, beginning with the first antibiotic treatment for TB in 1943, some strains of the TB bacteria developed resistance to the standard drugs through genetic changes (see mechanisms.) Currently the majority of multidrug-resistant cases of TB are due to one strain of TB bacteria called the Beijing lineage. This process accelerates if incorrect or inadequate treatments are used, leading to the development and spread of multidrug-resistant TB (MDR-TB). Incorrect or inadequate treatment may be due to use of the wrong medications, use of only one medication (standard treatment is at least two drugs), or not taking medication consistently or for the full treatment period (treatment is required for several months). Treatment of MDR-TB requires second-line drugs (i.e., fluoroquinolones, aminoglycosides, and others), which in general are less effective, more toxic and much more expensive than first-line drugs. Treatment schedules for MDR-TB involving fluoroquinolones and aminoglycosides can run for two years, compared to the six months of first-line drug treatment, and cost over US$100,000. If these sec
https://en.wikipedia.org/wiki/Rensch%27s%20rule
Rensch's rule is a biological rule on allometrics, concerning the relationship between the extent of sexual size dimorphism and which sex is larger. Across species within a lineage, size dimorphism increases with increasing body size when the male is the larger sex, and decreases with increasing average body size when the female is the larger sex. The rule was proposed by the evolutionary biologist Bernhard Rensch in 1950. After controlling for confounding factors such as evolutionary history, an increase in average body size makes the difference in body size larger if the species has larger males, and smaller if it has larger females. Some studies propose that this is due to sexual bimaturism, which causes male traits to diverge faster and develop for a longer period of time. The correlation between sexual size dimorphism and body size is hypothesized to be a result of an increase in male-male competition in larger species, a result of limited environmental resources, fuelling aggression between males over access to breeding territories and mating partners. Phylogenetic lineages that appear to follow this rule include primates, pinnipeds, and artiodactyls. This rule has rarely been tested on parasites. A 2019 study showed that ectoparasitic philopterid and menoponid lice comply with it, while ricinid lice exhibit a reversed pattern.
https://en.wikipedia.org/wiki/Four-phase%20logic
Four-phase logic is a type of, and design methodology for dynamic logic. It enabled non-specialist engineers to design quite complex ICs, using either PMOS or NMOS processes. It uses a kind of 4-phase clock signal. History R. K. "Bob" Booher, an engineer at Autonetics, invented four-phase logic and communicated the idea to Frank Wanlass at Fairchild Semiconductor; Wanlass promoted this logic form at General Instrument Microelectronics Division. Booher made the first working four-phase chip, the Autonetics DDA integrator, during February 1966; he later designed several chips for and built the Autonetics D200 airborne computer using this technique. In April 1967, Joel Karp and Elizabeth de Atley published an article, "Use four-phase MOS IC logic" in Electronic Design magazine. In the same year, Cohen, Rubenstein, and Wanlass published "MTOS four phase clock systems." Wanlass had been director of research and engineering at General Instrument Microelectronics Division in New York since leaving Fairchild Semiconductor in 1964. Lee Boysel, a disciple of Wanlass and a designer at Fairchild Semiconductor, and later founder of Four-Phase Systems, gave a "late news" talk on a four-phase 8-bit adder device in October 1967 at the International Electron Devices meeting. J. L. Seely, manager of MOS Operations at General Instrument Microelectronics Division, also wrote about four-phase logic in late 1967. In 1968 Boysel published an article "Adder on a Chip: LSI Helps Reduce Cost of Small Machine" in Electronics magazine; Four-phase papers from Y. T. Yen also appeared that year. Other papers followed shortly. Boysel recalls that four-phase dynamic logic allowed him to achieve 10X the packing density, 10X the speed, and 1/10 the power compared to other MOS techniques being used at the time (metal-gate saturated-load PMOS logic), using the first-generation MOS process at Fairchild. Structure There are two types of logic gates – a '1' gate and a '3' gate. These differ onl
https://en.wikipedia.org/wiki/Focal%20fatty%20liver
Focal fatty liver (FFL) is a localised or patchy process of lipid accumulation in the liver. It is likely to have a different pathogenesis from non-alcoholic steatohepatitis, which is a diffuse process. FFL may result from altered venous flow to the liver, tissue hypoxia and malabsorption of lipoproteins. The condition has been increasingly recognised as the sensitivity of abdominal imaging studies continues to improve. A fine needle biopsy is often performed to differentiate it from malignancy.
https://en.wikipedia.org/wiki/Extraterrestrial%20liquid%20water
Extraterrestrial liquid water () is water in its liquid state that naturally occurs outside Earth. It is a subject of wide interest because it is recognized as one of the key prerequisites for life as we know it and thus surmised as essential for extraterrestrial life. Although many celestial bodies in the Solar System have a hydrosphere, Earth is the only celestial body known to have stable bodies of liquid water on its surface, with oceanic water covering 71% of its surface, which is essential to life on Earth. The presence of liquid water is maintained by Earth's atmospheric pressure and stable orbit in the Sun's circumstellar habitable zone, however, the origin of Earth's water remains uncertain. The main methods currently used for confirmation are absorption spectroscopy and geochemistry. These techniques have proven effective for atmospheric water vapour and ice. However, using current methods of astronomical spectroscopy it is substantially more difficult to detect liquid water on terrestrial planets, especially in the case of subsurface water. Due to this, astronomers, astrobiologists and planetary scientists use habitable zone, gravitational and tidal theory, models of planetary differentiation and radiometry to determine the potential for liquid water. Water observed in volcanic activity can provide more compelling indirect evidence, as can fluvial features and the presence of antifreeze agents, such as salts or ammonia. Using such methods, many scientists infer that liquid water once covered large areas of Mars and Venus. Water is thought to exist as liquid beneath the surface of some planetary bodies, similar to groundwater on Earth. Water vapour is sometimes considered conclusive evidence for the presence of liquid water, although atmospheric water vapour may be found to exist in many places where liquid water does not. Similar indirect evidence, however, supports the existence of liquids below the surface of several moons and dwarf planets elsewhere
https://en.wikipedia.org/wiki/Acoustic%20model
An acoustic model is used in automatic speech recognition to represent the relationship between an audio signal and the phonemes or other linguistic units that make up speech. The model is learned from a set of audio recordings and their corresponding transcripts. It is created by taking audio recordings of speech, and their text transcriptions, and using software to create statistical representations of the sounds that make up each word. Background Modern speech recognition systems use both an acoustic model and a language model to represent the statistical properties of speech. The acoustic model models the relationship between the audio signal and the phonetic units in the language. The language model is responsible for modeling the word sequences in the language. These two models are combined to get the top-ranked word sequences corresponding to a given audio segment. Most modern speech recognition systems operate on the audio in small chunks known as frames with an approximate duration of 10ms per frame. The raw audio signal from each frame can be transformed by applying the mel-frequency cepstrum. The coefficients from this transformation are commonly known as mel frequency cepstral coefficients (MFCC)s and are used as an input to the acoustic model along with other features. Recently, the use of Convolutional Neural Networks has led to big improvements in acoustic modeling. Speech audio characteristics Audio can be encoded at different sampling rates (i.e. samples per second – the most common being: 8, 16, 32, 44.1, 48, and 96 kHz), and different bits per sample (the most common being: 8-bits, 16-bits, 24-bits or 32-bits). Speech recognition engines work best if the acoustic model they use was trained with speech audio which was recorded at the same sampling rate/bits per sample as the speech being recognized. Telephony-based speech recognition The limiting factor for telephony based speech recognition is the bandwidth at which speech can be transm
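A minimal sketch of the framing and MFCC extraction step described above, assuming the third-party librosa library and a hypothetical input file utterance.wav:

    import librosa

    # Load the audio at a 16 kHz sampling rate (a common choice for acoustic models).
    y, sr = librosa.load("utterance.wav", sr=16000)

    hop = int(0.010 * sr)    # ~10 ms frame step, as described above
    win = int(0.025 * sr)    # ~25 ms analysis window per frame

    # 13 mel-frequency cepstral coefficients per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=512, hop_length=hop, win_length=win)
    print(mfcc.shape)        # (13 coefficients, number of frames)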
https://en.wikipedia.org/wiki/Speech%20corpus
A speech corpus (or spoken corpus) is a database of speech audio files and text transcriptions. In speech technology, speech corpora are used, among other things, to create acoustic models (which can then be used with a speech recognition or speaker identification engine). In linguistics, spoken corpora are used to do research into phonetic, conversation analysis, dialectology and other fields. A corpus is one such database. Corpora is the plural of corpus (i.e. it is many such databases). There are two types of Speech Corpora: Read Speech – which includes: Book excerpts Broadcast news Lists of words Sequences of numbers Spontaneous Speech – which includes: Dialogs – between two or more people (includes meetings; one such corpus is the KEC); Narratives – a person telling a story (one such corpus is the Buckeye Corpus); Map-tasks – one person explains a route on a map to another; Appointment-tasks – two people try to find a common meeting time based on individual schedules. A special kind of speech corpora are non-native speech databases that contain speech with foreign accent. See also Arabic Speech Corpus Common Voice EXMARaLDA Lingua Libre, an online libre tool List of children's speech corpora Non-native speech database Praat Spoken English Corpus The BABEL Speech Corpus TIMIT Transcriber Transcription (linguistics)
https://en.wikipedia.org/wiki/Domino%20logic
Domino logic is a CMOS-based evolution of the dynamic logic techniques based on either PMOS or NMOS transistors. It allows a rail-to-rail logic swing. It was developed to speed up circuits: it avoids the premature-discharge problem that arises when dynamic gates are cascaded directly by inserting a static inverter between stages, so that each stage presents a monotonically rising input to the next. Terminology The term derives from the fact that in domino logic (a cascade structure consisting of several stages), each stage ripples the next stage for evaluation, similar to dominoes falling one after the other. Dynamic logic drawbacks In dynamic logic, a problem arises when cascading one gate to the next. The precharge "1" state of the first gate may cause the second gate to discharge prematurely, before the first gate has reached its correct state. This uses up the "precharge" of the second gate, which cannot be restored until the next clock cycle, so there is no recovery from this error. In order to cascade dynamic logic gates, one solution is domino logic, which inserts an ordinary static inverter between stages. While this might seem to defeat the point of dynamic logic, since the inverter has a pFET (one of the main goals of dynamic logic is to avoid pFETs where possible, due to speed), there are two reasons it works well. First, there is no fan-out to multiple pFETs; the dynamic gate connects to exactly one inverter, so the gate is still very fast. Furthermore, since the inverter connects to only nFETs in dynamic logic gates, it too is very fast. Second, the pFET in an inverter can be made smaller than in some types of logic gates. In a domino logic cascade structure of several stages, the evaluation of each stage ripples the next stage evaluation, similar to dominoes falling one after the other. Once fallen, the node states cannot return to "1" (until the next clock cycle) just as dominoes, once fallen, cannot s
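A toy behavioural model of a single domino stage (dynamic AND gate plus static inverter) may make the precharge/evaluate behaviour concrete. This Python sketch is purely illustrative and not a transistor-level description:

    def domino_and(clk, a, b, state):
        # One domino stage: the dynamic node lives in `state`; the output is taken
        # after the static inverter, so it can only rise during evaluation.
        if clk == 0:
            state["node"] = 1        # precharge phase: dynamic node pulled high
        elif a and b:
            state["node"] = 0        # evaluate phase: pull-down network discharges the node
        return 1 - state["node"]     # static inverter

    state = {"node": 1}
    print(domino_and(0, 1, 1, state))  # precharge: output is 0
    print(domino_and(1, 1, 1, state))  # evaluate with a = b = 1: output rises to 1
    print(domino_and(1, 0, 1, state))  # node stays discharged until the next precharge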
https://en.wikipedia.org/wiki/Bismuth%20sulfite%20agar
Bismuth sulfite agar is a type of agar medium used to isolate Salmonella species. It uses glucose as a primary source of carbon. BLBG and bismuth stop gram-positive growth. Bismuth sulfite agar tests the ability to use ferrous sulfate and convert it to hydrogen sulfide. Bismuth sulfite agar typically contains (w/v): 1.6% bismuth sulfite Bi2(SO3)3 1.0% pancreatic digest of casein 1.0% pancreatic digest of animal tissue 1.0% beef extract 1.0% glucose 0.8% dibasic sodium phosphate 0.06% ferrous sulfate heptahydrate (FeSO4·7H2O) pH adjusted to 7.7 at 25 °C This medium is boiled for sterility, not autoclaved.
https://en.wikipedia.org/wiki/Hoyle%27s%20agar
Hoyle's agar is a selective medium that uses tellurite to differentially select Corynebacterium diphtheriae from other upper respiratory tract flora. The medium appears cream to yellow colored, and takes the form of a free-floating powder. It is a modification of Neill's medium. Hoyle's tellurite agar contains: The medium inhibits growth of Gram-negative bacteria and many Gram-positive bacteria, and reduction of the tellurite is characteristic of corynebacteria (though not entirely exclusive to them.) Microscopic examination of samples of suspected colonies, using Neisser differential staining, is required for confirmation.
https://en.wikipedia.org/wiki/Ipomoea%20indica
Ipomoea indica is a species of flowering plant in the family Convolvulaceae, known by several common names, including blue morning glory, oceanblue morning glory, koali awa, and blue dawn flower. It bears heart-shaped or 3-lobed leaves and purple or blue funnel-shaped flowers in diameter, from spring to autumn. The flowers produced by the plant are hermaphroditic. This plant has gained the Royal Horticultural Society's Award of Garden Merit. The plant is grown as an ornamental for its attractive flowers, but it is invasive in many regions of the world and is specifically listed on New Zealand's Biosecurity Act 1993. Etymology The Latin specific epithet indica means from India, or the East Indies or China. In this case, the name likely refers to the West Indies, as I. indica is native to the New World. Description Ipomoea indica is a vigorous, long-lived, tender, perennial plant, a vine which is native to tropical, subtropical and warm temperate habitats throughout the world. They can most commonly be found in disturbed forests, forest edges, secondary woodland, suburban gullies, and along roadsides and waterways. The plant climbs well over other plants, walls and slopes as growing on the bottom. Its climbing habit allows it to compete with trees and shrubs successfully. It is a twisting, occasionally lying, herbaceous plant which is more or less densely hairy on the axial parts with backward-looking trichomes. The stems can grow long and sometimes have roots at the nodes. The leaves are petiolate with long petioles. The leaf blade is egg-shaped or round, long and wide. The underside is densely hairy with short, soft trichomes, the top is more or less sparsely hairy. The base is heart-shaped, the leaf margin is entire or three-lobed, the tip is pointed or sharply pointed. The crown is funnel-shaped, long, glabrous, bright blue or bluish purple, with age they become reddish purple or red. The centre of the crown is a little paler. I. indica is a long-liv
https://en.wikipedia.org/wiki/Raising%20and%20lowering%20indices
In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions. Vectors, covectors and the metric Mathematical formulation Mathematically vectors are elements of a vector space over a field , and for use in physics is usually defined with or . Concretely, if the dimension of is finite, then, after making a choice of basis, we can view such vector spaces as or . The dual space is the space of linear functionals mapping . Concretely, in matrix notation these can be thought of as row vectors, which give a number when applied to column vectors. We denote this by , so that is a linear map . Then under a choice of basis , we can view vectors as an vector with components (vectors are taken by convention to have indices up). This picks out a choice of basis for , defined by the set of relations . For applications, raising and lowering is done using a structure known as the (pseudo-)metric tensor (the 'pseudo-' refers to the fact we allow the metric to be indefinite). Formally, this is a non-degenerate, symmetric bilinear form In this basis, it has components , and can be viewed as a symmetric matrix in with these components. The inverse metric exists due to non-degeneracy and is denoted , and as a matrix is the inverse to . Raising and lowering vectors and covectors Raising and lowering is then done in coordinates. Given a vector with components , we can contract with the metric to obtain a covector: and this is what we mean by lowering the index. Conversely, contracting a covector with the inverse metric gives a vector: This process is called raising the index. Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the metric and inverse metric tensors being inverse to each other (as is suggested by the terminology): where is the Kronecker delta or identity
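A small NumPy sketch of lowering and raising an index by contracting with a metric and its inverse; the diagonal metric chosen here is purely illustrative:

    import numpy as np

    g = np.diag([-1.0, 1.0, 1.0, 1.0])            # metric components g_ab
    g_inv = np.linalg.inv(g)                       # inverse metric g^ab

    v_up = np.array([2.0, 1.0, 0.0, 3.0])          # vector components v^a
    v_down = np.einsum("ab,b->a", g, v_up)         # lowering: v_a = g_ab v^b
    v_back = np.einsum("ab,b->a", g_inv, v_down)   # raising: v^a = g^ab v_b

    # Raising after lowering recovers the original components (g^ab g_bc = delta^a_c).
    assert np.allclose(v_back, v_up)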
https://en.wikipedia.org/wiki/Surface%20probability
In immunology, surface probability is the amount of reflection of an antigen's secondary or tertiary structure to the outside of the molecule. A greater surface probability means that an antigen is more likely to be immunogenic (i.e. induce the formation of antibodies).
https://en.wikipedia.org/wiki/Positive%20current
In mathematics, more particularly in complex geometry, algebraic geometry and complex analysis, a positive current is a positive (n-p,n-p)-form over an n-dimensional complex manifold, taking values in distributions. For a formal definition, consider a manifold M. Currents on M are (by definition) differential forms with coefficients in distributions; integrating over M, we may consider currents as "currents of integration", that is, functionals on smooth forms with compact support. This way, currents are considered as elements in the dual space to the space of forms with compact support. Now, let M be a complex manifold. The Hodge decomposition is defined on currents, in a natural way, the (p,q)-currents being functionals on . A positive current is defined as a real current of Hodge type (p,p), taking non-negative values on all positive (p,p)-forms. Characterization of Kähler manifolds Using the Hahn–Banach theorem, Harvey and Lawson proved the following criterion of existence of Kähler metrics. Theorem: Let M be a compact complex manifold. Then M does not admit a Kähler structure if and only if M admits a non-zero positive (1,1)-current which is a (1,1)-part of an exact 2-current. Note that the de Rham differential maps 3-currents to 2-currents, hence is a differential of a 3-current; if is a current of integration of a complex curve, this means that this curve is a (1,1)-part of a boundary. When M admits a surjective map to a Kähler manifold with 1-dimensional fibers, this theorem leads to the following result of complex algebraic geometry. Corollary: In this situation, M is non-Kähler if and only if the homology class of a generic fiber of is a (1,1)-part of a boundary. Notes
https://en.wikipedia.org/wiki/Hermann%20Haken
Hermann Haken (born 12 July 1927) is a physicist and professor emeritus in theoretical physics at the University of Stuttgart. He is known as the founder of synergetics and one of the "fathers" of quantum-mechanical laser theory. He is a cousin of the mathematician Wolfgang Haken, who proved the Four color theorem. He is a nephew of Werner Haken, a doctoral student of Max Planck. Biography After his studies in mathematics and physics in Halle (Saale) and Erlangen, receiving his PhD in mathematics in 1951 at the University of Erlangen and being guest lecturer at universities in the UK and US, Haken was appointed as a full professor in theoretical physics at the University of Stuttgart. His research has been in nonlinear optics (his specialities are laser physics, particle physics, statistical physics and group theory). Haken developed his institute in a relatively short time into an international centre for laser theory, starting in 1960 when Theodore Maiman built the first experimental laser. The interpretation of the laser principles as self-organization of non-equilibrium systems paved the way at the end of the 1960s to the development of synergetics, of which Haken is recognized as the founder. Haken is the author of some 23 textbooks and monographs that cover an impressive number of topics from laser physics, atomic physics, quantum field theory, to synergetics. Although Haken's early books tend to be rather mathematical, at least one of his books, Light, is nicely written for the more general reader and loaded with physical insights. One of his successful popular books is Erfolgsgeheimnis der Natur, or in English, The Science of Structure: Synergetics. Haken also showed interest in Grey system theory. For his wide range of contributions, he received many international prizes or medals, including the Max Born Medal and Prize by the British Institute of Physics and the German Physical Society in 1976, Albert A. Michelson Medal of the Franklin Institute, Phi
https://en.wikipedia.org/wiki/Goormaghtigh%20conjecture
In mathematics, the Goormaghtigh conjecture is a conjecture in number theory named for the Belgian mathematician René Goormaghtigh. The conjecture is that the only non-trivial integer solutions of the exponential Diophantine equation satisfying and are and Partial results showed that, for each pair of fixed exponents and , this equation has only finitely many solutions. But this proof depends on Siegel's finiteness theorem, which is ineffective. showed that, if and with , , and , then is bounded by an effectively computable constant depending only on and . showed that for and odd , this equation has no solution other than the two solutions given above. Balasubramanian and Shorey proved in 1980 that there are only finitely many possible solutions to the equations with prime divisors of and lying in a given finite set and that they may be effectively computed. showed that, for each fixed and , this equation has at most one solution. For fixed x (or y), equation has at most 15 solutions, and at most two unless x is either odd prime power times a power of two, or in the finite set {15, 21, 30, 33, 35, 39, 45, 51, 65, 85, 143, 154, 713}, in which case there are at most three solutions. Furthermore, there is at most one solution if the odd part of n is squareful unless n has at most two distinct odd prime factors or n is in a finite set {315, 495, 525, 585, 630, 693, 735, 765, 855, 945, 1035, 1050, 1170, 1260, 1386, 1530, 1890, 1925, 1950, 1953, 2115, 2175, 2223, 2325, 2535, 2565, 2898, 2907, 3105, 3150, 3325, 3465, 3663, 3675, 4235, 5525, 5661, 6273, 8109, 17575, 39151}. Application to repunits The Goormaghtigh conjecture may be expressed as saying that 31 (111 in base 5, 11111 in base 2) and 8191 (111 in base 90, 1111111111111 in base 2) are the only two numbers that are repunits with at least 3 digits in two different bases. See also Feit–Thompson conjecture
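The equation and constraints whose displayed forms were lost above are, in the standard statement of the conjecture (reconstructed here, not quoted from the article):

\[
\frac{x^m - 1}{x - 1} = \frac{y^n - 1}{y - 1}, \qquad x > y > 1, \quad n > m > 2,
\]

with the only known non-trivial solutions

\[
(x, y, m, n) = (5, 2, 3, 5) \quad \text{and} \quad (x, y, m, n) = (90, 2, 3, 13),
\]

corresponding to the repunit values 31 and 8191 discussed in the application below.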
https://en.wikipedia.org/wiki/TIRAP
TIRAP (TIR domain containing adaptor protein) is an adapter molecule associated with toll-like receptors. The innate immune system recognizes microbial pathogens through Toll-like receptors (TLRs), which identify pathogen-associated molecular patterns. Different TLRs recognize different pathogen-associated molecular patterns and all TLRs have a Toll-interleukin 1 receptor (TIR) domain, which is responsible for signal transduction. The protein encoded by this gene is a TIR adaptor protein involved in the TLR4 signaling pathway of the immune system. It activates NF-kappa-B, MAPK1, MAPK3 and JNK, which then results in cytokine secretion and the inflammatory response. Alternative splicing of this gene results in several transcript variants; however, not all variants have been fully described. See also Myd88 External links
https://en.wikipedia.org/wiki/Busemann%27s%20theorem
In mathematics, Busemann's theorem is a theorem in Euclidean geometry and geometric tomography. It was first proved by Herbert Busemann in 1949 and was motivated by his theory of area in Finsler spaces. Statement of the theorem Let K be a convex body in n-dimensional Euclidean space Rn containing the origin in its interior. Let S be an (n − 2)-dimensional linear subspace of Rn. For each unit vector θ in S⊥, the orthogonal complement of S, let Sθ denote the (n − 1)-dimensional hyperplane containing θ and S. Define r(θ) to be the (n − 1)-dimensional volume of K ∩ Sθ. Let C be the curve {θr(θ)} in S⊥. Then C forms the boundary of a convex body in S⊥. See also Brunn–Minkowski inequality Prékopa–Leindler inequality
https://en.wikipedia.org/wiki/Vitale%27s%20random%20Brunn%E2%80%93Minkowski%20inequality
In mathematics, Vitale's random Brunn–Minkowski inequality is a theorem due to Richard Vitale that generalizes the classical Brunn–Minkowski inequality for compact subsets of n-dimensional Euclidean space Rn to random compact sets. Statement of the inequality Let X be a random compact set in Rn; that is, a Borel–measurable function from some probability space (Ω, Σ, Pr) to the space of non-empty, compact subsets of Rn equipped with the Hausdorff metric. A random vector V : Ω → Rn is called a selection of X if Pr(V ∈ X) = 1. If K is a non-empty, compact subset of Rn, let and define the set-valued expectation E[X] of X to be Note that E[X] is a subset of Rn. In this notation, Vitale's random Brunn–Minkowski inequality is that, for any random compact set X with , where "" denotes n-dimensional Lebesgue measure. Relationship to the Brunn–Minkowski inequality If X takes the values (non-empty, compact sets) K and L with probabilities 1 − λ and λ respectively, then Vitale's random Brunn–Minkowski inequality is simply the original Brunn–Minkowski inequality for compact sets.
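The displayed formulas lost from the statement above are, under the usual formulation of the result (a reconstruction, not a quotation of the article):

\[
\|K\| = \sup\{\, |v| : v \in K \,\}, \qquad \mathbb{E}[X] = \{\, \mathbb{E}[V] : V \text{ is a selection of } X \text{ with } \mathbb{E}|V| < \infty \,\},
\]

and, for a random compact set X with \mathbb{E}\|X\| < \infty,

\[
\lambda_n\bigl(\mathbb{E}[X]\bigr)^{1/n} \;\ge\; \mathbb{E}\bigl[\lambda_n(X)^{1/n}\bigr].
\]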
https://en.wikipedia.org/wiki/Synthetic%20genomics
Synthetic genomics is a nascent field of synthetic biology that uses aspects of genetic modification on pre-existing life forms, or artificial gene synthesis to create new DNA or entire lifeforms. Overview Synthetic genomics is unlike genetic modification in the sense that it does not use naturally occurring genes in its life forms. It may make use of custom designed base pair series, though in a more expanded and presently unrealized sense synthetic genomics could utilize genetic codes that are not composed of the two base pairs of DNA that are currently used by life. The development of synthetic genomics is related to certain recent technical abilities and technologies in the field of genetics. The ability to construct long base pair chains cheaply and accurately on a large scale has allowed researchers to perform experiments on genomes that do not exist in nature. Coupled with the developments in protein folding models and decreasing computational costs the field of synthetic genomics is beginning to enter a productive stage of vitality. History Researchers were able to create a synthetic organism for the first time in 2010. This breakthrough was undertaken by Synthetic Genomics, Inc., which continues to specialize in the research and commercialization of custom designed genomes. It was accomplished by synthesizing a 600 kbp genome (resembling that of Mycoplasma genitalium, save the insertion of a few watermarks) via the Gibson Assembly method and Transformation Associated Recombination. Recombinant DNA technology Soon after the discovery of restriction endonucleases and ligases, the field of genetics began using these molecular tools to assemble artificial sequences from smaller fragments of synthetic or naturally-occurring DNA. The advantage in using the recombinatory approach as opposed to continual DNA synthesis stems from the inverse relationship that exists between synthetic DNA length and percent purity of that synthetic length. In other words, as you
https://en.wikipedia.org/wiki/N%286%29-Carboxymethyllysine
N(6)-Carboxymethyllysine (CML), also known as Nε-(carboxymethyl)lysine, is an advanced glycation endproduct (AGE). CML has been the most used marker for AGEs in food analysis. Recently, it has been demonstrated that gut microbiota mediates an aging-associated decline in gut barrier function, allowing AGEs to leak into the bloodstream from the gut and impairing microglial function in the brain. It is suggested that the amount of CML in human blood samples may correlate with age. A humanized monoclonal antibody which binds to N(6)-carboxymethyllysine shows considerable promise as a possible therapeutic agent for treating pancreatic cancer.
https://en.wikipedia.org/wiki/Vertex%20%28anatomy%29
In arthropod and vertebrate anatomy, the vertex (or cranial vertex) is the highest point of the head. In humans, the vertex is formed by four bones of the skull: the frontal bone, the two parietal bones, and the occipital bone. These bones are connected by the coronal suture between the frontal and parietal bones, the sagittal suture between the two parietal bones, and the lambdoid suture between the parietal and occipital bones. Vertex baldness refers to a form of male pattern baldness in which the baldness is limited to the vertex, resembling a tonsure. In childbirth, vertex birth refers to the common head-first presentation of the baby, as opposed to the buttocks-first position of a breech birth. In entomology, the color and shape of an insect's vertex and the structures arising from it are commonly used in identifying species. See also Calvaria (skull) Crown (anatomy) Male pattern baldness
https://en.wikipedia.org/wiki/Biosensors%20International
Biosensors International Group is a medical device company that specializes in developing, manufacturing and licensing technologies for use in interventional cardiology procedures and critical care. The company was listed in the Mainboard of the Singapore Exchange (SGX) in May 2005. The global headquarters of the company are located in Singapore, where the main manufacturing facilities and R&D centers are hosted. The European headquarters are in Morges, Switzerland; this Swiss office is also the Legal Manufacturer of BioMatrix, the current leading product of the company. BioMatrix is a drug-eluting stent that utilizes proprietary technologies of Biosensors: a biodegradable Poly-Lactic Acid (PLA) polymer (PLA), which degrades into the naturally occurring lactic acid the Biolimus A9 drug, a highly lipophilic derivative of sirolimus the S-Stent stent platform an automated stent coating technology that directly deposits the coating onto the stent surface. Biosensors has obtained CE Mark for this drug-eluting stent product in January 2008. To enter the China market, Biosensors has set up a joint-venture with Hong Kong listed Shandong Weigao to market and distribute coronary stents in China. As one of the few companies with proprietary drug-eluting stent technology, Biosensors also has been obtaining revenue through licensing its technologies to other medical device companies like Terumo, and specialty-stent providers like Devax, Inc. and Xtent, Inc. See also Drug-eluting stent Biolimus A9 Notes External links Biosensors International Website Biosensors Health care companies of Singapore Companies listed on the Singapore Exchange Singaporean brands
https://en.wikipedia.org/wiki/H%20tree
In fractal geometry, the H tree is a fractal tree structure constructed from perpendicular line segments, each smaller by a factor of the square root of 2 from the next larger adjacent segment. It is so called because its repeating pattern resembles the letter "H". It has Hausdorff dimension 2, and comes arbitrarily close to every point in a rectangle. Its applications include VLSI design and microwave engineering. Construction An H tree can be constructed by starting with a line segment of arbitrary length, drawing two shorter segments at right angles to the first through its endpoints, and continuing in the same vein, reducing (dividing) the length of the line segments drawn at each stage by . A variant of this construction could also be defined in which the length at each iteration is multiplied by a ratio less than , but for this variant the resulting shape covers only part of its bounding rectangle, with a fractal boundary. An alternative process that generates the same fractal set is to begin with a rectangle with sides in the ratio , and repeatedly bisect it into two smaller silver rectangles, at each stage connecting the two centroids of the two smaller rectangles by a line segment. A similar process can be performed with rectangles of any other shape, but the rectangle leads to the line segment size decreasing uniformly by a factor at each step while for other rectangles the length will decrease by different factors at odd and even levels of the recursive construction. Properties The H tree is a self-similar fractal; its Hausdorff dimension is equal to 2. The points of the H tree come arbitrarily close to every point in a rectangle (the same as the starting rectangle in the constructing by centroids of subdivided rectangles). However, it does not include all points of the rectangle; for instance, the points on the perpendicular bisector of the initial line segment (other than the midpoint of this segment) are not included. Applications In VLSI design
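A recursive Python sketch of the construction described above: each segment spawns two perpendicular children through its endpoints, each shorter by a factor of the square root of 2 (function and variable names are my own):

    import math

    def h_tree(x, y, length, horizontal, depth, segments):
        # Append (x1, y1, x2, y2) segments of an H tree centred at (x, y).
        if depth == 0:
            return
        dx, dy = (length / 2, 0.0) if horizontal else (0.0, length / 2)
        x1, y1, x2, y2 = x - dx, y - dy, x + dx, y + dy
        segments.append((x1, y1, x2, y2))
        # The two children are perpendicular to the parent and pass through its endpoints.
        child = length / math.sqrt(2)
        h_tree(x1, y1, child, not horizontal, depth - 1, segments)
        h_tree(x2, y2, child, not horizontal, depth - 1, segments)

    segments = []
    h_tree(0.0, 0.0, 1.0, True, 6, segments)
    print(len(segments))   # 2**6 - 1 = 63 segments after six levels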
https://en.wikipedia.org/wiki/Inclusion%20%28cell%29
In cellular biology, inclusions are diverse intracellular non-living substances (ergastic substances) that are not bound by membranes. Inclusions are stored nutrients/deutoplasmic substances, secretory products, and pigment granules. Examples of inclusions are glycogen granules in the liver and muscle cells, lipid droplets in fat cells, pigment granules in certain cells of skin and hair, and crystals of various types. Cytoplasmic inclusions are an example of a biomolecular condensate arising by liquid-solid, liquid-gel or liquid-liquid phase separation. These structures were first observed by O. F. Müller in 1786. Examples Glycogen: Glycogen is the most common form of glucose in animals and is especially abundant in cells of muscles, and liver. It appears in electron micrograph as clusters, or a rosette of beta particles that resemble ribosomes, located near the smooth endoplasmic reticulum. Glycogen is an important energy source of the cell; therefore, it will be available on demand. The enzymes responsible for glycogenolysis degrade glycogen into individual molecules of glucose and can be utilized by multiple organs of the body. Lipids: Lipids are triglycerides in storage form is the common form of inclusions, not only are stored in specialized cells (adipocytes) but also are located as individuals droplets in various cell type especially hepatocytes. These are fluid at body temperature and appear in living cells as refractile spherical droplets. Lipid yields more than twice as many calories per gram as does carbohydrate. On demand, they serve as a local store of energy and a potential source of short carbon chains that are used by the cell in its synthesis of membranes and other lipid containing structural components or secretory products. Crystals: Crystalline inclusions have long been recognized as normal constituents of certain cell types such as Sertoli cells and Leydig cells of the human testis, and occasionally in macrophages. It is believed that th
https://en.wikipedia.org/wiki/Ecological%20vegetation%20class
An ecological vegetation class (EVC) is a component of the vegetation classification system developed and used by the state of Victoria, Australia, since 1994, for mapping floristic biodiversity. Ecological vegetation classes are groupings of vegetation communities based on floristic, structural, and ecological features. The Victorian Department of Environment, Land, Water and Planning has defined all of the EVCs within Victoria. An EVC consists of one or a number of floristic communities that appear to be associated with a recognisable environmental niche, and which can be characterised by a number of their adaptive responses to ecological processes that operate at the landscape scale level. Each ecological vegetation class is described through a combination of its floristic, life-form, and reproductive strategy profiles, and through an inferred fidelity to particular environmental attributes. Although there are more than 300 individual EVCs, some can be grouped together to form a bioregion, which is a geographical approach to classifying the environment using a climate, geomorphology, geology, soils and vegetation. There are 28 bioregions across Victoria. Each EVC within a biogregion can be assigned a conservation status , to indicate its degree of alteration since European settlement in Australia. To assist with the assessment of an EVC within a bioregion, benchmarks have been established to ensure that assessments are carried out in a standard fashion across Victoria. Development Ecological vegetation classes developed from earlier approaches to the mapping of floristic communities. An example in Victoria of such earlier mapping was that done for the Otway Forest Management Area by Brinkman and Farell in 1990. This represented a break from previous floristic mapping, which was based on structural vegetation units, which in turn were derived from assessing height, density and species composition of the canopy. A "structural vegetation unit" approach had been u
https://en.wikipedia.org/wiki/How%20Doctors%20Think
How Doctors Think is a book released in March 2007 by Jerome Groopman, the Dina and Raphael Recanati Chair of Medicine at Harvard Medical School, chief of experimental medicine at Beth Israel Deaconess Medical Center in Boston, and staff writer for The New Yorker magazine. The book opens with a discussion of a woman in her thirties who suffered daily stomach cramps and serious weight loss, and who visited some 30 doctors over a period of 15 years. Several misdiagnoses were made before she was finally found to have celiac disease. Groopman explains that no one can expect a physician to be infallible, as medicine is an uncertain science, and every doctor sometimes makes mistakes in diagnosis and treatment. But the frequency and seriousness of those mistakes can be reduced by "understanding how a doctor thinks and how he or she can think better". The book includes Groopman's own experiences both as an oncologist and as a patient, as well as interviews by Groopman of prominent physicians in the medical community. Notably, he describes his difficulties with a number of orthopedic surgeons as he sought treatment for a debilitating ligament laxity he developed in his right hand, which over several years had led to the formation of cysts in the bones of his wrist. Salem's challenge Groopman spends a great deal of the book discussing the challenge posed to him by Dr. Deeb Salem, chairman of the Department of Internal Medicine at Tufts-New England Medical Center, during a presentation the author made at their hospital grand rounds. During the presentation, Groopman was discussing the importance of compassion and communication in providing medical care when Salem posed the following question: At the time of the presentation, Groopman was unable to provide a satisfactory response. Salem's question reminded Groopman of his experiences with physicians at the Phillips House of the world-renowned Massachusetts General Hospital, where he trained as a resident in the 1970s. Per
https://en.wikipedia.org/wiki/Enterprise%20Integration%20Patterns
Enterprise Integration Patterns is a book by Gregor Hohpe and Bobby Woolf and describes 65 patterns for the use of enterprise application integration and message-oriented middleware in the form of a pattern language.

The integration (messaging) pattern language
The pattern language presented in the book consists of 65 patterns structured into 9 categories, which largely follow the flow of a message from one system to the next through channels, routing, and transformations. The book includes an icon-based pattern language, sometimes nicknamed "GregorGrams" after one of the authors. Excerpts from the book (short pattern descriptions) are available on the supporting website (see External links).

Integration styles and types
The book distinguishes four top-level alternatives for integration: File Transfer, Shared Database, Remote Procedure Invocation, and Messaging.
The following integration types are introduced: Information Portal, Data Replication, Shared Business Function, Service Oriented Architecture, Distributed Business Process, Business-to-Business Integration, and Tightly Coupled Interaction vs. Loosely Coupled Interaction.

Messaging: Message Channel, Message, Pipes and Filters, Message Router, Message Translator, Message Endpoint
Message Channel: Point-to-Point Channel, Publish-Subscribe Channel, Datatype Channel, Invalid Message Channel, Dead Letter Channel, Guaranteed Delivery, Channel Adapter, Messaging Bridge, Message Bus
Message Construction: Command Message, Document Message, Event Message, Request-Reply, Return Address, Correlation Identifier, Message Sequence, Message Expiration, Format Indicator
Message Router: Content-Based Router, Message Filter, Dynamic Router, Recipient List, Splitter, Aggregator, Resequencer, Composed Message Processor, Scatter-Gather, Routing Slip, Process Manager, Message Broker
Message Transformation: Envelope Wrapper, Content Enricher, Content Filter, Claim Check, Normalizer, Canonical Data Model
Message Endpoint: Messaging
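The channel and routing patterns listed above are easiest to see in a small sketch. The following Python fragment is purely illustrative and is not taken from the book or any particular messaging product: a Content-Based Router inspects each message and places it on one of two in-memory Point-to-Point Channels, one of which doubles as a simple Invalid Message Channel.

```python
from queue import Queue

# Two point-to-point channels, one per downstream consumer (illustrative names only).
order_channel = Queue()
invalid_channel = Queue()   # also serves here as a rudimentary invalid-message channel

def content_based_router(message: dict) -> None:
    """Route a message to a channel based on its content, not on who sent it."""
    if message.get("type") == "order":
        order_channel.put(message)
    else:
        invalid_channel.put(message)

content_based_router({"type": "order", "id": 42})
content_based_router({"type": "unknown"})
print(order_channel.get())      # {'type': 'order', 'id': 42}
print(invalid_channel.qsize())  # 1
```

In a real integration the channels would be provided by message-oriented middleware rather than in-process queues, but the division of responsibilities between channel and router is the same.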
https://en.wikipedia.org/wiki/Turbulent%20Prandtl%20number
The turbulent Prandtl number (Prt) is a non-dimensional term defined as the ratio between the momentum eddy diffusivity and the heat transfer eddy diffusivity. It is useful for solving the heat transfer problem of turbulent boundary layer flows. The simplest model for Prt is the Reynolds analogy, which yields a turbulent Prandtl number of 1. From experimental data, Prt has an average value of 0.85, but ranges from 0.7 to 0.9 depending on the Prandtl number of the fluid in question.

Definition
The introduction of eddy diffusivity and subsequently the turbulent Prandtl number works as a way to define a simple relationship between the extra shear stress and heat flux that is present in turbulent flow. If the momentum and thermal eddy diffusivities are zero (no apparent turbulent shear stress and heat flux), then the turbulent flow equations reduce to the laminar equations. We can define the eddy diffusivities for momentum transfer and heat transfer as
$\varepsilon_M = \frac{-\overline{u'v'}}{\partial \bar{u}/\partial y}$ and $\varepsilon_H = \frac{-\overline{v'T'}}{\partial \bar{T}/\partial y},$
where $-\rho\,\overline{u'v'}$ is the apparent turbulent shear stress and $\rho c_p\,\overline{v'T'}$ is the apparent turbulent heat flux. The turbulent Prandtl number is then defined as
$\mathrm{Pr}_t = \frac{\varepsilon_M}{\varepsilon_H}.$
The turbulent Prandtl number has been shown to not generally equal unity (e.g. Malhotra and Kang, 1984; Kays, 1994; McEligot and Taylor, 1996; and Churchill, 2002). It is a strong function of the molecular Prandtl number amongst other parameters, and the Reynolds analogy is not applicable when the molecular Prandtl number differs significantly from unity, as determined by Malhotra and Kang, and elaborated by McEligot and Taylor and by Churchill.

Application
Turbulent momentum boundary layer equation:
$\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = \frac{\partial}{\partial y}\left(\nu \frac{\partial \bar{u}}{\partial y} - \overline{u'v'}\right)$
Turbulent thermal boundary layer equation:
$\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left(\alpha \frac{\partial \bar{T}}{\partial y} - \overline{v'T'}\right)$
Substituting the eddy diffusivities into the momentum and thermal equations yields
$\bar{u}\frac{\partial \bar{u}}{\partial x} + \bar{v}\frac{\partial \bar{u}}{\partial y} = \frac{\partial}{\partial y}\left[(\nu + \varepsilon_M)\frac{\partial \bar{u}}{\partial y}\right]$ and $\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[(\alpha + \varepsilon_H)\frac{\partial \bar{T}}{\partial y}\right].$
Substituting into the thermal equation using the definition of the turbulent Prandtl number gives
$\bar{u}\frac{\partial \bar{T}}{\partial x} + \bar{v}\frac{\partial \bar{T}}{\partial y} = \frac{\partial}{\partial y}\left[\left(\alpha + \frac{\varepsilon_M}{\mathrm{Pr}_t}\right)\frac{\partial \bar{T}}{\partial y}\right].$

Consequences
In the special case where the Prandtl number and turbulent Prandtl number both equal unity (as in the Reynolds analogy), the velocity profile and
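As a quick numerical illustration of the definition above, the sketch below estimates Prt from point values of the turbulent correlations and the mean gradients. All numbers are made-up placeholders chosen only to land near the typical value of about 0.85; they are not data from the article.

```python
# Minimal sketch: estimate the turbulent Prandtl number at a single point from
# turbulence statistics, using the definitions eps_M = -<u'v'>/(dU/dy) and
# eps_H = -<v'T'>/(dT/dy). Numerical values are illustrative placeholders.

uv_corr = -0.012      # <u'v'>  [m^2/s^2], related to the apparent turbulent shear stress
vT_corr = -0.0066     # <v'T'>  [K*m/s],   related to the apparent turbulent heat flux
dU_dy   = 85.0        # mean velocity gradient [1/s]
dT_dy   = 40.0        # mean temperature gradient [K/m]

eps_M = -uv_corr / dU_dy      # momentum eddy diffusivity [m^2/s]
eps_H = -vT_corr / dT_dy      # heat transfer eddy diffusivity [m^2/s]
Pr_t  = eps_M / eps_H         # turbulent Prandtl number (dimensionless)

print(f"eps_M = {eps_M:.3e} m^2/s, eps_H = {eps_H:.3e} m^2/s, Pr_t = {Pr_t:.2f}")
```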
https://en.wikipedia.org/wiki/Knudsen%20cell
In crystal growth, a Knudsen cell is an effusion evaporator source for relatively low partial pressure elementary sources (e.g. Ga, Al, Hg, As). Because it is easy to control the temperature of the evaporating material in Knudsen cells, they are commonly used in molecular-beam epitaxy. Development The Knudsen effusion cell was developed by Martin Knudsen (1871–1949). A typical Knudsen cell contains a crucible (made of pyrolytic boron nitride, quartz, tungsten or graphite), heating filaments (often made of metal tantalum), water cooling system, heat shields, and an orifice shutter. Vapor pressure measurement The Knudsen cell is used to measure the vapor pressures of a solid with very low vapor pressure. Such a solid forms a vapor at low pressure by sublimation. The vapor slowly effuses through the pinhole, and the loss of mass is proportional to the vapor pressure and can be used to determine this pressure. The heat of sublimation can also be determined by measuring the vapor pressure as a function of temperature, using the Clausius–Clapeyron relation.
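The mass-loss measurement described above is commonly converted to a pressure with the standard Knudsen effusion relation p = (ṁ/A)·sqrt(2πRT/M). The snippet below is a minimal sketch of that calculation; the relation is the usual textbook form rather than something stated in this article, and the orifice size, temperature, and molar mass are illustrative assumptions.

```python
import math

# Minimal sketch of a Knudsen-cell vapour pressure estimate (assumed textbook relation):
# p = (dm/dt / A) * sqrt(2*pi*R*T / M) for effusion of vapour through a small orifice.
R = 8.314                  # J/(mol*K), gas constant
mass_loss_rate = 2.0e-9    # kg/s, measured loss of mass through the pinhole (assumed)
orifice_area = 7.9e-7      # m^2, roughly a 1 mm diameter hole (assumed)
T = 500.0                  # K, cell temperature (assumed)
M = 0.027                  # kg/mol, e.g. aluminium as an assumed sample material

p = (mass_loss_rate / orifice_area) * math.sqrt(2 * math.pi * R * T / M)
print(f"Estimated vapour pressure: {p:.3e} Pa")
```

Repeating the measurement at several temperatures and fitting ln(p) against 1/T then gives the heat of sublimation via the Clausius–Clapeyron relation mentioned above.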
https://en.wikipedia.org/wiki/Acrasin
Each species of slime mold produces its own specific chemical messenger; collectively, these messengers are referred to as acrasins. These chemicals signal many individual cells to aggregate and form a single large cell or plasmodium. One of the earliest acrasins to be identified was cyclic AMP, found in the species Dictyostelium discoideum by Brian Shaffer, which exhibits a complex swirling-pulsating spiral pattern when forming a pseudoplasmodium. The term acrasin was descriptively named after Acrasia from Edmund Spenser's Faerie Queene, who seduced men against their will and then transformed them into beasts. Acrasia is itself a play on the Greek akrasia, which describes loss of free will. Extraction Brian Shaffer was the first to purify acrasin, now known to be cyclic AMP, in 1954, using methanol. Glorin, the acrasin of P. violaceum, can be purified by inhibiting the acrasin-degrading enzyme acrasinase with alcohol, extracting with alcohol, and separating with column chromatography.

Notes
J. T. Bonner and L. J. Savage, "Evidence for the formation of cell aggregates by chemotaxis in the development of the slime mold Dictyostelium discoideum", Journal of Experimental Biology, Vol. 106, p. 1, October 1947.
B. M. Shaffer, "Aggregation in cellular slime moulds: in vitro isolation of acrasin", Nature, Vol. 79, p. 975, 1953.
"Identification of a pterin as the acrasin of the cellular slime mold Dictyostelium lacteum", Proceedings of the National Academy of Sciences of the United States, Vol. 79, pp. 6270–6274, October 1982.
Adele Conover, "Hunting Slime Moulds", Smithsonian Magazine Online, 2001.
https://en.wikipedia.org/wiki/Evaluation%20of%20machine%20translation
Various methods for the evaluation of machine translation have been employed. This article focuses on the evaluation of the output of machine translation, rather than on performance or usability evaluation.

Round-trip translation
A typical way for lay people to assess machine translation quality is to translate from a source language to a target language and back to the source language with the same engine. Though intuitively this may seem like a good method of evaluation, it has been shown that round-trip translation is a "poor predictor of quality". The reason why it is such a poor predictor of quality is reasonably intuitive. A round-trip translation is not testing one system, but two systems: the language pair of the engine for translating into the target language, and the language pair translating back from the target language. Consider the following examples of round-trip translation performed from English to Italian and Portuguese from Somers (2005):

Original text: Select this link to look at our home page.
Translated: Selezioni questo collegamento per guardare il nostro Home Page.
Translated back: Selections this connection in order to watch our Home Page.

Original text: Tit for tat
Translated: Melharuco para o tat
Translated back: Tit for tat

In the first example, where the text is translated into Italian and then back into English, the English text is significantly garbled, but the Italian is a serviceable translation. In the second example, the text translated back into English is perfect, but the Portuguese translation is meaningless; the program thought "tit" was a reference to a tit (bird), which was intended for a "tat", a word it did not understand. While round-trip translation may be useful to generate a "surplus of fun," the methodology is deficient for serious study of machine translation quality.

Human evaluation
This section covers two of the large scale evaluation studies that have had significant
https://en.wikipedia.org/wiki/Global%20hectare
The global hectare (gha) is a measurement unit for the ecological footprint of people or activities and the biocapacity of the Earth or its regions. One global hectare is the world's annual amount of biological production for human use and human waste assimilation, per hectare of biologically productive land and fisheries. It measures production and consumption of different products. It starts with the total biological production and waste assimilation in the world, including crops, forests (both wood production and CO2 absorption), grazing and fishing. The total of these kinds of production, weighted by the richness of the land they use, is divided by the number of hectares used. Biologically productive areas include cropland, forest and fishing grounds, and do not include deserts, glaciers and the open ocean. "Global hectares per person" refers to the amount of production and waste assimilation per person on the planet. In 2012 there were approximately 12.2 billion global hectares of production and waste assimilation, averaging 1.7 global hectares per person. Consumption totaled 20.1 billion global hectares or 2.8 global hectares per person, meaning about 65% more was consumed than produced. This is possible because there are natural reserves all around the globe that function as backup food, material and energy supplies, although only for a relatively short period of time. Due to rapid population growth, these reserves are being depleted at an ever increasing tempo. See Earth Overshoot Day. The term "global hectare" was introduced in the early 2000s, based on a similar concept from the 1970s named "ghost acreage". Opponents and defenders of the concept have discussed its strengths and weaknesses. Applications The global hectare is a useful measure of biocapacity as it can convert things like human dietary requirements into common units, which can show how many people a certain region on earth can sustain, assuming current technologies and agricultural method
https://en.wikipedia.org/wiki/Milman%27s%20reverse%20Brunn%E2%80%93Minkowski%20inequality
In mathematics, particularly in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in n-dimensional Euclidean space Rn. Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies.

Introduction
Let K and L be convex bodies in Rn. The Brunn–Minkowski inequality states that
$\operatorname{vol}(K + L)^{1/n} \geq \operatorname{vol}(K)^{1/n} + \operatorname{vol}(L)^{1/n},$
where vol denotes n-dimensional Lebesgue measure and the + on the left-hand side denotes Minkowski addition. In general, no reverse bound is possible, since one can find convex bodies K and L of unit volume so that the volume of their Minkowski sum is arbitrarily large. Milman's theorem states that one can replace one of the bodies by its image under a properly chosen volume-preserving linear map so that the left-hand side of the Brunn–Minkowski inequality is bounded by a constant multiple of the right-hand side. The result is one of the main structural theorems in the local theory of Banach spaces.

Statement of the inequality
There is a constant C, independent of n, such that for any two centrally symmetric convex bodies K and L in Rn, there are volume-preserving linear maps φ and ψ from Rn to itself such that for any real numbers s, t > 0
$\operatorname{vol}\big(s\,\varphi(K) + t\,\psi(L)\big)^{1/n} \leq C \left(s \operatorname{vol}(K)^{1/n} + t \operatorname{vol}(L)^{1/n}\right).$
One of the maps may be chosen to be the identity.
https://en.wikipedia.org/wiki/Regular%20Hadamard%20matrix
In mathematics a regular Hadamard matrix is a Hadamard matrix whose row and column sums are all equal. While the order of a Hadamard matrix must be 1, 2, or a multiple of 4, regular Hadamard matrices carry the further restriction that the order be a square number. The excess, denoted E(H), of a Hadamard matrix H of order n is defined to be the sum of the entries of H. The excess satisfies the bound |E(H)| ≤ n3/2. A Hadamard matrix attains this bound if and only if it is regular. Parameters If n = 4u2 is the order of a regular Hadamard matrix, then the excess is ±8u3 and the row and column sums all equal ±2u. It follows that each row has 2u2 ± u positive entries and 2u2 ∓ u negative entries. The orthogonality of rows implies that any two distinct rows have exactly u2 ± u positive entries in common. If H is interpreted as the incidence matrix of a block design, with 1 representing incidence and −1 representing non-incidence, then H corresponds to a symmetric 2-(v,k,λ) design with parameters (4u2, 2u2 ± u, u2 ± u). A design with these parameters is called a Menon design. Construction A number of methods for constructing regular Hadamard matrices are known, and some exhaustive computer searches have been done for regular Hadamard matrices with specified symmetry groups, but it is not known whether every even perfect square is the order of a regular Hadamard matrix. Bush-type Hadamard matrices are regular Hadamard matrices of a special form, and are connected with finite projective planes. History and naming Like Hadamard matrices more generally, regular Hadamard matrices are named after Jacques Hadamard. Menon designs are named after P Kesava Menon, and Bush-type Hadamard matrices are named after Kenneth A. Bush.
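To make the parameter relations above concrete, the sketch below checks a small candidate of order n = 4 = 4u² with u = 1. The particular matrix is chosen purely for illustration (it is not given in the article): every pair of distinct rows is orthogonal, all row and column sums equal 2u = 2, and the excess is 8u³ = 8.

```python
import numpy as np

# Minimal sketch: verify that a candidate matrix is a regular Hadamard matrix,
# i.e. entries are +/-1, H @ H.T = n * I, and all row and column sums are equal.
H = np.array([[-1,  1,  1,  1],
              [ 1, -1,  1,  1],
              [ 1,  1, -1,  1],
              [ 1,  1,  1, -1]])   # order n = 4 = 4u^2 with u = 1

n = H.shape[0]
is_hadamard = np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
row_sums, col_sums = H.sum(axis=1), H.sum(axis=0)
is_regular = len(set(row_sums)) == 1 and len(set(col_sums)) == 1

print(is_hadamard, is_regular, row_sums[0])   # True True 2   (row sums = 2u)
print(H.sum())                                # excess E(H) = 8u^3 = 8
```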
https://en.wikipedia.org/wiki/Younger%20Dryas%20impact%20hypothesis
The Younger Dryas impact hypothesis (YDIH) or Clovis comet hypothesis is a speculative attempt to explain the onset of the Younger Dryas (YD) cooling at the end of the Last Glacial Period, around 12,900 years ago. The hypothesis is controversial and not widely accepted by relevant experts. It is an alternative to the long-standing and widely accepted explanation that it was caused by a significant reduction in, or shutdown of the North Atlantic Conveyor due to a sudden influx of freshwater from Lake Agassiz and deglaciation in North America. The YDIH posits that fragments of a large (more than 4 kilometers in diameter), disintegrating asteroid or comet struck North America, South America, Europe, and western Asia, coinciding with the beginning of the Younger Dryas cooling event. Advocates proposed the existence of a Younger Dryas boundary (YDB) layer that can be identified by materials they interpret as evidence of multiple meteor air bursts and/or impacts across a large fraction of Earth’s surface. However, inconsistencies have been identified in the graphs they published, and authors have not yet responded to requests for clarification and have never made their raw data available. Some YDIH proponents have also proposed that this event triggered extensive biomass burning, a brief impact winter and the Younger Dryas abrupt climate change, contributed to extinctions of late Pleistocene megafauna, and resulted in the end of the Clovis culture. Comet research group (CRG) Members of this group have been criticized for promoting pseudoscience, pseudoarchaeology, and pseudohistory, engaging in cherry-picking of data based on confirmation bias, seeking to persuade via the bandwagon fallacy, and even engaging in intentional misrepresentations of archaeological and geological evidence. For example, physicist Mark Boslough, a specialist in planetary impact hazards and asteroid impact avoidance, has pointed out many problems with the credibility and motivations of indivi
https://en.wikipedia.org/wiki/Multiply-with-carry%20pseudorandom%20number%20generator
In computer science, multiply-with-carry (MWC) is a method invented by George Marsaglia for generating sequences of random integers based on an initial set from two to many thousands of randomly chosen seed values. The main advantages of the MWC method are that it invokes simple computer integer arithmetic and leads to very fast generation of sequences of random numbers with immense periods. As with all pseudorandom number generators, the resulting sequences are functions of the supplied seed values.

General theory
An MWC generator is a special form of Lehmer random number generator which allows efficient implementation of a prime modulus much larger than the machine word size. Normal Lehmer generator implementations choose a modulus close to the machine word size. An MWC generator instead maintains its state in base b, so multiplying by b is done implicitly by shifting one word. The base b is typically chosen to equal the computer's word size, as this makes arithmetic modulo b trivial; the choice can range from a small word on a microcontroller up to the full word of a 64-bit machine. The initial state ("seed") values are arbitrary, except that they must not be all zero, nor all at the maximum permitted values ($x = b - 1$ and $c = a - 1$); a common convention is to choose the initial carry strictly between these extremes. The MWC sequence is then a sequence of pairs $(x_n, c_n)$ determined by
$x_n = (a x_{n-1} + c_{n-1}) \bmod b, \qquad c_n = \left\lfloor \frac{a x_{n-1} + c_{n-1}}{b} \right\rfloor.$
This is called a lag-1 MWC sequence. Sometimes an odd base is preferred, in which case a base of the form $b = 2^k - 1$ can be used, which is almost as simple to implement. A lag-r sequence is a generalization of the lag-1 sequence allowing longer periods. The lag-r MWC sequence is then a sequence of pairs $(x_n, c_n)$ (for $n \geq r$) determined by
$x_n = (a x_{n-r} + c_{n-1}) \bmod b, \qquad c_n = \left\lfloor \frac{a x_{n-r} + c_{n-1}}{b} \right\rfloor,$
and the MWC generator output is the sequence of x's, $x_r, x_{r+1}, x_{r+2}, \ldots$ In this case, the initial state ("seed") values must not be all zero, nor $x_0 = x_1 = \cdots = x_{r-1} = b - 1$ with $c_{r-1} = a - 1$. The MWC multiplier a and lag r determine the modulus $p = a b^r - 1$. In practice, a is chosen so the modulus is prime and the sequence has long period. If the modulus is prime, the period of a lag-r MWC gener
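The sketch below implements a lag-1 MWC generator in exactly the form just described. The base b = 2^16 and multiplier a = 36969 follow a widely circulated Marsaglia example; they are illustrative choices, not parameters prescribed by this article.

```python
# Minimal lag-1 multiply-with-carry sketch. Parameters a = 36969, b = 2**16 are a
# commonly quoted Marsaglia example, used here purely for illustration.
def mwc16(x: int, c: int):
    """Yield an endless stream of 16-bit MWC outputs from the seed pair (x, c)."""
    a, b = 36969, 1 << 16
    assert not (x == 0 and c == 0), "seed must not be all zero"
    while True:
        t = a * x + c          # one word-by-word multiply-accumulate
        x = t % b              # x_n = (a * x_{n-1} + c_{n-1}) mod b
        c = t // b             # c_n = floor((a * x_{n-1} + c_{n-1}) / b)
        yield x

gen = mwc16(x=12345, c=6789)
print([next(gen) for _ in range(5)])
```

Because b is a power of two, the `% b` and `// b` operations reduce to masking and shifting the low and high halves of the product, which is what makes the method so cheap on ordinary integer hardware.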
https://en.wikipedia.org/wiki/Bioeconomy
Biobased economy, bioeconomy or biotechonomy is economic activity involving the use of biotechnology and biomass in the production of goods, services, or energy. The terms are widely used by regional development agencies, national and international organizations, and biotechnology companies. They are closely linked to the evolution of the biotechnology industry and the capacity to study, understand, and manipulate genetic material that has been possible due to scientific research and technological development. This includes the application of scientific and technological developments to the agriculture, health, chemical, and energy industries. The terms bioeconomy (BE) and bio-based economy (BBE) are sometimes used interchangeably. However, it is worth distinguishing them: the biobased economy takes into consideration the production of non-food goods, whilst the bioeconomy covers both the bio-based economy and the production and use of food and feed. More than 60 countries and regions have bioeconomy or bioscience-related strategies, of which 20 have published dedicated bioeconomy strategies in Africa, Asia, Europe, Oceania, and the Americas. Definitions The bioeconomy has a large variety of definitions. The bioeconomy comprises those parts of the economy that use renewable biological resources from land and sea – such as crops, forests, fish, animals and micro-organisms – to produce food, health, materials, products, textiles and energy. The definitions and usage do, however, vary between different areas of the world. An important aspect of the bioeconomy is understanding mechanisms and processes at the genetic, molecular, and genomic levels, and applying this understanding to creating or improving industrial processes, developing new products and services, and producing new energy. The bioeconomy aims to reduce our dependence on fossil natural resources, to prevent biodiversity loss and to create new economic growth and jobs that are in line with the principles of sustainable develo
https://en.wikipedia.org/wiki/Microturbulence
Microturbulence is a form of turbulence that varies over small distance scales. (Large-scale turbulence is called macroturbulence.) Stellar Microturbulence is one of several mechanisms that can cause broadening of the absorption lines in the stellar spectrum. Stellar microturbulence varies with the effective temperature and the surface gravity. The microturbulent velocity is defined as the microscale non-thermal component of the gas velocity in the region of spectral line formation. Convection is the mechanism believed to be responsible for the observed turbulent velocity field, both in low mass stars and massive stars. When examined by a spectroscope, the velocity of the convective gas along the line of sight produces Doppler shifts in the absorption bands. It is the distribution of these velocities along the line of sight that produces the microturbulence broadening of the absorption lines in low mass stars that have convective envelopes. In massive stars convection can be present only in small regions below the surface; these sub-surface convection zones can excite turbulence at the stellar surface through the emission of acoustic and gravity waves. The strength of the microturbulence (symbolized by ξ, in units of km s−1) can be determined by comparing the broadening of strong lines versus weak lines. Magnetic nuclear fusion Microturbulence plays a critical role in energy transport during magnetic nuclear fusion experiments, such as the Tokamak.
https://en.wikipedia.org/wiki/Rear-projection%20television
Rear-projection television (RPTV) is a type of large-screen television display technology. Until approximately 2006, most of the relatively affordable consumer large-screen TVs used rear-projection technology. A variation is a video projector, using similar technology, which projects onto a screen. Three types of projection systems are used in projection TVs. CRT rear-projection TVs were the earliest, and while they were the first to exceed 40", they were also bulky and the picture was unclear at close range. Newer technologies include DLP (reflective micromirror chip), LCD projectors, Laser TV and LCoS. They are capable of displaying high-definition video up to 1080p resolution, and examples include Sony's SXRD (Silicon X-tal Reflective Display), JVC's D-ILA (Digital Direct Drive Image Light Amplifier) and MicroDisplay Corporation's Liquid Fidelity. Background and history Necessity Cathode ray tube technology was very limited in the early days of television. It relied on conventional glass-blowing methods that had remained largely unchanged for centuries. Since the tube had to contain a very high vacuum, the glass was under considerable stress; together with the low deflection angle of CRTs of the era, this limited the practical size of CRTs that could be built without increasing their depth. The largest practical tube that could be made and mounted horizontally in a television cabinet of acceptable depth was around nine inches. Twelve-inch tubes could be manufactured, but these were so long that they had to be mounted vertically and viewed via an angled mirror in the top of the cabinet. In 1936, the British government persuaded the British Broadcasting Corporation to launch a public high-definition (for the era) television broadcasting service. The principal driver for the British government's move was to establish cathode ray tube production facilities, which it believed would be vital if the anticipated World War 2 were to materialise. The ability to correct the deflecti
https://en.wikipedia.org/wiki/Forward%20anonymity
Forward anonymity is a property of a cryptographic system which prevents an attacker who has recorded past encrypted communications from discovering its contents and participants in the future. This property is analogous to forward secrecy. An example of a system which uses forward anonymity is a public key cryptography system, where the public key is well-known and used to encrypt a message, and an unknown private key is used to decrypt it. In this system, one of the keys is always said to be compromised, but messages and their participants are still unknown by anyone without the corresponding private key. In contrast, an example of a system which satisfies the perfect forward secrecy property is one in which a compromise of one key by an attacker (and consequent decryption of messages encrypted with that key) does not undermine the security of previously used keys. Forward secrecy does not refer to protecting the content of the message, but rather to the protection of keys used to decrypt messages. History Originally introduced by Whitfield Diffie, Paul van Oorschot, and Michael James Wiener to describe a property of STS (station-to-station protocol) involving a long term secret, either a private key or a shared password. Public Key Cryptography Public Key Cryptography is a common form of a forward anonymous system. It is used to pass encrypted messages, preventing any information about the message from being discovered if the message is intercepted by an attacker. It uses two keys, a public key and a private key. The public key is published, and is used by anyone to encrypt a plaintext message. The Private key is not well known, and is used to decrypt cyphertext. Public key cryptography is known as an asymmetric decryption algorithm because of different keys being used to perform opposing functions. Public key cryptography is popular because, while it is computationally easy to create a pair of keys, it is extremely difficult to determine the private key kno
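The public-key flow described above (anyone may encrypt with the published key, only the private-key holder can decrypt) can be shown in a few lines. The article names no particular library; the sketch below uses the third-party Python "cryptography" package as one common choice, with RSA-OAEP as an assumed concrete scheme.

```python
# Illustrative only: RSA-OAEP via the "cryptography" package, chosen as an assumption
# to demonstrate the public-key encrypt / private-key decrypt flow described above.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # published; anyone may encrypt with it

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"meet at noon", oaep)   # sender uses the public key
plaintext = private_key.decrypt(ciphertext, oaep)        # only the private key recovers it
assert plaintext == b"meet at noon"
```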
https://en.wikipedia.org/wiki/Quillen%20adjunction
In homotopy theory, a branch of mathematics, a Quillen adjunction between two closed model categories C and D is a special kind of adjunction between categories that induces an adjunction between the homotopy categories Ho(C) and Ho(D) via the total derived functor construction. Quillen adjunctions are named in honor of the mathematician Daniel Quillen. Formal definition Given two closed model categories C and D, a Quillen adjunction is a pair (F, G): C D of adjoint functors with F left adjoint to G such that F preserves cofibrations and trivial cofibrations or, equivalently by the closed model axioms, such that G preserves fibrations and trivial fibrations. In such an adjunction F is called the left Quillen functor and G is called the right Quillen functor. Properties It is a consequence of the axioms that a left (right) Quillen functor preserves weak equivalences between cofibrant (fibrant) objects. The total derived functor theorem of Quillen says that the total left derived functor LF: Ho(C) → Ho(D) is a left adjoint to the total right derived functor RG: Ho(D) → Ho(C). This adjunction (LF, RG) is called the derived adjunction. If (F, G) is a Quillen adjunction as above such that F(c) → d with c cofibrant and d fibrant is a weak equivalence in D if and only if c → G(d) is a weak equivalence in C then it is called a Quillen equivalence of the closed model categories C and D. In this case the derived adjunction is an adjoint equivalence of categories so that LF(c) → d is an isomorphism in Ho(D) if and only if c → RG(d) is an isomorphism in Ho(C).
https://en.wikipedia.org/wiki/Biomolecular%20engineering
Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine. Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure,), function (see: protein function) and properties and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering. Timeline History During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that lead to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as research at the interface of chemical engineering and biology with an emphasis at the molecular level". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized Mab for breast cancer treatment, became the first drug designed by a biomolecula
https://en.wikipedia.org/wiki/Patterned%20media
Patterned media (also known as bit-patterned media or BPM) is a potential future hard disk drive technology to record data in magnetic islands (one bit per island), as opposed to current hard disk drive technology where each bit is stored within a continuous magnetic film. The islands would be patterned from a precursor magnetic film using nanolithography. It is one of the proposed technologies to succeed perpendicular recording due to the greater storage densities it would enable. BPM was introduced by Toshiba in 2010. Comparison with existing HDD technology In existing hard disk drives, data is stored in a thin magnetic film. This film is deposited so that it consists of isolated (weakly exchange coupled) grains of material of around diameter. One bit of data consists of around that are magnetized in the same direction (either "up" or "down", with respect to the plane of the disk). One method of increasing storage density has been to reduce the average grain volume. However, the energy barrier for thermal switching is proportional to the grain volume. With existing materials, further reductions in the grain volume would result in data loss occurring spontaneously due to superparamagnetism. In patterned media, the thin magnetic film is first deposited so there is strong exchange coupling between the grains. Using nanolithography, it is then patterned into magnetic islands. The strong exchange coupling means that the energy barrier is now proportional to the island volume, rather than the volume of individual grains within the island. Therefore, storage density increases can be achieved by patterning islands of increasingly small diameter, whilst maintaining thermal stability. Patterned media is predicted to enable areal densities up to as opposed to the limit that exists with current HDD technology. Differences in read/write head control strategies In existing HDDs data bits are ideally written on concentric circular tracks. This process is different in
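The thermal-stability argument above (energy barrier proportional to the switching volume) is often expressed through the ratio K_u·V/(k_B·T), with values of roughly 40 to 60 commonly quoted for multi-year retention. The sketch below illustrates how quickly that ratio falls as the grain or island diameter shrinks; the rule of thumb, the anisotropy constant, and all other numbers are assumptions for illustration, not values from the article.

```python
import math

# Rough sketch of the superparamagnetic-limit argument. All values are assumptions:
# K_u ~ 3e5 J/m^3 for a CoCrPt-like granular medium, 10 nm film thickness, 350 K.
k_B = 1.380649e-23        # J/K
T = 350.0                 # K, assumed operating temperature
K_u = 3.0e5               # J/m^3, assumed uniaxial anisotropy

def stability_ratio(diameter_nm: float, thickness_nm: float = 10.0) -> float:
    """K_u * V / (k_B * T) for a cylindrical grain or island."""
    r = diameter_nm * 1e-9 / 2
    volume = math.pi * r**2 * (thickness_nm * 1e-9)
    return K_u * volume / (k_B * T)

for d in (8, 15, 25):   # diameters in nm
    print(f"diameter {d:2d} nm -> K_u*V/(k_B*T) ~ {stability_ratio(d):.0f}")
```

Patterning islands restores the ratio because the whole exchange-coupled island, not a single grain, sets the volume V in this expression.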
https://en.wikipedia.org/wiki/Sexing
Sexing or gender identification is the process of determining the sex of an individual animal. Through sexing, biologists and agricultural workers determine the sex of livestock and other animals they work with. The specialized trade of chicken sexing has a particular importance in the poultry industry. The sex of mammals can often be determined using sexually dimorphic characteristics. Assisted physical sexing is relevant in vertebrates with cloacae (e.g. birds, reptiles or amphibians) when there is no external sexual dimorphism. In veterinary practice, fibroscopy is used under general anaesthesia in birds such as parrots. Molecular sexing is a set of techniques that use DNA for determining sex in wild or domestic species (population studies, farming, genetics) or humans (archaeology, forensic medicine). Markers commonly used include amelogenin, SRY and ZFX/ZFY. Various techniques have been developed using simple polymerase chain reaction product size dimorphism, presence/absence, restriction dimorphism, or even sequencing.
https://en.wikipedia.org/wiki/Valve%20audio%20amplifier%20technical%20specification
Technical specifications and detailed information on the valve audio amplifier, including its development history. Circuitry and performance Characteristics of valves Valves (also known as vacuum tubes) are very high input impedance (near infinite in most circuits) and high-output impedance devices. They are also high-voltage / low-current devices. The characteristics of valves as gain devices have direct implications for their use as audio amplifiers, notably that power amplifiers need output transformers (OPTs) to translate a high-output-impedance high-voltage low-current signal into a lower-voltage high-current signal needed to drive modern low-impedance loudspeakers (cf. transistors and FETs which are relatively low voltage devices but able to carry large currents directly). Another consequence is that since the output of one stage is often at ~100 V offset from the input of the next stage, direct coupling is normally not possible and stages need to be coupled using a capacitor or transformer. Capacitors have little effect on the performance of amplifiers. Interstage transformer coupling is a source of distortion and phase shift, and was avoided from the 1940s for high-quality applications; transformers also add cost, bulk, and weight. Basic circuits The following circuits are simplified conceptual circuits only, real world circuits also require a smoothed or regulated power supply, heater for the filaments (the details depending on if the selected valve types are directly or indirectly heated), and the cathode resistors are often bypassed, etc. The common cathode gain stage The basic gain stage for a valve amplifier is the auto-biased common cathode stage, in which an anode resistor, the valve, and a cathode resistor form a potential divider across the supply rails. The resistance of the valve varies as a function of the voltage on the grid, relative to the voltage on the cathode. In the auto-bias configuration, the "operating point" is obtained by se
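For the auto-biased common cathode stage just described, the standard small-signal result gives a voltage gain of A = −μ·R_a/(r_a + R_a + (μ+1)·R_k) when the cathode resistor is left unbypassed, collapsing to A = −μ·R_a/(r_a + R_a) when it is fully bypassed. The sketch below evaluates both; the 12AX7-like valve parameters and resistor values are assumptions chosen only to show typical magnitudes, not figures from this article.

```python
# Small-signal gain of a triode common-cathode stage. Valve parameters (12AX7-like)
# and component values are illustrative assumptions.
mu   = 100.0      # amplification factor
r_a  = 62.5e3     # anode (plate) resistance, ohms
R_a  = 100e3      # anode load resistor, ohms
R_k  = 1.5e3      # cathode (auto-bias) resistor, ohms

gain_unbypassed = -mu * R_a / (r_a + R_a + (mu + 1) * R_k)   # cathode resistor not bypassed
gain_bypassed   = -mu * R_a / (r_a + R_a)                    # cathode resistor fully bypassed

print(f"gain (unbypassed): {gain_unbypassed:.1f}")   # roughly -32
print(f"gain (bypassed):   {gain_bypassed:.1f}")     # roughly -62
```

The comparison also shows why cathode resistors are often bypassed in practice: the unbypassed resistor introduces local feedback that roughly halves the stage gain in this example.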
https://en.wikipedia.org/wiki/Borell%E2%80%93Brascamp%E2%80%93Lieb%20inequality
In mathematics, the Borell–Brascamp–Lieb inequality is an integral inequality due to many different mathematicians but named after Christer Borell, Herm Jan Brascamp and Elliott Lieb. The result was proved for p > 0 by Henstock and Macbeath in 1953. The case p = 0 is known as the Prékopa–Leindler inequality and was re-discovered by Brascamp and Lieb in 1976, when they proved the general version below; working independently, Borell had done the same in 1975. The nomenclature of "Borell–Brascamp–Lieb inequality" is due to Cordero-Erausquin, McCann and Schmuckenschläger, who in 2001 generalized the result to Riemannian manifolds such as the sphere and hyperbolic space.

Statement of the inequality in Rn
Let 0 < λ < 1, let −1 / n ≤ p ≤ +∞, and let f, g, h : Rn → [0, +∞) be integrable functions such that, for all x and y in Rn,
$h\big(\lambda x + (1 - \lambda) y\big) \geq M_p\big(f(x), g(y), \lambda\big),$
where
$M_p(a, b, \lambda) = \big(\lambda a^p + (1 - \lambda) b^p\big)^{1/p}$ for $ab \neq 0$ (with $M_p(a, b, \lambda) = 0$ if $ab = 0$) and $M_0(a, b, \lambda) = a^{\lambda} b^{1-\lambda}$. Then
$\int_{\mathbb{R}^n} h(x)\,dx \geq M_{p/(np+1)}\left(\int_{\mathbb{R}^n} f(x)\,dx,\ \int_{\mathbb{R}^n} g(x)\,dx,\ \lambda\right).$
(When p = −1 / n, the convention is to take p / (n p + 1) to be −∞; when p = +∞, it is taken to be 1 / n.)
https://en.wikipedia.org/wiki/Active%20message
An Active message (in computing) is a messaging object capable of performing processing on its own. It is a lightweight messaging protocol used to optimize network communications with an emphasis on reducing latency by removing software overheads associated with buffering and providing applications with direct user-level access to the network hardware. This contrasts with traditional computer-based messaging systems in which messages are passive entities with no processing power. Distributed Memory Programming Active messages are a communications primitive for exploiting the full performance and flexibility of modern computer interconnects. They are often classified as one of the three main types of distributed memory programming, the other two being data parallel and message passing. The view is that Active Messages are actually a lower-level mechanism that can be used to implement data parallel or message passing efficiently. The basic idea is that each message has a header containing the address or index of a userspace handler to be executed upon message arrival, with the contents of the message passed as an argument to the handler. Early active message systems passed the actual remote code address across the network; however, this approach required the initiator to know the address of the remote handler function when composing a message, which can be quite limiting even within the context of an SPMD programming model (and generally relies upon address space uniformity which is absent in many modern systems). Newer active message interfaces require the client to register a table with the software at initialization time that maps an integer index to the local address of a handler function; in these systems the sender of an active message provides an index into the remote handler table, and upon arrival of the active message the table is used to map this index to the handler address that is invoked to handle the message. Other variations of active messages carry
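The handler-table scheme described in the last paragraph amounts to the following few lines. This is a minimal in-process sketch, not the API of any real active-message library: each node registers handlers under agreed integer indices at startup, and an arriving message carries only an index plus a payload that is handed straight to the corresponding handler.

```python
# Minimal sketch of index-based active-message dispatch (illustrative, not a real AM library).
handler_table = {}

def register_handler(index: int, fn) -> None:
    """Done once at initialization, on every node, with the same index assignments."""
    handler_table[index] = fn

def deliver(message: dict) -> None:
    """Called by the 'network' layer on arrival; dispatches directly to the handler."""
    handler_table[message["handler"]](message["payload"])

# Example handlers registered under fixed indices.
register_handler(0, lambda payload: print("remote put:", payload))
register_handler(1, lambda payload: print("ack:", payload))

# Sending an active message is just shipping {handler index, arguments}.
deliver({"handler": 0, "payload": {"addr": 0x1000, "value": 7}})
deliver({"handler": 1, "payload": "ok"})
```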
https://en.wikipedia.org/wiki/Sherlock%20Holmes%20and%20the%20Secret%20Weapon
Sherlock Holmes and the Secret Weapon (1942) is the fourth in the Basil Rathbone/Nigel Bruce series of 14 Sherlock Holmes films which updated the characters created by Sir Arthur Conan Doyle to the then present day. The film is credited as an adaptation of Conan Doyle's 1903 short story "The Adventure of the Dancing Men," though the only element from the source material is the dancing men code. Rather, it is a spy film taking place on the background of the then ongoing Second World War with an original premise. The film concerns the kidnapping of a Swiss scientist by their nemesis Professor Moriarty, to steal a new bomb sight and sell it to Nazi Germany. Sherlock Holmes and Dr. John Watson have to crack a secret code in order to save the country. The film is one of four films in the series which are in the public domain. Plot Sherlock Holmes (Basil Rathbone) pretends to be a Nazi spy to aid scientist Dr. Franz Tobel (William Post Jr.) and his new invention, a bombsight, in escaping a Gestapo trap in Switzerland. Holmes and Franz fly to London, where Holmes places him under the protection of his friend, Dr. Watson (Nigel Bruce). The scientist slips away against Holmes' instructions for a secret reunion with his fiancee, Charlotte Eberli (Kaaren Verne), and gives her an envelope containing a coded message. He tells Charlotte to give it to Holmes if anything should happen to him. German spies' attempt to abduct Tobel as he leaves Charlotte's apartment is foiled by a passing London bobby. Tobel successfully demonstrates the bombsight for Sir Reginald Bailey (Holmes Herbert) and observers from Bomber Command. Tobel, now under the protection of Inspector Lestrade (Dennis Hoey) and Scotland Yard, tells Sir Reginald that, although willing to provide the British with his bombsight, only he will know its secret and has a complex plan for its manufacture to keep the secret safe. He separates his invention into four parts and gives one to each of four Swiss scientists, know
https://en.wikipedia.org/wiki/Birotunda
In geometry, a birotunda is any member of a family of dihedral-symmetric polyhedra, formed from two rotundas adjoined through the largest face. A birotunda is similar to a bicupola, but instead of alternating squares and triangles, it alternates pentagons and triangles around an axis. There are two forms, ortho- and gyro-: in an orthobirotunda, one of the two rotundas is placed as the mirror reflection of the other, while in a gyrobirotunda one rotunda is twisted relative to the other. The pentagonal birotundas can be formed with regular faces, one a Johnson solid, the other a semiregular polyhedron: the pentagonal orthobirotunda and the pentagonal gyrobirotunda, which is also called an icosidodecahedron. Other forms can be generated with dihedral symmetry and distorted equilateral pentagons. See also: Gyroelongated pentagonal birotunda, Elongated pentagonal orthobirotunda, Elongated pentagonal gyrobirotunda
https://en.wikipedia.org/wiki/John%20Bridges%20%28software%20developer%29
John Bridges is the co-author of the computer program PCPaint and primary developer of the program GRASP for Microtex Industries with Doug Wolfgram. He is also the sole author of GLPro and AfterGRASP. His article entitled "Differential Image Compression" was published in the February 1991 issue of Dr. Dobb's Journal. Early work In 1980 Bridges started his programming career at the NYU Institute for Reconstructive Plastic Surgery as a summer intern, working with sophisticated programmable vector graphics systems. He wrote editing tools and also updated and debugged software used for early 3D x-ray scanning research. From 1981-85 Bridges wrote the RAM disk drivers, utilities, cracking software, task switching software, and memory test diagnostics for Abacus, a maker of large memory cards for the Apple II. In 1982, he started working for Classroom Consortia Media, Inc., an educational software company, developing and writing Apple and IBM graphics libraries and tools for their software. During his tenure there he created a drawing program called SuperDraw for CCM, and on his own wrote the core graphics code for what would later become PCPaint, as well as develop the GRASP GL library format. PCPaint In 1984, Bridges developed the first version of PCPaint with Doug Wolfgram for Mouse Systems. PCPaint was the first IBM PC-based mouse driven GUI paint program. The company purchased the exclusive rights to PCPaint, and John continued development until 1990. GRASP In 1985, Bridges' PCPaint code and Doug's slideshow program morphed into a new program, GRASP. GRASP was the first multimedia animation program for the IBM PC and created the GRASP GL library format. GRASP was originally released as shareware through Doug's company, Microtex Industries. However, version 2.0 and after were sold commercially by Paul Mace Software. Doug sold his shares of both PCPaint and GRASP to Bridges in 1990, and Bridges' work on GRASP continued through 1994, when he terminated the con
https://en.wikipedia.org/wiki/Clipping%20%28signal%20processing%29
Clipping is a form of distortion that limits a signal once it exceeds a threshold. Clipping may occur when a signal is recorded by a sensor that has constraints on the range of data it can measure, it can occur when a signal is digitized, or it can occur any other time an analog or digital signal is transformed, particularly in the presence of gain or overshoot and undershoot. Clipping may be described as hard, in cases where the signal is strictly limited at the threshold, producing a flat cutoff; or it may be described as soft, in cases where the clipped signal continues to follow the original at a reduced gain. Hard clipping results in many high-frequency harmonics; soft clipping results in fewer higher-order harmonics and intermodulation distortion components. Audio In the frequency domain, clipping produces strong harmonics in the high-frequency range (as the clipped waveform comes closer to a squarewave). The extra high-frequency weighting of the signal could make tweeter damage more likely than if the signal was not clipped. Many electric guitar players intentionally overdrive their amplifiers (or insert a "fuzz box") to cause clipping in order to get a desired sound (see guitar distortion). In general, the distortion associated with clipping is unwanted, and is visible on an oscilloscope even if it is inaudible. Images In the image domain, clipping is seen as desaturated (washed-out) bright areas that turn to pure white if all color components clip. In digital colour photography, it is also possible for individual colour channels to clip, which results in inaccurate colour reproduction. Causes Analog circuitry A circuit designer may intentionally use a clipper or clamper to keep a signal within a desired range. When an amplifier is pushed to create a signal with more power than it can support, it will amplify the signal only up to its maximum capacity, at which point the signal will be amplified no further. An integrated circuit or discrete
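The hard/soft distinction drawn above is easy to see numerically: a hard clipper flattens the waveform at the threshold, while a soft clipper bends it toward the threshold instead. The sketch below is illustrative only; the tanh shaper is one common choice of soft-clipping curve, not the only one.

```python
import numpy as np

# Illustrative comparison of hard and soft clipping on a sine that exceeds the threshold.
t = np.linspace(0, 1, 1000, endpoint=False)
x = 1.5 * np.sin(2 * np.pi * 5 * t)        # peaks at 1.5, above the +/-1.0 threshold

hard = np.clip(x, -1.0, 1.0)               # flat tops: strong high-order harmonics
soft = np.tanh(x)                          # rounded tops: fewer high-order harmonics

print(hard.max(), soft.max())              # 1.0  vs  ~0.905
```

Taking an FFT of `hard` and `soft` would show the difference in harmonic content described above, with the hard-clipped signal approaching the odd-harmonic spectrum of a square wave.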
https://en.wikipedia.org/wiki/Attribute-based%20access%20control
Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes. ABAC is a method of implementing access control policies that is highly adaptable and can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes. ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes. Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges associated with them and to which subjects are assigned, ABAC can express complex rule sets that can evaluate many different attributes. Through defining consistent subject and object attributes into security policies, ABAC eliminates the need for explicit authorizations to individuals’ subjects needed in a non-ABAC access method, reducing the complexity of managing access lists and groups. Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value. Examples are role and project. Atomic-valued attributes contain only one atomic value. Examples are clearance and sensitivity. Attributes can be compared to static values or to one another, thus enabling relation-based access control. Although the concept itself existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware and risk-intelligent access control to resources allowing access control policies that include specific attributes from many different information sy
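Since ABAC rules are Boolean functions of subject, object, action, and environment attributes, a single policy can be sketched directly as such a function. The attribute names below are illustrative assumptions, not a schema from any particular ABAC product; note that one subject attribute (roles) is set-valued while the others are atomic-valued, as discussed above.

```python
# Minimal ABAC sketch: a policy rule as a Boolean function of subject, object,
# action, and environment attributes. Attribute names are illustrative only.
def can_access(subject: dict, resource: dict, action: str, environment: dict) -> bool:
    return (
        action == "read"
        and subject["department"] == resource["owning_department"]   # relation-based check
        and subject["clearance"] >= resource["sensitivity"]          # atomic-valued attributes
        and environment["network"] == "internal"                     # environment attribute
    )

print(can_access(
    subject={"department": "finance", "clearance": 3, "roles": {"analyst"}},  # roles is set-valued
    resource={"owning_department": "finance", "sensitivity": 2},
    action="read",
    environment={"network": "internal"},
))   # True
```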
https://en.wikipedia.org/wiki/Space-based%20architecture
A Space-based architecture (SBA) is an approach to distributed computing systems where the various components interact with each other by exchanging tuples or entries via one or more shared spaces. This is contrasted with the more common message queuing service approaches where the various components interact with each other by exchanging messages via a message broker. In a sense, both approaches exchange messages with some central agent, but how they exchange messages is very distinctive. An analogy might be that a message broker is like an academic conference, where each presenter has the stage and presents in the order they are scheduled, whereas a tuple space is like an unconference, where all participants can write on a common whiteboard concurrently, and all can see it at the same time.

Tuple Spaces
- each space is like a 'channel' in a message broker system that components can choose to interact with
- components can write a 'tuple' or 'entry' into a space, while other components can read entries/tuples from the space, but using more powerful mechanisms than message brokers
- writing entries to a space is generally not ordered as in a message broker, but can be if necessary
- designing applications using this approach is less intuitive to most people, and can present more cognitive load to appreciate and exploit

Message Brokers
- each broker typically supports multiple 'channels' that components can choose to interact with
- components write 'messages' to a channel, while other components read messages from the channel
- writing messages to a channel is generally in order, and messages are generally read out in the same order
- designing applications using this approach is more intuitive to most people, somewhat in the way that NoSQL databases are more intuitive than SQL

A key goal of both approaches is to create loosely-coupled systems that minimize configuration, especially shared knowledge of who does what, leading to the objectives of better availability, resili
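The write/read contrast in the two lists above can be made concrete with a toy tuple space. The sketch below is a minimal in-process illustration, not any real SBA product: components write tuples into a shared space and take them back out by pattern matching, with no broker-style ordering guarantee.

```python
import threading

# Minimal in-process tuple space sketch (illustrative only).
class TupleSpace:
    def __init__(self):
        self._tuples, self._lock = [], threading.Lock()

    def write(self, tup: tuple) -> None:
        with self._lock:
            self._tuples.append(tup)

    def take(self, pattern: tuple):
        """Remove and return the first tuple matching the pattern (None acts as a wildcard)."""
        with self._lock:
            for i, tup in enumerate(self._tuples):
                if len(tup) == len(pattern) and all(
                        p is None or p == v for p, v in zip(pattern, tup)):
                    return self._tuples.pop(i)
        return None

space = TupleSpace()
space.write(("order", 42, "new"))
space.write(("invoice", 7, "paid"))
print(space.take(("order", None, None)))   # ('order', 42, 'new')
```

The pattern-matching `take` is the "more powerful mechanism" referred to above: consumers select entries by content rather than draining a channel in arrival order.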
https://en.wikipedia.org/wiki/Crush%20%28video%20game%29
Crush is a 2007 puzzle-platform game developed by Kuju Entertainment's Zoë Mode studio and published by Sega for the PlayStation Portable. Its protagonist is Danny, a young man suffering from insomnia, who uses an experimental device to explore his mind and discover the cause of his sleeplessness. Each level of the game, representing events from Danny's life and inspired by artists such as Tim Burton and M.C. Escher, requires the player to control Danny as he collects his "lost marbles" and other thoughts. Crushs primary gameplay feature involves manipulating each game level between 3D and 2D views, allowing the player to reach platforms and locations inaccessible from within a different view. This element was noted by critics to be similar to one in Super Paper Mario, also released in 2007, though the Zoë Mode team had envisioned the concept five years prior. Crush received positive reviews upon release, with critics praising its incorporation of this dimension-shifting component alongside other aspects of the game presentation. Though Crush won several gaming awards, including PSP game of the month, it failed to meet the developer's sales expectations. A port of the game for the Nintendo 3DS called CRUSH3D was announced in January 2011 and was made available in January 2012 in Europe; in February 2012 in Australia; and in March 2012 in North America. Plot While Crush and its Nintendo 3DS port CRUSH3D retain the same gameplay mechanics and premise, the two versions feature different plots. PSP version The protagonist of the game, a young man named Danny, suffers from chronic insomnia caused by worry, stress, and repressed memories. He is admitted to a mental institution for it, where he consults a mad scientist, Dr. Reubens, who treats Danny with his Cognitive Regression Utilizing pSychiatric Heuristics (C.R.U.S.H.) device, which has a sentient female persona. The device's helmet places Danny under hypnosis, during which he can regain control of his sanity by
https://en.wikipedia.org/wiki/Distributed%20networking
Distributed networking is a distributed computing network system where components of the program and data depend on multiple sources. Overview Distributed networking, used in distributed computing, is the network system over which computer programming, software, and its data are spread out across more than one computer, but communicate complex messages through their nodes (computers), and are dependent upon each other. The goal of a distributed network is to share resources, typically to accomplish a single or similar goal. Usually, this takes place over a computer network; however, internet-based computing is rising in popularity. Typically, a distributed networking system is composed of processes, threads, agents, and distributed objects. Merely distributing physical components is not enough to constitute a distributed network; typically distributed networking uses concurrent program execution. Client/server Client/server computing is a type of distributed computing where one computer, a client, requests data from the server, a primary computing center, which responds to the client directly with the requested data, sometimes through an agent. Client/server distributed networking is also popular in web-based computing. Client/server is the principle that a client computer can provide certain capabilities for a user and request others from other computers that provide services for the clients. The Web's Hypertext Transfer Protocol is basically all client/server. Agent-based A distributed network can also be agent-based, where what controls the agent or component is loosely defined, and the components can have either pre-configured or dynamic settings. Decentralized Decentralization is where each computer on the network can be used for the computing task at hand, which is the opposite of the client/server model. Typically, only idle computers are used, and in this way, it is thought that networks are more efficient. Peer-to-peer (P2P) computation is based on a
https://en.wikipedia.org/wiki/Johann%20Christian%20Martin%20Bartels
Johann Christian Martin Bartels (12 August 1769 – ) was a German mathematician. He was the tutor of Carl Friedrich Gauss in Brunswick and the educator of Lobachevsky at the University of Kazan. Biography Bartels was born in Brunswick, in the Duchy of Brunswick-Lüneburg (now part of Lower Saxony, Germany), the son of pewterer Heinrich Elias Friedrich Bartels and his wife Johanna Christine Margarethe Köhler. In his childhood he showed a great interest in mathematics. In 1783 he was employed as an assistant to the teacher Büttner in the Katherinenschule in Brunswick. He became acquainted with Carl Friedrich Gauss there and encouraged his talent and recommended him to the Duke of Brunswick who awarded Gauss a fellowship to the Collegium Carolinum (now Technical University of Brunswick). A friendship developed between Gauss and Bartels and they corresponded between 1799 and 1823. From 23 August 1788 he was a visitor at the Collegium Carolinum in Brunswick. On 23 October 1791 Bartels studied mathematics under Johann Friedrich Pfaff in Helmstedt and Abraham Gotthelf Kästner in Göttingen. In the winter semester of 1793/1794 he studied Experimental Physics, Astronomy, Meteorology and Geology under Georg Christoph Lichtenberg. In 1800 he worked in Switzerland as Professor of Mathematics in Reichenau (Canton Graubünden). In 1801 he was active in the cantonal school in Aarau. He married Anna Magdalena Saluz from Chur in 1802. The University of Jena promoted him to the Faculty of Philosophy in 1803. In 1807 he was invited to join the University of Kazan by the founder Stepan Jakowlewitsch Rumowski (1734–1812), and went there in 1808 where he was appointed to the chair of Mathematics. During his twelve years tenure he lectured on the History of Mathematics, Higher Arithmetic, Differential and Integral Calculus, Analytical Geometry and Trigonometry, Spherical Trigonometry, Analytical Mechanics and Astronomy. During this time he taught Nikolai Ivanovich Lobachevsky. I