https://en.wikipedia.org/wiki/Scoring%20functions%20for%20docking
In the fields of computational chemistry and molecular modelling, scoring functions are mathematical functions used to approximately predict the binding affinity between two molecules after they have been docked. Most commonly one of the molecules is a small organic compound such as a drug and the second is the drug's biological target such as a protein receptor. Scoring functions have also been developed to predict the strength of intermolecular interactions between two proteins or between a protein and DNA.

Utility

Scoring functions are widely used in drug discovery and other molecular modelling applications. These include:
- Virtual screening of small molecule databases of candidate ligands to identify novel small molecules that bind to a protein target of interest and therefore are useful starting points for drug discovery
- De novo design (design "from scratch") of novel small molecules that bind to a protein target
- Lead optimization of screening hits to optimize their affinity and selectivity

A potentially more reliable but much more computationally demanding alternative to scoring functions is free energy perturbation calculations.

Prerequisites

Scoring functions are normally parameterized (or trained) against a data set consisting of experimentally determined binding affinities between molecular species similar to the species that one wishes to predict. For currently used methods aiming to predict affinities of ligands for proteins, the following must first be known or predicted:
- Protein tertiary structure – the arrangement of the protein atoms in three-dimensional space. Protein structures may be determined by experimental techniques such as X-ray crystallography or solution-phase NMR methods, or predicted by homology modelling.
- Ligand active conformation – the three-dimensional shape of the ligand when bound to the protein
- Binding mode – the orientation of the two binding partners relative to each other in the complex

The above information yields the three-di
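As a rough illustration of the idea only (not any published scoring function; the functional form, parameter values and names below are hypothetical), a simple empirical score might sum a Lennard-Jones-like term over all ligand–protein atom pairs of a docked pose:

import math

def pair_score(distance, depth=0.2, r_min=3.5):
    """Toy Lennard-Jones-like term for one ligand-protein atom pair.
    depth ~ well depth (kcal/mol), r_min ~ optimal contact distance (angstrom)."""
    ratio = r_min / distance
    return depth * (ratio**12 - 2.0 * ratio**6)

def score_pose(ligand_atoms, protein_atoms):
    """Sum the pairwise terms over a docked pose; lower score = better predicted binding."""
    return sum(
        pair_score(math.dist(la, pa))
        for la in ligand_atoms
        for pa in protein_atoms
    )

# Example: one ligand atom at the optimal distance from one protein atom
print(score_pose([(0.0, 0.0, 0.0)], [(3.5, 0.0, 0.0)]))  # -> -0.2

Real scoring functions add electrostatic, hydrogen-bond, desolvation and entropic terms, with parameters trained against experimental affinities as described above.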
https://en.wikipedia.org/wiki/Information%20Trust%20Institute
The Information Trust Institute (ITI) was founded in 2004 as an interdisciplinary unit designed to approach information security research from a systems perspective. It examines information security by looking at what makes machines, applications, and users trustworthy. Its mission is to create computer systems, software, and networks that society can depend on to be trustworthy, meaning secure, dependable (reliable and available), correct, safe, private, and survivable. ITI's stated goal is to create a new paradigm for designing trustworthy systems from the ground up and validating systems that are intended to be trustworthy.

Participants

ITI is an academic/industry partnership focusing on application areas such as electric power, financial systems, defense, and homeland security, among others. It brings together over 100 researchers representing numerous colleges and units at the University of Illinois at Urbana–Champaign.

Major centers within ITI
- Boeing Trusted Software Center
- CAESAR: the Center for Autonomous Engineering Systems and Robotics
- the Center for Information Forensics
- Center for Health Information Privacy and Security
- the NSA Center for Information Assurance Education and Research
- TCIPG: the Trustworthy Cyber Infrastructure for the Power Grid Center
- Trusted ILLIAC
https://en.wikipedia.org/wiki/Gravastar
A gravastar is an object hypothesized in astrophysics by Pawel O. Mazur and Emil Mottola as an alternative to the black hole theory. It has the usual black hole metric outside of the horizon, but a de Sitter metric inside. On the horizon there is a thin shell of matter. The term "gravastar" is a portmanteau of the words "gravitational vacuum star".

Structure

In the original formulation by Mazur and Mottola, gravastars contain a central region featuring a false vacuum or "dark energy", a thin shell of perfect fluid, and a true vacuum exterior. The dark energy-like behavior of the inner region prevents collapse to a singularity, and the presence of the thin shell prevents the formation of an event horizon, avoiding the infinite blue-shift. The inner region has no thermodynamic entropy and may be thought of as a gravitational Bose–Einstein condensate. Severe red-shifting of photons as they climb out of the gravity well would make the fluid shell also seem very cold, almost absolute zero.

In addition to the original thin shell formulation, gravastars with continuous pressure have been proposed. These objects must contain anisotropic stress.

Externally, a gravastar appears similar to a black hole: it is visible by the high-energy radiation it emits while consuming matter, and by the Hawking radiation it creates. Astronomers search the sky for X-rays emitted by infalling matter to detect black holes. A gravastar would produce an identical signature. It is also possible, if the thin shell is transparent to radiation, that gravastars may be distinguished from ordinary black holes by different gravitational lensing properties, as null geodesics may pass through.

Mazur and Mottola suggest that the violent creation of a gravastar might be an explanation for the origin of our universe and many other universes, because all the matter from a collapsing star would implode "through" the central hole and explode into a new dimension and expand forever, which would be consistent
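In the simplest static, spherically symmetric idealization (a textbook-style sketch rather than the exact construction of Mazur and Mottola's paper), the line element in both regions takes the familiar form

ds² = −f(r) dt² + f(r)⁻¹ dr² + r² dΩ²

with f(r) = 1 − 2GM/(c²r) in the Schwarzschild exterior and f(r) = 1 − r²/R² in the de Sitter interior (R being the de Sitter horizon scale), the thin shell of fluid sitting near r ≈ 2GM/c², where the two metric functions are matched.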
https://en.wikipedia.org/wiki/Kynurenine%20pathway
The kynurenine pathway is a metabolic pathway leading to the production of nicotinamide adenine dinucleotide (NAD+). Metabolites involved in the kynurenine pathway include tryptophan, kynurenine, kynurenic acid, xanthurenic acid, quinolinic acid, and 3-hydroxykynurenine. The kynurenine pathway is responsible for about 95% of total tryptophan catabolism. Disruption in the pathway is associated with certain genetic and psychiatric disorders.

Kynurenine pathway dysfunction

Disorders affecting the kynurenine pathway may be primary (of genetic origin) or secondary (due to inflammatory conditions). Peripheral inflammation can lead to a build-up of kynurenine in the brain, and this is associated with major depressive disorder, bipolar disorder, and schizophrenia. Dysfunction of the pathway not only increases the amounts of metabolites such as quinolinic acid and kynurenic acid but also affects the synthesis of serotonin and melatonin. Kynurenine clearance in exercised muscle cells can suppress the build-up in the brain.

Hydroxykynureninuria

Also known as kynureninase deficiency, this extremely rare inherited disorder is caused by the defective enzyme kynureninase, which leads to a block in the pathway from tryptophan to niacin (nicotinic acid). As a result, tryptophan is no longer a source of niacin, leading to pellagra (niacin deficiency). Both B6-responsive and B6-unresponsive forms are known. Patients with this disorder excrete excessive amounts of xanthurenic acid, kynurenic acid, 3-hydroxykynurenine, and kynurenine after tryptophan loading and are said to suffer from tachycardia, irregular breathing, arterial hypotension, cerebellar ataxia, developmental retardation, coma, renal tubular dysfunction, renal or metabolic acidosis, and even death. The only biochemical abnormality noted in affected patients was a massive hyperkynureninuria, seen only during periods of coma or after intravenous protein loading. This disturbance was temporarily corrected by
https://en.wikipedia.org/wiki/Klavs%20F.%20Jensen
Klavs Flemming Jensen (born August 5, 1952) is a chemical engineer who is currently the Warren K. Lewis Professor at the Massachusetts Institute of Technology (MIT). Jensen was elected a member of the National Academy of Engineering in 2002 for fundamental contributions to multi-scale chemical reaction engineering with important applications in microelectronic materials processing and microreactor technology. From 2007 to July 2015 he was the Head of the Department of Chemical Engineering at MIT.

Education and career

Jensen received his chemical engineering education from the Technical University of Denmark (M.Sc., 1976) and the University of Wisconsin–Madison (PhD, 1980). Jensen's PhD advisor was W. Harmon Ray. In 1980, Jensen became assistant professor of chemical engineering and materials science at the University of Minnesota, before being promoted to associate professor in 1984 and full professor in 1988. In 1989, he moved to the Massachusetts Institute of Technology. At MIT, Jensen has been the Joseph R. Mares Career Development Chair in Chemical Engineering (1989–1994), the Lammot du Pont Professor of Chemical Engineering (1996–2007), and the Warren K. Lewis Professor of Chemical Engineering (2007–present). Jensen served as Head of the MIT Department of Chemical Engineering from 2007 to 2015. In 2015, Jensen became the founding chair of the Royal Society of Chemistry journal Reaction Chemistry & Engineering, which focuses on bridging the gap between chemistry and chemical engineering.

Research

Jensen's research revolves around reaction and separation techniques for on-demand multistep synthesis, methods for automated synthesis, and microsystems for biological discovery and manipulation. He is considered one of the pioneers of flow chemistry. Jensen, Armon Sharei and Robert S. Langer were the founders of SQZ Biotech. The trio, together with Andrea Adamo, developed the cell squeezing meth
https://en.wikipedia.org/wiki/Potassium%20sorbate
Potassium sorbate is the potassium salt of sorbic acid, chemical formula CH3CH=CH−CH=CH−CO2K. It is a white salt that is very soluble in water (58.2% at 20 °C). It is primarily used as a food preservative (E number 202). Potassium sorbate is effective in a variety of applications including food, wine, and personal-care products. While sorbic acid occurs naturally in rowan and hippophae berries, virtually all of the world's supply of sorbic acid, from which potassium sorbate is derived, is manufactured synthetically.

Production

Potassium sorbate is produced industrially by neutralizing sorbic acid with potassium hydroxide. The precursor sorbic acid is produced in a two-step process via the condensation of crotonaldehyde and ketene.

Uses

Potassium sorbate is used to inhibit molds and yeasts in many foods, such as cheese, wine, yogurt, dried meats, apple cider, dehydrated fruits, soft drinks and fruit drinks, and baked goods. It is used in the preparation of items such as hotcake syrup and milkshakes served by fast-food restaurants such as McDonald's. It can also be found in the ingredients list of many dried fruit products. In addition, herbal dietary supplement products generally contain potassium sorbate, which acts to prevent mold and microbes and to increase shelf life. It is used in quantities at which no adverse health effects are known over short periods of time. Labeling of this preservative on ingredient statements reads as "potassium sorbate" or "E202". It is also used in many personal-care products to inhibit the development of microorganisms for shelf stability. Some manufacturers are using this preservative as a replacement for parabens. Tube feeding of potassium sorbate reduces the gastric burden of pathogenic bacteria.

Also known as "wine stabilizer", potassium sorbate produces sorbic acid when added to wine. It serves two purposes. When active fermentation has ceased and the wine is racked for the final time after clearing, potassium sorbate
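The neutralization step described under Production can be written as a standard acid–base equation, consistent with the formula given above:

CH3CH=CH−CH=CH−CO2H + KOH → CH3CH=CH−CH=CH−CO2K + H2O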
https://en.wikipedia.org/wiki/Plastic%20crystal
A plastic crystal is a crystal composed of weakly interacting molecules that possess some orientational or conformational degree of freedom. The name plastic crystal refers to the mechanical softness of such phases: they resemble waxes and are easily deformed. If the internal degree of freedom is molecular rotation, the name rotor phase or rotatory phase is also used. Typical examples are the modifications Methane I and Ethane I.

In addition to the conventional molecular plastic crystals, there are also emerging ionic plastic crystals, particularly organic ionic plastic crystals (OIPCs) and protic organic ionic plastic crystals (POIPCs). POIPCs are solid protic organic salts formed by proton transfer from a Brønsted acid to a Brønsted base; in essence they are protic ionic liquids in the molten state, and they have been found to be promising solid-state proton conductors for high-temperature proton-exchange membrane fuel cells. Examples include 1,2,4-triazolium perfluorobutanesulfonate and imidazolium methanesulfonate.

If the internal degree of freedom freezes in a disordered way, an orientational glass is obtained. The orientational degree of freedom may be an almost free rotation, or it may be a jump diffusion between a restricted number of possible orientations, as was shown for carbon tetrabromide.

X-ray diffraction patterns of plastic crystals are characterized by strong diffuse intensity in addition to the sharp Bragg peaks. In a powder pattern this intensity appears to resemble an amorphous background, as one would expect for a liquid, but for a single crystal the diffuse contribution reveals itself to be highly structured. The Bragg peaks can be used to determine an average structure, but due to the large amount of disorder this is not very insightful. It is the structure of the diffuse scattering that reflects the details of the constrained disorder in the system. Recent advances in two-dimensional detection at synchrotron beam lines facilitate the study of such pattern
https://en.wikipedia.org/wiki/Spaghettification
In astrophysics, spaghettification (sometimes referred to as the noodle effect) is the vertical stretching and horizontal compression of objects into long thin shapes (rather like spaghetti) in a very strong, non-homogeneous gravitational field. It is caused by extreme tidal forces. In the most extreme cases, near a black hole, the stretching and compression are so powerful that no object can resist them. Within a small region, the horizontal compression balances the vertical stretching so that a small object being spaghettified experiences no net change in volume.

Stephen Hawking described the flight of a fictional astronaut who, passing within a black hole's event horizon, is "stretched like spaghetti" by the gravitational gradient (difference in gravitational force) from head to toe. The reason this happens is that the gravitational force exerted by the singularity would be much stronger at one end of the body than the other. If one were to fall into a black hole feet first, the gravity at their feet would be much stronger than at their head, causing the person to be vertically stretched. At the same time, the right side of the body would be pulled to the left, and the left side of the body would be pulled to the right, horizontally compressing the person. However, the term "spaghettification" was established well before this. Spaghettification of a star was imaged for the first time in 2018 by researchers observing a pair of colliding galaxies approximately 150 million light-years from Earth.

A simple example

In this example, four separate objects are in the space above a planet, positioned in a diamond formation. The four objects follow the lines of the gravitoelectric field, directed towards the celestial body's centre. In accordance with the inverse-square law, the lowest of the four objects experiences the biggest gravitational acceleration, so that the whole formation becomes stretched into a line. These four objects are connected parts of a larger obj
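The size of the effect can be estimated with the standard Newtonian tidal approximation (a textbook formula, not specific to this article): for a body of length L whose centre is at distance r from a mass M, the difference in gravitational acceleration between the near and far ends is roughly

Δa ≈ 2GML/r³

which grows steeply as r shrinks, so near a stellar-mass black hole the stretching along the fall direction quickly exceeds what any material can withstand.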
https://en.wikipedia.org/wiki/Video%20interlude
A video interlude is an interlude during a performance that shows a video. Video interludes are often played in concerts, showing a music video (often made specifically for the show) that usually features the artist, while the artist takes a break or changes costume. Video interludes have been used by Madonna since at least 1990.
https://en.wikipedia.org/wiki/ECS%20Solid%20State%20Letters
ECS Solid State Letters (SSL) was a peer-reviewed scientific journal covering the field of solid state science and technology. The journal was established in 2012 and was published by the Electrochemical Society. SSL ceased publication at the end of 2015. The editor-in-chief was Dennis W. Hess (Georgia Institute of Technology). According to the Journal Citation Reports, the journal had a 2014 impact factor of 1.162.
https://en.wikipedia.org/wiki/Synchronous%20Ethernet
Synchronous Ethernet, also referred to as SyncE, is an ITU-T standard for computer networking that facilitates the transfer of clock signals over the Ethernet physical layer. This signal can then be made traceable to an external clock.

Overview

The aim of Synchronous Ethernet is to provide a synchronization signal to those network resources that may eventually require such a type of signal. The Synchronous Ethernet signal transmitted over the Ethernet physical layer should be traceable to an external clock, ideally a master and unique clock for the whole network. Applications include cellular networks, access technologies such as Ethernet passive optical network, and applications such as IPTV or VoIP, as well as CERN's White Rabbit Project for sub-nanosecond time synchronization of data acquisition equipment for their high-energy experiments.

Unlike time-division multiplexing networks, the Ethernet family of computer networks does not carry clock synchronization information. Several means are defined to address this issue, among them the IETF's Network Time Protocol and the IEEE 1588-2008 Precision Time Protocol. SyncE was standardized by the ITU-T, in cooperation with IEEE, as three recommendations:
- ITU-T Rec. G.8261, which defines aspects of the architecture and the wander performance of SyncE networks
- ITU-T Rec. G.8262, which specifies Synchronous Ethernet clocks for SyncE
- ITU-T Rec. G.8264, which describes the specification of the Ethernet Synchronization Messaging Channel (ESMC)

SyncE architecture minimally requires replacement of the internal clock of the Ethernet card by a phase-locked loop in order to feed the Ethernet PHY (as sketched after this article's text).

Architecture

Extension of the synchronization network to consider Ethernet as a building block (ITU-T G.8261). This enables Synchronous Ethernet network equipment to be connected to the same synchronization network as Synchronous Digital Hierarchy (SDH). Synchronization for SDH can be transported over Ethernet and vice versa.

Clocks

ITU
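The clock-recovery idea behind the phase-locked loop mentioned above can be caricatured in a few lines. This is purely illustrative (names and numbers are invented; a real SyncE clock is a hardware PLL that must meet ITU-T G.8262 jitter and wander requirements):

def syntonize(recovered_hz, local_hz, bandwidth=0.05, steps=200):
    """Toy loop: steer the local oscillator toward the line frequency
    recovered from the incoming signal; bandwidth plays the role of
    the PLL loop-filter gain."""
    freq = local_hz
    for _ in range(steps):
        error = recovered_hz - freq      # frequency error seen by the loop
        freq += bandwidth * error        # low-pass correction toward lock
    return freq

# A 25 MHz oscillator that is 100 ppm off gets pulled onto the reference:
print(syntonize(recovered_hz=25_000_000.0, local_hz=25_002_500.0))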
https://en.wikipedia.org/wiki/Adult-gerontology%20nurse%20practitioner
An adult-gerontology nurse practitioner (AGNP) is a nurse practitioner that specializes in continuing and comprehensive healthcare for adults across the lifespan, from adolescence to old age.

Education and board certification

Following educational preparation at the master's or doctoral level, AGNPs must become board certified by an approved certification body. Board certification must be maintained by obtaining nursing continuing education credits. To align with the Consensus Model for APRN Regulation developed by the National Council of State Boards of Nursing, certification exams and credentials are in transition. Prior to the consensus statement, adult health nurse practitioners (NPs) and gerontological NPs were educated and certified separately. The consensus model combined these into a single population focus. The specialty is further divided into primary care and acute care. In the US, board certification is provided through the American Association of Critical-Care Nurses (which awards the ACNPC-AG credential for acute care), through the American Nurses Credentialing Center (which awards the AGACNP-BC credential for acute care and the AGPCNP-BC credential for primary care), or through the American Association of Nurse Practitioners certification program (which awards the NP-C credential for primary care).

Scope of practice

AGNPs deliver a range of acute, chronic and preventive healthcare services. In addition to diagnosing and treating illness, they also provide preventive care, including routine checkups, health-risk assessments, immunization and screening tests, and personalized counseling on maintaining a healthy lifestyle. AGNPs also manage chronic illness, often coordinating care provided by specialty physicians. AGNPs that work in acute care settings often care for hospitalized patients in collaboration with physicians and other providers. AGNPs can be found practicing in a variety of medical facilities including hospices, long-term care facilities, hospitals, home-based car
https://en.wikipedia.org/wiki/IDEX%20Biometrics
IDEX Biometrics ASA is a Norwegian biometrics company, specialising in fingerprint imaging and fingerprint recognition technology. The company was founded in 1996 and is headquartered in Oslo, but its main operation is in the US, with offices in New York and Massachusetts. The company also has offices in the UK and China. IDEX offers fingerprint sensors and biometric software for identity cards, banking cards, smart cards, access control, healthcare, IoT and other security solutions. Fingerprint recognition is one form of biometric identification, other examples being DNA, face recognition, iris recognition and retinal scan, as well as identification based on behavioral patterns such as speaker recognition, keystroke dynamics and signature recognition.

Technology

In 2010, IDEX SmartFinger Film was launched, a thin and flexible fingerprint sensor that allows the entire fingerprint sensor system to be built into a standard plastic card, such as credit cards, bank cards and national ID cards. Such system-on-card solutions, with an embedded microprocessor running the biometric algorithms and storing the biometric user data, ensure that biometric data never leaves the card itself, which safeguards privacy.

In the fall of 2012, IDEX changed its market focus. Until then, the company had emphasized the use of the technology in cards. The shift came after Apple acquired IDEX's competitor AuthenTec. This acquisition was a harbinger that fingerprint readers would be built into mobile phones, IDEX believed, a view reinforced when Apple introduced its iPhone with a fingerprint reader in the fall of 2013. Since then, several mobile manufacturers have followed Apple's example and launched mobile phones with fingerprint solutions. IDEX's SmartFinger Film sensor technology is based on polymer process technologies and offers small, ultra-thin and flexible swipe fingerprint sensors. IDEX holds early patents for low-cost capacitive fingerprint sensors and has a cross-licence with Apple rela
https://en.wikipedia.org/wiki/Geobacter%20daltonii
Geobacter daltonii is a Gram-negative, Fe(III)- and uranium(VI)-reducing and non-spore-forming bacterium from the genus Geobacter. It was isolated from sediments from the Oak Ridge Field Research Center in Oak Ridge, Tennessee, in the United States. The specific epithet "daltonii" honors Dava Dalton, who performed the initial isolation of the strain but died shortly thereafter.

Characteristics

Geobacter species are known for their ability to facilitate extracellular electron transfer. A notable feature of G. daltonii specifically is its electrically conductive pili, which allow connections to other cells, free minerals in its environment, and electrodes. This may have implications for the utilization of G. daltonii as a tool in the environmental remediation of U(VI). In 2022 a proposal was made to reclassify organisms in the Deltaproteobacteria class, including G. daltonii.

See also
- List of bacterial orders
- List of bacteria genera
https://en.wikipedia.org/wiki/Thomas%20Pierson
Thomas Pierson (March 22, 1950 – February 20, 2014) was founder and CEO of the SETI Institute (search for extraterrestrial intelligence), a non-profit institute conducting research in astrobiology.

Early life and education

Tom Pierson was raised and educated in the public schools of Norman, Oklahoma. He attended the University of Oklahoma, where he received a bachelor's degree in business administration with dual majors in management and accounting. In 1974 he was recruited by the then new Sonoma State University to lead the establishment of a university foundation for faculty research. This effort led to several leadership roles in the California State University system-wide Auxiliary Organizations Association, and eventually to an appointment as associate director of the much larger foundation at San Francisco State University (SFSU) where, for almost nine years, he led the research development and administration programs of the university. While at SFSU, he earned an MBA degree in 1981, writing a thesis on the effect of differing leadership styles in higher education management.

Involvement in SETI

During this period at SFSU he assisted an adjunct faculty member (Professor Charles Seeger) in obtaining research funding for NASA's fledgling SETI research program (Search for Extraterrestrial Intelligence) and through this introduction met many of the early SETI researchers. At that time, a good deal of NASA's research on SETI was conducted through grants to universities, with the university personnel performing the work in NASA facilities. In 1984, encouraged by scientists such as Drs. Barney Oliver, John Billingham, and Jill Tarter, Pierson looked for a way to more efficiently use SETI funding. Later that year, at an informal social gathering of SETI scientists, he presented his solution: to develop a non-profit research organization that could serve as an institutional home for scientists and engineers interested in devoting their careers to the study of life in th
https://en.wikipedia.org/wiki/Metal%20bellows
Metal bellows are elastic vessels that can be compressed when pressure is applied to the outside of the vessel, or extended under vacuum. When the pressure or vacuum is released, the bellows will return to its original shape, provided the material has not been stressed past its yield strength. They are used both for their ability to deform under pressure and to provide a hermetic seal that allows movement. Precision bellows technology of the 20th and 21st centuries is centered on metal bellows, with less demanding applications using ones made of rubber or plastic. These products bear little resemblance to the original leather bellows used traditionally in fireplaces and forges.

Types

There are three main types of metal bellows: formed, welded and electroformed.

Formed bellows are produced by reworking tubes, normally produced by deep drawing, with a variety of processes, including cold forming (rolling) and hydroforming. They are also called convoluted bellows or sylphons.

Welded bellows (also called edge-welded or diaphragm bellows) are manufactured by welding a number of individually formed diaphragms to each other. The comparison between the formed and welded types generally centers on cost and performance. Hydroformed bellows generally have a high tooling cost but, when mass-produced, may have a lower piece price. However, hydroformed bellows have lower performance characteristics due to relatively thick walls and high stiffness. Welded metal bellows are produced with a lower initial tooling cost and maintain higher performance characteristics. The drawback of welded bellows is the reduced metal strength at weld joints, caused by the high temperature of welding.

Electroformed bellows are produced by plating (electroforming) a metal layer onto a model (mandrel), and subsequently removing the mandrel. They can be produced with modest tooling costs and with thin walls (25 micrometres or less), providing such bellows with high sensitivity and precision in many exac
https://en.wikipedia.org/wiki/CCDC176
Basal body-orientation factor 1 (BBOF1) is a protein that in humans is encoded by the gene CCDC176, which is located on the plus strand of chromosome 14 at 14q24.3. CCDC176 is neighbored by ALDH6A1 and ENTPD5 at the same locus. In humans, the mRNA is 3123 base pairs long and has 12 exons; the protein is 529 amino acids long, with a molecular weight of 61987 Da and a predicted isoelectric point of 9.07.

Homology and evolution

CCDC176 has no known paralogs, and orthologs are found in primates, other mammals, birds, reptiles, amphibians, and fish, all the way back to invertebrates, a fungal parasite and a proteobacterium. The domain found to be homologous is DUF4515, a domain of unknown function.

Protein function and characteristics

This basal body protein has been shown in multiciliated cells to align and maintain cilia orientation in response to flow. The protein may also act by mediating a maturation step that stabilizes and aligns cilia orientation. No other genes or proteins have been found that encode basal body orientation factors. A similar set of genes, tubulin tyrosine ligase-like genes 3 and 6, has been found in zebrafish that maintain cilia structure and motility. These genes belong to the TTL (tubulin tyrosine ligase) family.

BBOF1 has two coiled-coil domains, one that is 117 amino acids in length at position 85–201, and a second that is 91 amino acids in length at position 271–361. There is also a region of interest located at position 77–270, named DUF4515, a domain of unknown function belonging to the family pfam14988. There are three predicted protein-protein interactions concerning CCDC176. The most prevalent and most likely interaction is with LIG4, a human gene that encodes the protein DNA ligase IV. Two experiments in a publication of 1030 unique reactions support the LIG4-CCDC176 interaction. The second and third predicted interactions are NRF1 and HYLS1. The predicted secondary structure of BBOF1 in humans is as follows: 87.1% alp
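Sequence-derived properties like the molecular weight and isoelectric point quoted above can be estimated directly from the amino acid sequence. A minimal sketch with Biopython (the sequence shown is a short placeholder, not the real 529-residue BBOF1 sequence):

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence; substitute the actual BBOF1 protein sequence.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
analysis = ProteinAnalysis(seq)
print(f"molecular weight:  {analysis.molecular_weight():.0f} Da")
print(f"isoelectric point: {analysis.isoelectric_point():.2f}")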
https://en.wikipedia.org/wiki/Way%20of%20the%20Tiger
The Way of the Tiger is a series of adventure gamebooks by Mark Smith and Jamie Thomson, originally published by Knight Books (an imprint of Hodder & Stoughton) from 1985. They are set on the fantasy world of Orb. The reader takes the part of a young monk/ninja, named Avenger, initially on a quest to avenge his foster father's murder and recover stolen scrolls. Later books presented other challenges for Avenger to overcome, most notably taking over and ruling a city.

The world of Orb was originally created by Mark Smith for a Dungeons & Dragons game he ran while a pupil at Brighton College in the mid-1970s. Orb was also used as the setting for the 1984 Fighting Fantasy gamebook Talisman of Death, and one of the settings in the 1985 Falcon gamebook Lost in Time, both by Smith and Thomson. Each book has a disclaimer at the front against performing any of the ninja-related feats in the book, as "They could lead to serious injury or death to an untrained user".

The sixth book, Inferno!, ends on a cliffhanger with Avenger trapped in the web of the Black Widow, Orb's darkest blight. As no new books were released, the fate of Avenger and Orb was unknown. Mark Smith has confirmed that the cliffhanger ending was deliberate. In August 2013, the original creators of the series were working with Megara Entertainment to develop re-edited hardcover collector editions of the gamebooks (including a new prequel (Book 0) and sequel (Book 7)), and potentially a role-playing game based on the series. The two new books plus the six re-edited original books were reprinted in paperback format by Megara Entertainment in 2014, and made available as PDFs in 2019.

Books and publication history

The original series comprises six books:
- Avenger! (1985)
- Assassin! (1985)
- Usurper! (1985)
- Overlord! (1986)
- Warbringer! (1986)
- Inferno! (1987)

The sixth book ended on a cliffhanger, which was not resolved until 27 years later. Interviewed in 2012, Mark Smith explained: "Our publishers Hodder
https://en.wikipedia.org/wiki/Thirring%20model
The Thirring model is an exactly solvable quantum field theory which describes the self-interactions of a Dirac field in (1+1) dimensions.

Definition

The Thirring model is given by the Lagrangian density

\mathcal{L} = \bar{\psi}\,(i\gamma^\mu \partial_\mu - m)\,\psi - \frac{g}{2}\left(\bar{\psi}\gamma^\mu\psi\right)\left(\bar{\psi}\gamma_\mu\psi\right)

where \psi is the field, g is the coupling constant, m is the mass, and \gamma^\mu, for \mu = 0, 1, are the two-dimensional gamma matrices.

This is the unique model of (1+1)-dimensional Dirac fermions with a local (self-)interaction. Indeed, since there are only 4 independent fields, because of the Pauli principle, all the quartic, local interactions are equivalent; and all higher-power, local interactions vanish. (Interactions containing derivatives are not considered because they are non-renormalizable.)

The correlation functions of the Thirring model (massive or massless) verify the Osterwalder–Schrader axioms, and hence the theory makes sense as a quantum field theory.

Massless case

The massless Thirring model is exactly solvable in the sense that a formula for the n-point field correlations is known.

Exact solution

After it was introduced by Walter Thirring, many authors tried to solve the massless case, with confusing outcomes. The correct formula for the two- and four-point correlations was finally found by K. Johnson; then C. R. Hagen and B. Klaiber extended the explicit solution to any multipoint correlation function of the fields.

Massive Thirring model, or MTM

The mass spectrum of the model and the scattering matrix were explicitly evaluated by Bethe ansatz. An explicit formula for the correlations is not known. J. I. Cirac, P. Maraner and J. K. Pachos applied the massive Thirring model to the description of optical lattices.

Exact solution

In one space dimension and one time dimension the model can be solved by the Bethe ansatz. This helps one calculate exactly the mass spectrum and scattering matrix. Calculation of the scattering matrix reproduces the results published earlier by Alexander Zamolodchikov. The paper with the exact solution of the massive Thirr
https://en.wikipedia.org/wiki/Zuckerkandl%27s%20tubercle%20%28thyroid%20gland%29
Zuckerkandl's tubercle is a pyramidal extension of the thyroid gland, present at the most posterior side of each lobe. Emil Zuckerkandl described it in 1902 as the processus posterior glandulae thyreoideae. Although the structure is named after Zuckerkandl, it was first described by Otto Madelung in 1867 as the posterior horn of the thyroid. The structure is important in thyroid surgery, as it is closely related to the recurrent laryngeal nerve, the inferior thyroid artery, Berry's ligament and the parathyroid glands. The structure is subject to a significant amount of anatomic variation, and therefore a size classification was proposed by Pelizzo et al.
https://en.wikipedia.org/wiki/Fishing%20industry%20in%20Scotland
The fishing industry in Scotland comprises a significant proportion of the United Kingdom fishing industry. A recent inquiry by the Royal Society of Edinburgh found fishing to be of much greater social, economic and cultural importance to Scotland than it is relative to the rest of the UK. Scotland has just 8.4 per cent of the UK population but lands at its ports over 60 per cent of the total catch in the UK. Many of these are ports in relatively remote communities such as Kinlochbervie and Lerwick, which are scattered along an extensive coastline and which, for centuries, have looked to fishing as the main source of employment. Restrictions imposed under the Common Fisheries Policy (CFP) affect all European fishing fleets, but they have proved particularly severe in recent years for the demersal fish or whitefish sector (boats mainly fishing for cod, haddock and whiting) of the Scottish fishing industry.

Fishing areas

The main fishing areas are the North Sea and the seas west of Scotland.

Historical development

Fish have been recognised as a major food source from the earliest times. Fishing was important to the earliest settlers in Scotland, around 7000 BC. At this stage, fishing was a subsistence activity, undertaken only to feed the fisher and their immediate community. By the medieval period, salmon and herring were important resources and were exported to continental Europe, and the towns of the Hanseatic League in particular. As the industry developed, "fishertouns" and villages sprang up to supply the growing towns and fishing became more specialised. The many religious houses in Scotland acted as a spur to fisheries, granting exclusive fishing rights and demanding part of their tithes in fish. In the early 19th century, the British Government began to subsidise the catches of herring boats larger than 60 tons, plus an additional bounty on all herring sold abroad. This, coupled with the coming of the railways as a means of more rapid transport, gave a
https://en.wikipedia.org/wiki/Superadditivity
In mathematics, a function f is superadditive if

f(x + y) ≥ f(x) + f(y)

for all x and y in the domain of f. Similarly, a sequence a₁, a₂, … is called superadditive if it satisfies the inequality

a_{n+m} ≥ a_n + a_m

for all m and n. The term "superadditive" is also applied to functions from a boolean algebra to the real numbers where f(X ∨ Y) ≥ f(X) + f(Y), such as lower probabilities.

Examples of superadditive functions
- The map f(x) = x² is a superadditive function for nonnegative real numbers, because the square of x + y is always greater than or equal to the square of x plus the square of y, for nonnegative real numbers x and y: f(x + y) = (x + y)² = x² + y² + 2xy ≥ x² + y² = f(x) + f(y).
- The determinant is superadditive for nonnegative Hermitian matrices, that is, if A, B are nonnegative Hermitian then det(A + B) ≥ det(A) + det(B). This follows from the Minkowski determinant theorem, which more generally states that det(·)^(1/n) is superadditive (equivalently, concave) for nonnegative Hermitian matrices of size n: if A, B are nonnegative Hermitian then det(A + B)^(1/n) ≥ det(A)^(1/n) + det(B)^(1/n).
- Horst Alzer proved that Hadamard's gamma function H(x) is superadditive for all real numbers x, y with x, y ≥ 1.5031.
- Mutual information

Properties

If f is a superadditive function whose domain contains 0, then f(0) ≤ 0. To see this, take the inequality at the top: f(x) ≤ f(x + y) − f(y). Hence f(0) ≤ f(0 + 0) − f(0) = 0. The negative of a superadditive function is subadditive.

Fekete's lemma

The major reason for the use of superadditive sequences is the following lemma due to Michael Fekete.

Lemma (Fekete): For every superadditive sequence a₁, a₂, …, the limit of a_n/n is equal to the supremum sup a_n/n. (The limit may be positive infinity, as is the case with the sequence a_n = log n!, for example; see the numerical check below.)

The analogue of Fekete's lemma holds for subadditive functions as well. There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for all m and n. There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).

See also
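A quick numerical illustration of Fekete's lemma (my own sketch, not from the article): a_n = log n! is superadditive because (n+m)! ≥ n!·m!, and a_n/n grows without bound, whereas the superadditive sequence a_n = n − 1/n has a_n/n approaching sup a_n/n = 1:

import math

def ratios(a, N=10_000):
    """Return a_n / n for n = 1..N."""
    return [a(n) / n for n in range(1, N + 1)]

# a_n = log n!  (math.lgamma(n + 1) = log n!): ratio tends to +infinity
log_fact = ratios(lambda n: math.lgamma(n + 1))
print(log_fact[-1])   # ~8.2 at N = 10000 and still climbing

# a_n = n - 1/n: ratio is 1 - 1/n^2, approaching the supremum 1 from below
bounded = ratios(lambda n: n - 1 / n)
print(bounded[-1])    # ~0.99999999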
https://en.wikipedia.org/wiki/Bertrand%E2%80%93Edgeworth%20model
In microeconomics, the Bertrand–Edgeworth model of price-setting oligopoly looks at what happens when there is a homogeneous product (i.e. consumers want to buy from the cheapest seller) and there is a limit to the output of firms which are willing and able to sell at a particular price. This differs from the Bertrand competition model, where it is assumed that firms are willing and able to meet all demand. The limit to output can be considered as a physical capacity constraint which is the same at all prices (as in Edgeworth's work), or one that varies with price under other assumptions.

History

Joseph Louis François Bertrand (1822–1900) developed the model of Bertrand competition in oligopoly. This approach was based on the assumption that there are at least two firms producing a homogeneous product with constant marginal cost (this could be constant at some positive value, or with zero marginal cost as in Cournot). Consumers buy from the cheapest seller. The Bertrand–Nash equilibrium of this model is to have all (or at least two) firms setting the price equal to marginal cost. The argument is simple: if one firm sets a price above marginal cost then another firm can undercut it by a small amount (often called epsilon undercutting, where epsilon represents an arbitrarily small amount), so in equilibrium price is driven down to marginal cost and profits are zero (this is sometimes called the Bertrand paradox; see the sketch after this text). The Bertrand approach assumes that firms are willing and able to supply all demand: there is no limit to the amount that they can produce or sell. Francis Ysidro Edgeworth considered the case where there is a limit to what firms can sell (a capacity constraint): he showed that if there is a fixed limit to what firms can sell, then there may exist no pure-strategy Nash equilibrium (this is sometimes called the Edgeworth paradox). Martin Shubik developed the Bertrand–Edgeworth model to allow for the firm to be willing to supply only up to its profit-maximizing output at the price which it set (under profit max
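A toy best-response simulation of the epsilon-undercutting argument (my own illustration, not a model from the literature; names and numbers are invented):

def bertrand_undercutting(p1, p2, marginal_cost, epsilon=0.01, max_rounds=10_000):
    """Each firm in turn undercuts its rival by epsilon (never pricing
    below cost) until both prices reach marginal cost."""
    for _ in range(max_rounds):
        p1 = max(marginal_cost, min(p1, p2 - epsilon))  # firm 1 best-responds
        p2 = max(marginal_cost, min(p2, p1 - epsilon))  # firm 2 best-responds
        if p1 == marginal_cost and p2 == marginal_cost:
            break
    return p1, p2

print(bertrand_undercutting(p1=10.0, p2=9.5, marginal_cost=2.0))
# -> (2.0, 2.0): both firms end up pricing at marginal cost, earning zero profit.

With a capacity constraint added (the Bertrand–Edgeworth case), the undercutting logic breaks down, since a low-priced firm cannot serve all demand, which is exactly why a pure-strategy equilibrium may fail to exist.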
https://en.wikipedia.org/wiki/Disk%20pack
Disk packs and disk cartridges were early forms of removable media for computer data storage, introduced in the 1960s.

Disk pack

A disk pack is a layered grouping of hard disk platters (circular, rigid discs coated with a magnetic data storage surface). A disk pack is the core component of a hard disk drive. In modern hard disks, the disk pack is permanently sealed inside the drive. In many early hard disks, the disk pack was a removable unit, and would be supplied with a protective canister featuring a lifting handle.

The protective cover consisted of two parts: a plastic shell, with a handle in the center, that enclosed the top and sides of the disks, and a separate bottom that completed the sealed package. To remove the disk pack, the drive would be taken off line and allowed to spin down. Its access door could then be opened and an empty shell inserted and twisted to unlock the disk pack from the drive and secure it to the shell. The assembly would then be lifted out and the bottom cover attached. A different disk pack could then be inserted by removing the bottom and placing the disk pack with its shell into the drive. Turning the handle would lock the disk pack in place and free the shell for removal.

The first removable disk pack was invented in 1961 by IBM engineer R. E. Pattison as part of the LCF (Low Cost File) project headed by Jack Harker. The 14-inch (356 mm) diameter disks introduced by IBM became a de facto standard, with many vendors producing disk drives using 14-inch disks in disk packs and cartridges into the 1980s. Examples of disk drives that employed removable disk packs include the IBM 1311, IBM 2311, and the Digital RP04.

Disk cartridge

An early disk cartridge was a single hard disk platter encased in a protective plastic shell. When the removable cartridge was inserted into the cartridge drive peripheral device, the read/write heads of the drive could access the magnetic data storage surface of the platter through holes in the sh
https://en.wikipedia.org/wiki/Phase-field%20model
A phase-field model is a mathematical model for solving interfacial problems. It has mainly been applied to solidification dynamics, but it has also been applied to other situations such as viscous fingering, fracture mechanics, hydrogen embrittlement, and vesicle dynamics.

The method substitutes boundary conditions at the interface by a partial differential equation for the evolution of an auxiliary field (the phase field) that takes the role of an order parameter. This phase field takes two distinct values (for instance +1 and −1) in each of the phases, with a smooth change between both values in the zone around the interface, which is then diffuse with a finite width. A discrete location of the interface may be defined as the collection of all points where the phase field takes a certain value (e.g., 0).

A phase-field model is usually constructed in such a way that in the limit of an infinitesimal interface width (the so-called sharp interface limit) the correct interfacial dynamics are recovered. This approach permits solving the problem by integrating a set of partial differential equations for the whole system, thus avoiding the explicit treatment of the boundary conditions at the interface. Phase-field models were first introduced by Fix and Langer, and have experienced growing interest in solidification and other areas.

Equations of the phase-field model

Phase-field models are usually constructed in order to reproduce a given interfacial dynamics. For instance, in solidification problems the front dynamics is given by a diffusion equation for either concentration or temperature in the bulk and some boundary conditions at the interface (a local equilibrium condition and a conservation law), which constitutes the sharp interface model. A number of formulations of the phase-field model are based on a free energy function depending on an order parameter (the phase field) and a diffusive field (variational formulations). Equations of the model are then o
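As a minimal numerical illustration of the idea (my own sketch, using the simplest non-conserved dynamics, a 1-D Allen–Cahn equation with a double-well potential; real solidification models additionally couple the phase field to a diffusive temperature or concentration field):

import numpy as np

# 1-D Allen-Cahn equation: d(phi)/dt = eps^2 * d2(phi)/dx2 + phi - phi^3
# phi = +1 and phi = -1 mark the two phases; the interface is where phi ~ 0.
eps, dt = 0.1, 1e-3
x = np.linspace(-2.0, 2.0, 201)
dx = x[1] - x[0]
phi = np.tanh(x / 0.5)                 # start from an overly wide interface

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
    lap[0] = lap[-1] = 0.0             # crude no-flux treatment of the ends
    phi += dt * (eps**2 * lap + phi - phi**3)

# phi has relaxed toward the equilibrium profile tanh(x / (sqrt(2) * eps));
# the diffuse interface of width ~eps is the zero crossing near x = 0.

Note how no boundary condition is imposed at the interface itself; the partial differential equation for phi is integrated over the whole domain, which is exactly the advantage described above.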
https://en.wikipedia.org/wiki/OCCAID
The Open Contributors Corporation for Advanced Internet Development (OCCAID) was a non-profit consortium that operated one of the largest IPv6 research networks in the world. It maintained both resale and facilities-based networks spanning 15,000 miles, with a presence in over 52 cities across 6 countries. The organisation no longer operates; what happened to it is unclear, as very little information about it is available apart from its official website.

OCCAID facilitated collaboration between research communities and the carrier industry, serving as a testbed and proving ground for advanced Internet protocols. Most of its participants connected to the network using Ethernet connections in areas where OCCAID had last-mile network connections. OCCAID's primary collaboration activities involved IPv6 and multicast protocols.

External links

Official site
https://en.wikipedia.org/wiki/Back%20end%20of%20line
The back end of line (BEOL) is the second portion of IC fabrication where the individual devices (transistors, capacitors, resistors, etc.) get interconnected with wiring on the wafer, the metallization layer. Common metals are copper and aluminium. BEOL generally begins when the first layer of metal is deposited on the wafer. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections.

After the last FEOL step, there is a wafer with isolated transistors (without any wires). In the BEOL stage of fabrication, the contacts (pads), interconnect wires, vias and dielectric structures are formed. For a modern IC process, more than 10 metal layers can be added in the BEOL.

Steps of the BEOL:
1. Silicidation of the source and drain regions and the polysilicon region.
2. Adding a dielectric (the first, lower layer is the pre-metal dielectric, PMD – to isolate the metal from silicon and polysilicon) and CMP processing it.
3. Making holes in the PMD and forming contacts in them.
4. Adding metal layer 1.
5. Adding a second dielectric, called the inter-metal dielectric (IMD).
6. Making vias through the dielectric to connect the lower metal with the higher metal. Vias are filled by a metal CVD process.
7. Repeating steps 4–6 to get all the metal layers.
8. Adding the final passivation layer to protect the microchip.

Before 1998, practically all chips used aluminium for the metal interconnection layers. The four metals with the highest electrical conductivity are silver with the highest conductivity, then copper, then gold, then aluminium.

After BEOL there is a "back-end process" (also called post-fab), which is done not in the cleanroom, often by a different company. It includes wafer test, wafer backgrinding, die separation, die tests, IC packaging and final test.

See also
- Front end of line
- Integrated circuit
- Phosphosilicate glass
https://en.wikipedia.org/wiki/Tautonym
A tautonym is a scientific name of a species in which both parts of the name have the same spelling, such as Rattus rattus. The first part of the name is the name of the genus and the second part is referred to as the specific epithet in the International Code of Nomenclature for algae, fungi, and plants and the specific name in the International Code of Zoological Nomenclature.

Tautonymy (i.e., the usage of tautonymous names) is permissible in zoological nomenclature (see List of tautonyms for examples). In past editions of the zoological Code, the term tautonym was used, but it has now been replaced by the more inclusive "tautonymous names"; these include trinomial names such as Gorilla gorilla gorilla and Bison bison bison. For animals, a tautonym implicitly (though not always) indicates that the species is the type species of its genus. This can also be indicated by a species name with the specific epithet typus or typicus, although more commonly the type species is designated another way.

Botanical nomenclature

In the current rules for botanical nomenclature (which apply retroactively), tautonyms are explicitly prohibited. One example of a botanical tautonym is 'Larix larix'. The earliest name for the European larch is Pinus larix L. (1753), but Gustav Karl Wilhelm Hermann Karsten did not agree with the placement of the species in Pinus and decided to move it to Larix in 1880. His proposed name created a tautonym. Under rules first established in 1906, which are applied retroactively, Larix larix cannot exist as a formal name. In such a case either the next earliest validly published name must be found, in this case Larix decidua Mill. (1768), or (in its absence) a new epithet must be published. However, it is allowed for both parts of the name of a species to mean the same thing (pleonasm), without being identical in spelling. For instance, Arctostaphylos uva-ursi means bearberry twice, in Greek and Latin respectively; Picea omorika uses the Latin and Serbian t
https://en.wikipedia.org/wiki/Adduct
In chemistry, an adduct (a contraction of "addition product") is a product of a direct addition of two or more distinct molecules, resulting in a single reaction product containing all atoms of all components. The resultant is considered a distinct molecular species. Examples include the addition of sodium bisulfite to an aldehyde to give a sulfonate. It can be considered as a single product resulting from the direct combination of different molecules which comprises all atoms of the reactant molecules.

Adducts often form between Lewis acids and Lewis bases. A good example is the formation of adducts between the Lewis acid borane and the oxygen atom in the Lewis bases tetrahydrofuran (THF), giving BH3·O(CH2)4, or diethyl ether, giving BH3·O(C2H5)2. Many Lewis acids and Lewis bases reacting in the gas phase or in non-aqueous solvents to form adducts have been examined in the ECW model. Trimethylboron, trimethyltin chloride and bis(hexafluoroacetylacetonato)copper(II) are examples of Lewis acids that form adducts which exhibit steric effects. For example, trimethyltin chloride, when reacting with diethyl ether, exhibits steric repulsion between the methyl groups on the tin and the ethyl groups on oxygen; but when the Lewis base is tetrahydrofuran, steric repulsion is reduced. The ECW model can provide a measure of these steric effects. Compounds or mixtures that cannot form an adduct because of steric hindrance are called frustrated Lewis pairs.

Adducts are not necessarily molecular in nature. A good example from solid-state chemistry is the adducts of ethylene or carbon monoxide of CuAlCl4. The latter is a solid with an extended lattice structure. Upon formation of the adduct, a new extended phase is formed in which the gas molecules are incorporated (inserted) as ligands of the copper atoms within the structure. This reaction can also be considered a reaction between a base and a Lewis acid with the copper atom in the electron-receiving role and the pi electrons of the gas molecule in th
https://en.wikipedia.org/wiki/CoRR%20hypothesis
The CoRR hypothesis states that the location of genetic information in cytoplasmic organelles permits regulation of its expression by the reduction-oxidation ("redox") state of its gene products.

CoRR is short for "co-location for redox regulation", itself a shortened form of "co-location (of gene and gene product) for (evolutionary) continuity of redox regulation of gene expression". CoRR was put forward explicitly in 1993 in a paper in the Journal of Theoretical Biology with the title "Control of gene expression by redox potential and the requirement for chloroplast and mitochondrial genomes". The central concept had been outlined in a review of 1992. The term CoRR was introduced in 2003 in a paper in Philosophical Transactions of the Royal Society entitled "The function of genomes in bioenergetic organelles".

The problem

Chloroplasts and mitochondria

Chloroplasts and mitochondria are energy-converting organelles in the cytoplasm of eukaryotic cells. Chloroplasts in plant cells perform photosynthesis: the capture and conversion of the energy of sunlight. Mitochondria in both plant and animal cells perform respiration: the release of this stored energy when work is done. In addition to these key reactions of bioenergetics, chloroplasts and mitochondria each contain specialized and discrete genetic systems. These genetic systems enable chloroplasts and mitochondria to make some of their own proteins.

Both the genetic and energy-converting systems of chloroplasts and mitochondria are descended, with little modification, from those of the free-living bacteria that these organelles once were. The existence of these cytoplasmic genomes is consistent with, and counts as evidence for, the endosymbiont hypothesis. Most genes for proteins of chloroplasts and mitochondria are, however, now located on chromosomes in the nuclei of eukaryotic cells. There they code for protein precursors that are made in the cytosol for subsequent import into the organelles. Why
https://en.wikipedia.org/wiki/Deceleration%20parameter
The deceleration parameter q in cosmology is a dimensionless measure of the cosmic acceleration of the expansion of space in a Friedmann–Lemaître–Robertson–Walker universe. It is defined by:

q \equiv -\frac{\ddot{a}\,a}{\dot{a}^2}

where a is the scale factor of the universe and the dots indicate derivatives by proper time. The expansion of the universe is said to be "accelerating" if \ddot{a} > 0 (recent measurements suggest it is), and in this case the deceleration parameter will be negative. The minus sign and name "deceleration parameter" are historical; at the time of definition \ddot{a} was expected to be negative, so a minus sign was inserted in the definition to make q positive in that case. Since the evidence for the accelerating universe in the 1998–2003 era, it is now believed that \ddot{a} is positive, therefore the present-day value of q is negative (though q was positive in the past, before dark energy became dominant). In general q varies with cosmic time, except in a few special cosmological models; the present-day value is denoted q_0.

The Friedmann acceleration equation can be written as

\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\sum_i \left(\rho_i + \frac{3p_i}{c^2}\right)

where the sum extends over the different components (matter, radiation and dark energy), \rho_i is the equivalent mass density of each component, p_i is its pressure, and w_i \equiv p_i/(\rho_i c^2) is the equation of state for each component. The value of w_i is 0 for non-relativistic matter (baryons and dark matter), 1/3 for radiation, and −1 for a cosmological constant; for more general dark energy it may differ from −1, in which case it is denoted w_{DE} or simply w.

Defining the critical density as

\rho_c = \frac{3H^2}{8\pi G}

and the density parameters \Omega_i \equiv \rho_i/\rho_c, substituting in the acceleration equation gives

q = \frac{1}{2}\sum_i \Omega_i (1 + 3w_i)

where the density parameters are at the relevant cosmic epoch. At the present day the radiation contribution is negligible, and if w = −1 (cosmological constant) this simplifies to

q_0 = \frac{1}{2}\Omega_m - \Omega_\Lambda

where the density parameters are present-day values; with ΩΛ + Ωm ≈ 1, and ΩΛ = 0.7 and then Ωm = 0.3, this evaluates to q_0 ≈ −0.55 for the parameters estimated from the Planck spacecraft data. (Note that the CMB, as a high-redshift measurement, does not directly
https://en.wikipedia.org/wiki/Raphide
Raphides (singular raphide or raphis) are needle-shaped crystals of calcium oxalate monohydrate (prismatic monoclinic crystals) or calcium carbonate as aragonite (dipyramidal orthorhombic crystals), found in more than 200 families of plants. Both ends are needle-like, but raphides tend to be blunt at one end and sharp at the other.

Calcium oxalate in plants

Many plants accumulate calcium oxalate crystals in response to surplus calcium, which is found throughout the natural environment. The crystals are produced in a variety of shapes. The crystal morphology depends on the taxonomic group of the plant. In one study of over 100 species, it was found that calcium oxalate accounted for 6.3% of plant dry weight. Crystal morphology and the distribution of raphides (in roots, leaves, tubers, etc.) are similar in some taxa but different in others, offering possible key characters for plant systematic identification; mucilage in raphide-containing cells makes light microscopy difficult, though. Little is known about the mechanisms of sequestration, or indeed the reason for the accumulation of raphides, but it is most likely a defense mechanism against herbivory. It has also been suggested that in some cases raphides may help form plant skeletal structure. Raphides typically occur in parenchyma cells in aerial organs, especially the leaves, and are generally confined to the mesophyll. As the leaf area increases, so does the number of raphides, the process starting even in young leaves. The first indication that a cell will contain crystals appears when the cell enlarges with a larger nucleus. Raphides are found in specialized plant cells or crystal chambers called idioblasts. Electron micrographs have shown that raphide needle crystals are normally four-sided or H-shaped (with a groove down both sides) or have a hexagonal cross section, and some are barbed. Wattendorf (1976) suggested that all circular-sectioned raphides, as visible in a light m
https://en.wikipedia.org/wiki/Holt%E2%80%93Oram%20syndrome
Holt–Oram syndrome (also called atrio-digital syndrome, atriodigital dysplasia, cardiac-limb syndrome, heart-hand syndrome type 1, HOS, ventriculo-radial syndrome) is an autosomal dominant disorder that affects bones in the arms and hands (the upper limbs) and often causes heart problems. The syndrome may include an absent radial bone in the forearm, an atrial septal defect in the heart, or heart block. It affects approximately 1 in 100,000 people. Presentation All people with Holt–Oram syndrome have at least one abnormal wrist bone, which can often only be detected by X-ray. Other bone abnormalities are associated with the syndrome. These vary widely in severity, and include a missing thumb, a thumb that looks like a finger, upper arm bones that are underdeveloped or of unequal length, partial or complete absence of bones in the forearm, and abnormalities in the collar bone or shoulder blade. Bone abnormalities may affect only one side of the body or both sides; if both sides are affected differently, the left side is usually affected more severely. About 75 percent of individuals with Holt–Oram syndrome also have congenital heart problems, the most common being defects in the tissue wall between the upper chambers of the heart (atrial septal defect) or the lower chambers of the heart (ventricular septal defect). People with Holt–Oram syndrome may also have abnormalities in the electrical system that coordinates contractions of the heart chambers. Cardiac conduction disease can lead to a slow heart rate (bradycardia); rapid, ineffective contraction of the heart muscles (fibrillation); and heart block. People with Holt–Oram syndrome may have only congenital heart defects, only cardiac conduction disease, both, or neither. Genetics Mutations in the TBX5 gene cause Holt–Oram syndrome. The TBX5 gene produces a protein that is critical for the proper development of the heart and upper limbs before birth. Holt–Oram syndrome has an autosomal dominant pattern of inherita
https://en.wikipedia.org/wiki/Kernel%20function%20for%20solving%20integral%20equation%20of%20surface%20radiation%20exchanges
In physics and engineering, the radiative heat transfer from one surface to another is equal to the difference between the incoming and outgoing radiation at the first surface. In general, the heat transfer between surfaces is governed by temperature, surface emissivity properties and the geometry of the surfaces. The relation for heat transfer can be written as an integral equation with boundary conditions based upon surface conditions. Kernel functions can be useful in approximating and solving this integral equation. Governing equation The radiative heat exchange depends on the local surface temperature of the enclosure and the properties of the surfaces, but does not depend upon the medium, because a non-participating medium neither absorbs, emits, nor scatters radiation. The governing equation of heat transfer between two surfaces $A_i$ and $A_j$ involves $\lambda$, the wavelength of the radiation rays; $I$, the radiation intensity; $\epsilon$, the emissivity; $\rho$, the reflectivity; $\theta$, the angle between the normal of the surface and the direction of radiation exchange; and $\phi$, the azimuthal angle. If the surface of the enclosure is approximated as a gray and diffuse surface, the above equation can be reduced, after an analytical procedure, to one involving the black-body emissive power $e_b = \sigma T^4$, which is given as a function of the temperature $T$ of the black body, where $\sigma$ is the Stefan–Boltzmann constant. Kernel function Kernel functions provide a way to manipulate data as though it were projected into a higher-dimensional space, by operating on it in its original space, so that data in the higher-dimensional space become more easily separable. A kernel function is also used in the integral equation for surface radiation exchanges, where it relates to both the geometry of the enclosure and its surface properties. In the above equation, $K(\mathbf{r},\mathbf{r}')$ is the kernel function of the integral, which for 3-D problems takes the form $K(\mathbf{r},\mathbf{r}') = \frac{\cos\theta \, \cos\theta'}{\pi\,|\mathbf{r}-\mathbf{r}'|^2}\,F$ where F assumes a value of one when the surface element I se
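To illustrate the geometric content of this kernel, here is a small Python sketch (function names are illustrative, and an unobstructed view, F = 1, is assumed) that evaluates $K(\mathbf{r},\mathbf{r}')$ for two differential surface elements:

```python
import numpy as np

# View-factor kernel between two differential surface elements,
# K(r, r') = cos(theta) * cos(theta') / (pi * |r - r'|^2),
# assuming an unobstructed view (visibility factor F = 1).
def radiation_kernel(r, n, r_prime, n_prime):
    """r, r_prime: element positions; n, n_prime: unit surface normals (numpy arrays)."""
    d = np.asarray(r_prime, float) - np.asarray(r, float)
    dist2 = d @ d
    cos_t = (n @ d) / np.sqrt(dist2)           # angle at the first element
    cos_tp = (n_prime @ -d) / np.sqrt(dist2)   # angle at the second element
    if cos_t <= 0.0 or cos_tp <= 0.0:
        return 0.0  # elements face away from each other
    return cos_t * cos_tp / (np.pi * dist2)

# Two parallel, directly facing elements one metre apart -> K = 1/pi:
print(radiation_kernel([0, 0, 0], np.array([0, 0, 1.0]),
                       [0, 0, 1.0], np.array([0, 0, -1.0])))
```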
https://en.wikipedia.org/wiki/Pathogen%20avoidance
Pathogen avoidance, also referred to as parasite avoidance or pathogen disgust, refers to the theory that the disgust response, in humans, is an adaptive system that guides behavior to avoid infection caused by parasites such as viruses, bacteria, fungi, protozoa, helminth worms, arthropods and social parasites. Pathogen avoidance is a psychological mechanism associated with the behavioral immune system. Pathogen avoidance has been discussed as one of the three domains of disgust, which also include sexual and moral disgust. Evolutionary significance In nature, controlling or avoiding pathogens is an essential fitness strategy because disease-causing agents are ever-present. Pathogens reproduce rapidly at the expense of their hosts' fitness, and this creates a coevolutionary arms race between pathogen transmission and host avoidance. For a pathogen to move to a new host, it must exploit regions of the body that serve as points of contact between current and future hosts, such as the mouth, the skin, the anus and the genitals. To avoid the cost of infection, organisms require counteradaptations to prevent pathogen transmission, defending their own entry points such as the mouth and skin and avoiding other individuals' exit points and the substances exiting these points, such as feces and sneeze droplets. Pathogen avoidance provides the first line of defense by physically avoiding conspecifics, other species, objects or locations that could increase vulnerability to pathogens. The pathogen avoidance theory of disgust predicts that behavior that reduces contact with pathogens will have been under strong selection throughout the evolution of free-living organisms and should be prevalent throughout the animal kingdom. Compared to the alternative of facing the infectious threat, avoidance likely provides a reduction in exposure to pathogens and in the energetic costs associated with activation of the physiological immune response. These behaviors are found throughout the anima
https://en.wikipedia.org/wiki/Henri%20Devaux
Henri Edgard Devaux (6 July 1862 – 14 March 1956) was a French botanist, biophysicist, and plant physiologist who worked on gas exchange and membranes. In his studies on thin films, he was one of the pioneers of surface chemistry and molecular biophysics. Devaux was born in Etaules and went to study pharmacy at the University of Bordeaux with a scholarship. He then went to the Sorbonne and received a doctorate in 1889 for work under Gaston Bonnier on gas exchange in plant tissues. In 1896 he noted the absorption of metal ions in the cell membranes of aquatic plants and examined ion exchange through the effects of sodium and potassium ion concentrations in the surroundings. He became the first chair of plant physiology at the University of Bordeaux in 1906, after spending time at the University of Dijon. From 1903 he began to take an interest in the physics of surfaces. He was a popular demonstrator and experimenter, and his demonstrations of oil monolayer films on water with talc, camphor and toy boats were popular. He also made use of mercury surfaces; with this approach, using a known amount of liquid, he was able to compute the surface area of circular monolayers of various fats, oils and proteins and estimate molecular weights. Devaux had a religious Protestant upbringing in a family that included many generations who worked as sailors and farmers. He was disturbed by the death of his father in 1886 and began to examine the relationship between religion and his own science. From 1890 he became more religious and often wrote on his spiritual views in his laboratory notes and even in some of his scientific works. He rejected Claude Bernard's idea of the separation of religion and science. He also rejected ideas of evolution and believed in Biblical Creation. In 1931, Devaux shared the Laura Leonard Prize with Agnes Pockels for their respective investigations of the properties of surface layers and surface films.
https://en.wikipedia.org/wiki/Statistical%20relational%20learning
Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use (a subset of) first-order logic to describe relational properties of a domain in a general manner (universal quantification) and draw upon probabilistic graphical models (such as Bayesian networks or Markov networks) to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s. As is evident from the characterization above, the field is not strictly limited to learning aspects; it is equally concerned with reasoning (specifically probabilistic inference) and knowledge representation. Therefore, alternative terms that reflect the main foci of the field include statistical relational learning and reasoning (emphasizing the importance of reasoning) and first-order probabilistic languages (emphasizing the key properties of the languages with which models are represented). Canonical tasks A number of canonical tasks are associated with statistical relational learning, the most common ones being: collective classification, i.e. the (simultaneous) prediction of the class of several objects given objects' attributes and their relations link prediction, i.e. predicting whether or not two or more objects are related link-based clustering, i.e. the grouping of similar objects, where similarity is determined according to the links of an object, and the related task of collaborative filtering, i.e. the filtering for information that is relevant to an entity (where a piece of information is considered relevant to an entity if it is known to be relevant to a si
https://en.wikipedia.org/wiki/Epimorphosis
Epimorphosis is defined as the regeneration of a specific part of an organism that involves extensive cell proliferation of somatic stem cells, dedifferentiation, and reformation, as well as blastema formation. Epimorphosis can be considered a simple model for development, though it only occurs in tissues surrounding the site of injury rather than occurring system-wide. Epimorphosis restores the anatomy of the organism and the original polarity that existed before the destruction of the tissue and/or a structure of the organism. Epimorphic regeneration can be observed in both vertebrates and invertebrates, common examples being salamanders, annelids, and planarians. History Thomas Hunt Morgan, an evolutionary biologist who also worked in embryology, argued that limb and tissue reformation bore many similarities to embryonic development. Building on the work of German embryologist Wilhelm Roux, who suggested that regeneration comprised two cooperative but distinct pathways rather than one, Morgan named the two parts of the regenerative process epimorphosis and morphallaxis. Specifically, Morgan used epimorphosis to denote the process of entirely new tissues being regrown after an amputation or similar injury, with morphallaxis being coined to describe regeneration that does not use cell proliferation, such as in hydra. The key difference between the two forms of regeneration is that epimorphosis involves cellular proliferation and blastema formation, whereas morphallaxis does not. In vertebrates In vertebrates, epimorphosis relies on blastema formation to proliferate cells into the new tissue. Through studies involving zebrafish fins, the toe tips of mice, and limb regeneration in axolotls, researchers at the Polish Academy of Sciences found evidence for epimorphosis occurring in a variety of vertebrates, including instances of mammalian epimorphosis. Limb regeneration Limb regeneration occurs when a part of an organism is destroyed, and the organism m
https://en.wikipedia.org/wiki/Vectors%20in%20Three-dimensional%20Space
Vectors in Three-dimensional Space (1978) is a book concerned with physical quantities defined in "ordinary" 3-space. It was written by J. S. R. Chisholm, an English mathematical physicist, and published by Cambridge University Press. According to the author, such physical quantities are studied in Newtonian mechanics, fluid mechanics, theories of elasticity and plasticity, non-relativistic quantum mechanics, and many parts of solid state physics. The author further states that "the vector concept developed in two different ways: in a wide variety of physical applications, vector notation and techniques became, by the middle of this century, almost universal; on the other hand, pure mathematicians reduced vector algebra to an axiomatic system, and introduced wide generalisations of the concept of a three-dimensional 'vector space'." Chisholm explains that since these two developments proceeded largely independently, there is a need to show how one can be applied to the other. Summary Vectors in Three-Dimensional Space has six chapters, each divided into five or more subsections. The first, on linear spaces and displacements, includes these sections: Introduction, Scalar multiplication of vectors, Addition and subtraction of vectors, Displacements in Euclidean space, Geometrical applications. The second, on scalar products and components, includes these sections: Scalar products, Linear dependence and dimension, Components of a vector, Geometrical applications, Coordinate systems. The third is on other products of vectors. The last three chapters round out Chisholm's integration of these two largely independent developments.
https://en.wikipedia.org/wiki/Sergio%20Albeverio
Sergio Albeverio (born 17 January 1939) is a Swiss mathematician and mathematical physicist working in numerous fields of mathematics and its applications. In particular he is known for his work in probability theory, analysis (including infinite dimensional, non-standard, and stochastic analysis), mathematical physics, and in the areas of algebra, geometry and number theory, as well as in applications, from the natural to the socio-economic sciences. He initiated (with Raphael Høegh-Krohn) a systematic mathematical theory of Feynman path integrals and of infinite dimensional Dirichlet forms and associated stochastic processes (with applications particularly in quantum mechanics, statistical mechanics and quantum field theory). He also made essential contributions to the development of areas such as p-adic functional and stochastic analysis, as well as to the singular perturbation theory for differential operators. Other important contributions concern constructive quantum field theory and the representation theory of infinite dimensional groups. He also initiated a new approach to the study of galaxy and planet formation inspired by stochastic mechanics. Life and career Albeverio is the son of Olivetta Albeverio Brighenti (1910–1968) and Luigi (Gino) Albeverio (1905–1968). He grew up in Lugano, Switzerland. He has been married to Solvejg Albeverio Manzoni (painter and writer) since 1970. They have a daughter, Mielikki Albeverio (dipl. socialsc.). He studied mathematics and physics at ETH Zürich, with a diploma thesis (1962) under the direction of Markus Fierz and David Ruelle and a PhD thesis (1966) under the direction of Res Jost and Markus Fierz. He was an assistant at ETH Zürich (1962–67) and a visiting lecturer at Imperial College (1967–68, R. F. Streater). An invitation by Irving Segal to join him as a co-worker (MIT, 1968–69) was replaced, for family reasons, by a year's stay as a teacher at the Liceo Cantonale, Lugano. He then held a Research Fellowship at Princeton University (1970–72, A. S. Wightman). Visiting Professorships
https://en.wikipedia.org/wiki/Hanner%27s%20inequalities
In mathematics, Hanner's inequalities are results in the theory of Lp spaces. Their proof was published in 1956 by Olof Hanner. They provide a simpler way of proving the uniform convexity of Lp spaces for p ∈ (1, +∞) than the approach proposed by James A. Clarkson in 1936. Statement of the inequalities Let f, g ∈ Lp(E), where E is any measure space. If p ∈ [1, 2], then $\|f+g\|_p^p + \|f-g\|_p^p \geq \left(\|f\|_p + \|g\|_p\right)^p + \left|\,\|f\|_p - \|g\|_p\,\right|^p.$ The substitutions F = f + g and G = f − g yield the second of Hanner's inequalities: $2^p \left(\|F\|_p^p + \|G\|_p^p\right) \geq \left(\|F+G\|_p + \|F-G\|_p\right)^p + \left|\,\|F+G\|_p - \|F-G\|_p\,\right|^p.$ For p ∈ [2, +∞) the inequalities are reversed (they remain non-strict). Note that for $p = 2$ the inequalities become equalities, and both are the parallelogram rule.
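As a quick numerical sanity check, the following Python sketch verifies the first inequality on random vectors in $\ell^p$ (an $L^p$ space over counting measure); the helper names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def lp_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

# First Hanner inequality for p in [1, 2]:
# ||f+g||_p^p + ||f-g||_p^p >= (||f||_p + ||g||_p)^p + | ||f||_p - ||g||_p |^p
for _ in range(1000):
    p = rng.uniform(1.0, 2.0)
    f, g = rng.normal(size=5), rng.normal(size=5)
    lhs = lp_norm(f + g, p) ** p + lp_norm(f - g, p) ** p
    rhs = (lp_norm(f, p) + lp_norm(g, p)) ** p + abs(lp_norm(f, p) - lp_norm(g, p)) ** p
    assert lhs >= rhs - 1e-9, (p, lhs, rhs)
print("first Hanner inequality held on all random trials")
```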
https://en.wikipedia.org/wiki/Hosted%20service%20provider
A hosted service provider (xSP) is a business that delivers a combination of traditional IT functions such as infrastructure, applications (software as a service), security, monitoring, storage, web development, website hosting and email, over the Internet or other wide area networks (WAN). An xSP combines the abilities of an application service provider (ASP) and an Internet service provider (ISP). This approach enables customers to consolidate and outsource much of their IT needs for a predictable recurring fee. xSPs that integrate web publishing give customers a central repository to rapidly and efficiently distribute information and resources among employees, customers, partners and the general public. Hosted service providers benefit from economies of scale and operate on a one-to-many business model, delivering the same software and services to many customers at once. Customers are charged on a subscription basis. Services offered As defined by the analyst firm Ovum, these include: Repeatable business process-led services shared among several clients Remotely delivered application services using shared resources Infrastructure services (both remotely managed and/or hosted services spanning data centre services, managed servers and databases, performance monitoring, security services, storage services and business continuity) Web hosting – the provision of infrastructure and application services to support the hosting of Web sites. History Hard Corps, Inc., formed in December 1999, claimed the moniker 'xSP' and began using it in commerce prior to others. See also Web servers Managed services
https://en.wikipedia.org/wiki/Texas%20Instruments%20SN76489
The SN76489 Digital Complex Sound Generator (DCSG) is a TTL-compatible programmable sound generator chip from Texas Instruments. Its main application was the generation of music and sound effects in game consoles, arcade games and home computers (such as the TI-99/4A, BBC Micro, ColecoVision, IBM PCjr, Tomy Tutor, and Tandy 1000), competing with the similar General Instrument AY-3-8910. It contains:
3 square wave tone generators, each with:
A wide range of frequencies
16 different volume levels
1 noise generator, with:
2 types (white noise and periodic)
3 different frequencies
16 different volume levels
Overview The SN76489 was originally designed to be used in the TI-99/4 computer, where it was first called the TMS9919 and later SN94624, and had a 500 kHz max clock input rate. Later, when it was sold outside of TI, it was renamed the SN76489, and a divide-by-8 was added to its clock input, increasing the max clock input rate to 4 MHz, to facilitate sharing a crystal for both NTSC colorburst and clocking the sound chip. A version of the chip without the divide-by-8 input was also sold outside of TI as the SN76494, which has a 500 kHz max clock input rate. Tone Generators The frequency of the square waves produced by the tone generators on each channel is derived from two factors: the speed of the external clock, and a 10-bit value provided in a control register for that channel (called N). Each channel's frequency is arrived at by dividing the external clock by 4 (or 32, depending on the chip variant), and then dividing the result by N. Thus the overall divider range is from 4 to 4096 (or 32 to 32768). At the maximum clock input rate, this gives a frequency range of 122 Hz to 125 kHz, or typically 108 Hz to 111.6 kHz with an NTSC colorburst (~3.58 MHz) clock input – a range from roughly A2 (two octaves below middle A) to 5–6 times the generally accepted limit of human audio perception. Noise Generator The pseudorandom noise feedback is generated from an XNOR of bits 12 and 13 for feedback,
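To make the divider arithmetic concrete, here is a small Python sketch of the SN76489-style relation f = clock / (32 · N) with a 10-bit N (helper names are illustrative, and the hardware subtlety that a register value of 0 can behave as 1024 on some variants is ignored):

```python
NTSC_CLOCK = 3_579_545  # Hz, NTSC colorburst crystal commonly shared with the chip

# SN76489 variant: output frequency f = clock / (32 * N), with N a 10-bit value.
def tone_register(freq_hz, clock_hz=NTSC_CLOCK):
    """Return the 10-bit divider N that best approximates freq_hz."""
    n = round(clock_hz / (32 * freq_hz))
    return max(1, min(n, 1023))  # clamp to the valid nonzero 10-bit range

def tone_frequency(n, clock_hz=NTSC_CLOCK):
    return clock_hz / (32 * n)

n = tone_register(440.0)               # concert A
print(n, round(tone_frequency(n), 1))  # -> 254 440.4
```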
https://en.wikipedia.org/wiki/Insular%20dwarfism
Insular dwarfism, a form of phyletic dwarfism, is the process and condition of large animals evolving or having a reduced body size when their population's range is limited to a small environment, primarily islands. This natural process is distinct from the intentional creation of dwarf breeds, called dwarfing. This process has occurred many times throughout evolutionary history, with examples including dinosaurs, like Europasaurus and Magyarosaurus dacus, and modern animals such as elephants and their relatives. This process, and other "island genetics" artifacts, can occur not only on islands, but also in other situations where an ecosystem is isolated from external resources and breeding. This can include caves, desert oases, isolated valleys and isolated mountains ("sky islands"). Insular dwarfism is one aspect of the more general "island effect" or "Foster's rule", which posits that when mainland animals colonize islands, small species tend to evolve larger bodies (island gigantism), and large species tend to evolve smaller bodies. This is itself one aspect of island syndrome, which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts. Possible causes There are several proposed explanations for the mechanism which produces such dwarfism. One is a selective process where only smaller animals trapped on the island survive, as food periodically declines to a borderline level. The smaller animals need fewer resources and smaller territories, and so are more likely to get past the break-point where population decline allows food sources to replenish enough for the survivors to flourish. Smaller size is also advantageous from a reproductive standpoint, as it entails shorter gestation periods and generation times. In the tropics, small size should make thermoregulation easier. Among herbivores, large size confers advantages in coping with both competitors and predators, so a reduc
https://en.wikipedia.org/wiki/Br.%20Alfred%20Shields%20FSC%20Marine%20Biological%20Station
The Br. Alfred Shields FSC Marine Biological Station is the marine laboratory of De La Salle University. It is located in Sitio Matuod, Lian, Batangas, near Talim Bay and Mt. Tikbalang. Most of the university's undergraduate and graduate theses and research on marine science are done in this facility. Dr. Wilfredo Roehl Y. Licuanan is the current director of the station. History The marine station was named after Br. Alfred Shields FSC, the founder of DLSU's Biology Department. The land where the station is now located was donated by the Limjoco family. The marine station is in the Municipality of Lian in Batangas, a province in the southern part of Luzon. The station is near Talim Bay and a hill, Mt. Tikbalang. Facilities The station has basic laboratory and field facilities. These include SCUBA diving gear, tanks, and compressors, as well as snorkels and masks; a small outrigger boat; a dry laboratory; reference collections of common marine organisms; and computers and various communications and video equipment. Basic housing facilities for faculty and students are also available, including two 10-bed dormitory rooms and a small kitchen. Freshwater supply is provided from a deep well, and a generator is available for emergency power. Faculty and students A resident scientist (a faculty member of the Biology Department of the university) may be available to supervise and assist in the day-to-day activities of the station. Common users over the past year are faculty and students of the Departments of Biology and Chemistry, as well as natural science students from De La Salle University-Dasmariñas, University of Santo Tomas, University of the Philippines, and the Ateneo de Manila University. Research and outreach Recent research and outreach activities conducted by the MBS include: a marine resource assessment of Cauayan, Negros Occidental, evaluation of coral reef conservation at Maricaban Strait, and various undergraduate and graduate th
https://en.wikipedia.org/wiki/Standard%20gravity
The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity and denoted by $g_0$ or $g_n$, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by standard as 9.80665 m/s². This value was established by the 3rd General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator. Although the symbol $g$ is sometimes used for standard gravity, $g$ (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol $g$ should not be confused with $G$, the gravitational constant, or g, the symbol for gram. The $g$ is also used as a unit for any form of acceleration, with the value defined as above; see g-force. The value of $g_0$ defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force. History Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Norway
As a member of EFTA, Norway (NO) is not included in the Classification of Territorial Units for Statistics (NUTS), but in a similar classification used for coding statistical regions of countries that are not part of the EU but are candidate countries, potential candidates or EFTA countries. The three levels are:
level 1 (equivalent to NUTS level 1): Norway
level 2 (equivalent to NUTS level 2): 7 regions
level 3 (equivalent to NUTS level 3): 19 counties
The codes are as follows:
NO0 Norway
NO01 Oslo og Akershus
NO011 Oslo
NO012 Akershus
NO02 Hedmark og Oppland
NO021 Hedmark
NO022 Oppland
NO03 Sør-Østlandet
NO031 Østfold
NO032 Buskerud
NO033 Vestfold
NO034 Telemark
NO04 Agder og Rogaland
NO041 Aust-Agder
NO042 Vest-Agder
NO043 Rogaland
NO05 Vestlandet
NO051 Hordaland
NO052 Sogn og Fjordane
NO053 Møre og Romsdal
NO06 Trøndelag
NO061 Sør-Trøndelag
NO062 Nord-Trøndelag
NO07 Nord-Norge
NO071 Nordland
NO072 Troms
NO073 Finnmark
Below these levels, there are two LAU levels (LAU-1: economic regions; LAU-2: municipalities). See also Subdivisions of Norway ISO 3166-2 codes of Norway FIPS region codes of Norway
https://en.wikipedia.org/wiki/Society%20for%20Behavioral%20Neuroendocrinology
The Society for Behavioral Neuroendocrinology is an interdisciplinary scientific organization dedicated to the study of hormonal processes and neuroendocrine systems that regulate behavior. Publications SBN publishes the scientific journal Hormones and Behavior. External links Neuroscience organizations
https://en.wikipedia.org/wiki/Distributed%20Processing%20Technology
Distributed Processing Technology (DPT) was founded in 1977, in Maitland, Florida. DPT was an early pioneer in computer storage technology, popularizing the use of disk caching in the 1980s and 1990s. DPT was the first company to design, manufacture and sell microprocessor-based intelligent caching disk controllers to the OEM computer market. Prior to DPT, disk caching technology had been implemented in proprietary hardware in mainframe computing to improve the speed of disk access. DPT's products popularized the use of disk caching in the 1980s. According to Bill Brothers, Unix product manager at the Santa Cruz Operation (SCO), a computer operating system vendor, "The kind of performance those guys (DPT) produce is phenomenal. It's unlike any other product on the market." DPT was founded by Steve Goldman, who served as the President and Chief Executive Officer until DPT was acquired by Adaptec in November 1999. External links Floppy controller speeds access with cache Caching Disk Controller Relieves System Bottlenecks Disk Controller Unburdens Real Time Applications
https://en.wikipedia.org/wiki/TN3270%20Plus
TN3270 Plus is a terminal emulator for Microsoft Windows. It is used for connecting Windows PC users to IBM mainframe, IBM i and UNIX systems via TCP/IP. TN3270 Plus includes terminal emulation for 3270, 5250, VT100, VT220 and ANSI terminals. TN3270 Plus supports Windows 8, Server 2012, 7, Vista, Server 2008 and XP. Users can configure the desktop interface to the required specifications with keyboard mapping, color definition and customizable ASCII to EBCDIC translation tables.
https://en.wikipedia.org/wiki/Mac%20OS%20Gujarati
Mac OS Gujarati is a character set developed by Apple Inc. based on IS 13194:1991 (ISCII-91). Code page layout The following table shows the Mac OS Gujarati encoding. Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as Mac OS Roman. Byte pairs and ISCII-related features are described in the mapping file.
https://en.wikipedia.org/wiki/Professor%20of%20Physiology%20%28Cambridge%29
The Professorship of Physiology, also known as the Chair of Physiology, is a chair at the University of Cambridge, established in 1883. In 2006, the Department of Physiology was merged with the Department of Anatomy to form the Department of Physiology, Development and Neuroscience, where the chair is now based. List of Professors of Physiology
Michael Foster (1883–1903)
John Newport Langley (1903–1925)
Joseph Barcroft (1926–1937)
Edgar Adrian (1937–1951)
Bryan Harold Cabot Matthews (1952–1973)
Richard Darwin Keynes (1973–1987)
Ian Michael Glynn (1986–1995)
Roger Christopher Thomas (1996–2006)
Ole Paulsen (2010–present)
https://en.wikipedia.org/wiki/Challenge%E2%80%93response%20authentication
In computer security, challenge–response authentication is a family of protocols in which one party presents a question ("challenge") and another party must provide a valid answer ("response") to be authenticated. The simplest example of a challenge–response protocol is password authentication, where the challenge is asking for the password and the valid response is the correct password. An adversary who can eavesdrop on a password authentication can then authenticate itself by reusing the intercepted password. One solution is to issue multiple passwords, each of them marked with an identifier. The verifier can then present an identifier, and the prover must respond with the correct password for that identifier. Assuming that the passwords are chosen independently, an adversary who intercepts one challenge–response message pair has no clues to help with a different challenge at a different time. For example, when other communications security methods are unavailable, the U.S. military uses the AKAC-1553 TRIAD numeral cipher to authenticate and encrypt some communications. TRIAD includes a list of three-letter challenge codes, which the verifier is supposed to choose randomly from, and random three-letter responses to them. For added security, each set of codes is only valid for a particular time period which is ordinarily 24 hours. A more interesting challenge–response technique works as follows. Say Bob is controlling access to some resource. Alice comes along seeking entry. Bob issues a challenge, perhaps "52w72y". Alice must respond with the one string of characters which "fits" the challenge Bob issued. The "fit" is determined by an algorithm agreed upon by Bob and Alice. (The correct response might be as simple as "63x83z", with the algorithm changing each character of the challenge using a Caesar cipher. In the real world, the algorithm would be much more complex.) Bob issues a different challenge each time, and thus knowing a previous correct response (
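As a toy illustration of the Bob-and-Alice exchange above, here is a minimal Python sketch using the Caesar-style per-character shift mentioned in the text; the alphabet and the shared shift are assumptions made for the example, not part of any real protocol:

```python
import secrets
import string

ALPHABET = string.ascii_lowercase + string.digits
SHARED_SECRET_SHIFT = 1  # illustrative: both parties agreed on this in advance

def respond(challenge, shift=SHARED_SECRET_SHIFT):
    """Shift every character of the challenge by the agreed amount (Caesar-style)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % len(ALPHABET)]
                   for c in challenge)

# Bob issues a fresh random challenge each time, so replaying an old response fails.
challenge = "".join(secrets.choice(ALPHABET) for _ in range(6))
print(challenge, "->", respond(challenge))
print(respond("52w72y"))  # -> 63x83z, matching the example in the text
```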
https://en.wikipedia.org/wiki/Kugel%E2%80%93Khomskii%20coupling
Kugel–Khomskii coupling describes a coupling between the spin and orbital degrees of freedom in a solid; it is named after the Russian physicists Kliment I. Kugel (Климент Ильич Кугель) and Daniel I. Khomskii (Daniil I. Khomskii, Даниил Ильич Хомский). The Hamiltonian used is:
https://en.wikipedia.org/wiki/Klein%E2%80%93Gordon%20equation
The Klein–Gordon equation (Klein–Fock–Gordon equation or sometimes Klein–Gordon–Fock equation) is a relativistic wave equation, related to the Schrödinger equation. It is second-order in space and time and manifestly Lorentz-covariant. It is a quantized version of the relativistic energy–momentum relation $E^2 = (pc)^2 + (mc^2)^2$. Its solutions include a quantum scalar or pseudoscalar field, a field whose quanta are spinless particles. Its theoretical relevance is similar to that of the Dirac equation. Electromagnetic interactions can be incorporated, forming the topic of scalar electrodynamics, but because common spinless particles like the pions are unstable and also experience the strong interaction (with unknown interaction term in the Hamiltonian) the practical utility is limited. The equation can be put into the form of a Schrödinger equation. In this form it is expressed as two coupled differential equations, each of first order in time. The solutions have two components, reflecting the charge degree of freedom in relativity. It admits a conserved quantity, but this is not positive definite. The wave function cannot therefore be interpreted as a probability amplitude. The conserved quantity is instead interpreted as electric charge, and the norm squared of the wave function is interpreted as a charge density. The equation describes all spinless particles with positive, negative, and zero charge. Any solution of the free Dirac equation is, for each of its four components, a solution of the free Klein–Gordon equation. The Klein–Gordon equation does not form the basis of a consistent quantum relativistic one-particle theory. There is no known such theory for particles of any spin. For full reconciliation of quantum mechanics with special relativity, quantum field theory is needed, in which the Klein–Gordon equation reemerges as the equation obeyed by the components of all free quantum fields. In quantum field theory, the solutions of the free (noninteracting) versions of the original
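A sketch of the usual route from the energy–momentum relation to the equation, via the canonical substitutions $E \to i\hbar\,\partial_t$ and $\mathbf{p} \to -i\hbar\nabla$ acting on a field $\psi(t,\mathbf{x})$, reads:

```latex
% Start from the relativistic energy-momentum relation E^2 = (pc)^2 + (mc^2)^2
% and substitute the operators E -> i*hbar*d/dt, p -> -i*hbar*grad:
\[
  -\hbar^2 \frac{\partial^2 \psi}{\partial t^2}
  = -\hbar^2 c^2 \nabla^2 \psi + m^2 c^4 \psi
\]
% Dividing by -hbar^2 c^2 gives the Klein-Gordon equation:
\[
  \frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2} - \nabla^2 \psi
  + \frac{m^2 c^2}{\hbar^2}\,\psi = 0
\]
```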
https://en.wikipedia.org/wiki/Content%20Addressable%20File%20Store
The Content Addressable File Store (CAFS) was a hardware device developed by International Computers Limited (ICL) that provided disk storage with built-in search capability. The motivation for the device was the discrepancy between the high speed at which a disk could deliver data and the much lower speed at which a general-purpose processor could filter the data looking for records that matched a search condition. Development of CAFS started in ICL's Research and Advanced Development Centre under Gordon Scarrott in the late 1960s, following research by George Coulouris and John Evans, who had completed a field study at Imperial College and Queen Mary College on database systems and applications (Scarrott, 1995). Their study had revealed the potential for substantial performance improvements in large-scale database applications through the inclusion of search logic in the disk controller. In its initial form, the search logic was built into the disk head. A standalone CAFS device was installed with a few customers, including BT Directory Enquiries, during the 1970s. The device was subsequently productised and in 1982 was incorporated as a standard feature within ICL's 2900 series and Series 39 mainframes. By this stage, to reduce costs and to take advantage of increased hardware speeds, the search logic was incorporated into the disk controller. A query expressed in a high-level query language could be compiled into a search specification that was then sent to the disk controller for execution. Initially this capability was integrated into ICL's own Querymaster query language, which worked in conjunction with the IDMS database; subsequently it was integrated into the ICL VME port of the Ingres relational database. ICL received the Queen's Award for Technological Achievement for CAFS in 1985. One factor which limited the adoption of CAFS was that the device needed to know the layout of data on disk, and placed constraints on this layout. Integrating database product
https://en.wikipedia.org/wiki/Acoustic%20wayfinding
Acoustic wayfinding is the practice of using the auditory system to orient oneself and navigate physical space. It is commonly used by the visually impaired, allowing them to retain their mobility without relying on visual cues from their environment. Method Acoustic wayfinding involves using a variety of auditory cues to create a mental map of the surrounding environment. This can include a number of techniques: navigating by sounds in the surrounding environment, such as pedestrian crossing signals; echolocation, or creating sound waves (by tapping a cane or making clicking noises) to determine the location and size of surrounding objects; and memorizing the unique sounds in a given space to recognize it again later. For the visually impaired, these auditory cues become the primary substitute for visual information about the direction and distance of people and objects in their environment. However, there are a number of common obstacles to acoustic wayfinding techniques: noisy outdoor environments can challenge an individual's ability to identify useful sounds, while indoors, the architecture may not provide an acoustic response that is useful for orientation or for locating a destination. Among the most difficult environments to navigate for individuals who rely on acoustic wayfinding are crowded places like department stores, transit stations, and hotel lobbies, or open spaces like parking lots and parks, where distinct sound cues are lacking. This means that, in practice, individuals who navigate primarily by acoustic wayfinding must also rely on a number of other senses – including touch, smell, and residual sight – to supplement auditory cues. These different methods can be used in tandem. For example, visually impaired individuals often use a white cane, not only to physically locate obstacles in front of them, but also to get an acoustic sense of what those obstacles may be. By tapping the cane, they also create sound waves that help them to gauge the location
https://en.wikipedia.org/wiki/Aquatic%20plant%20management
Aquatic plant management involves the science and methodologies used to control invasive and non-invasive aquatic plant species in waterways. Methods used include spraying herbicide, biological controls, and mechanical removal, as well as habitat modification. Preventing the introduction of invasive species is ideal. Aquaculture has been a source of exotic and ultimately invasive species introductions, such as Oreochromis niloticus. Aquatic plants released from home fish tanks have also been an issue. Impact Aquatic weeds are most economically problematic where human activity and water meet. Water weeds reduce capacity for hydroelectric generation, drinking water supply, industrial water supply, agricultural water supply, and recreational use of water bodies, including recreational boating. Some weeds do this by increasing, rather than decreasing, the evaporation loss at the surface. Particular weeds and aquatic insects have a special relationship which makes the plants a source of insect pests. Organizations In Florida, the Florida Fish and Wildlife Conservation Commission (FWC) has an aquatic plant management section. The State of Washington has an Aquatic Plant Management Program. The Aquatic Plant Management Society is an organization in the U.S. that publishes the Journal of Aquatic Plant Management. The City of Winter Park, Florida has a herbicide program. Species Invasive aquatic species include:
Eichhornia crassipes (water hyacinth), invasive outside its native habitat in the Amazon Basin
Hydrilla, invasive in North America
Limnobium laevigatum, invasive in the U.S.
Myriophyllum spicatum, invasive in North America
Myriophyllum verticillatum, invasive in North America
Monochoria vaginalis, invasive outside its native habitat in Asia and the Pacific
Pistia
Salvinia molesta
Aquatic plant harvesting methods Harvesting refers to anthropogenic removal of aquatic plants from their environment. Aquatic plant harvesting is often
https://en.wikipedia.org/wiki/Mind-controlled%20wheelchair
A mind-controlled wheelchair is a mind-machine interfacing device that uses thought (neural impulses) to command the motorised wheelchair's motion. The first such device to reach production was designed by Diwakar Vaish, Head of Robotics and Research at A-SET Training & Research Institutes. The wheelchair is of great importance to patients with locked-in syndrome (LIS), in which a patient is aware but cannot move or communicate verbally due to complete paralysis of nearly all voluntary muscles in the body except the eyes. Such wheelchairs can also be used in cases of muscular dystrophy, a disease that weakens the musculoskeletal system and hampers locomotion (walking or moving). History The technology behind brain or mind control goes back to at least 2002, when researchers implanted electrodes into the brains of macaque monkeys, which enabled them to control a cursor on a computer screen. Similar techniques were able to control robotic arms and simple joysticks. In 2009, researchers at the University of South Florida developed a wheelchair-mounted robotic arm that captured the user's brain waves and converted them into robotic movements. The Brain-Computer Interface (BCI), which captures P-300 brain wave responses and converts them to actions, was developed by USF psychology professor Emanuel Donchin and colleagues. The P-300 brain signal serves as a virtual "finger" for patients who cannot move, such as those with locked-in syndrome or those with Lou Gehrig's Disease (ALS). Technology Operation A mind-controlled wheelchair functions using a brain–computer interface: an electroencephalogram (EEG) worn on the user's forehead detects neural impulses that reach the scalp, allowing the on-board micro-controller to detect the user's thought process, interpret it, and control the wheelchair's movement. Functionality The A-SET wheelchair comes standard with many different types of sensors, like temperature sensors, sound sensors and an array of distance sensors which
https://en.wikipedia.org/wiki/Pseudorapidity
In experimental particle physics, pseudorapidity, $\eta$, is a commonly used spatial coordinate describing the angle of a particle relative to the beam axis. It is defined as $\eta \equiv -\ln\!\left[\tan\frac{\theta}{2}\right]$, where $\theta$ is the angle between the particle three-momentum $\mathbf{p}$ and the positive direction of the beam axis. Inversely, $\theta = 2\arctan\left(e^{-\eta}\right)$. As a function of three-momentum $\mathbf{p}$, pseudorapidity can be written as $\eta = \frac{1}{2}\ln\!\left(\frac{|\mathbf{p}| + p_L}{|\mathbf{p}| - p_L}\right)$, where $p_L$ is the component of the momentum along the beam axis (i.e. the longitudinal momentum – using the conventional system of coordinates for hadron collider physics, this is also commonly denoted $p_z$). In the limit where the particle is travelling close to the speed of light, or equivalently in the approximation that the mass of the particle is negligible, one can make the substitution $|\mathbf{p}| \approx E$ (i.e. in this limit, the particle's only energy is its momentum-energy, similar to the case of the photon), and hence the pseudorapidity converges to the definition of rapidity used in experimental particle physics: $y = \frac{1}{2}\ln\!\left(\frac{E + p_z}{E - p_z}\right)$. This differs slightly from the definition of rapidity in special relativity, which uses $|\mathbf{p}|$ instead of $p_z$. However, pseudorapidity depends only on the polar angle of the particle's trajectory, and not on the energy of the particle. One speaks of the "forward" direction in a hadron collider experiment, which refers to regions of the detector that are close to the beam axis, at high $|\eta|$; in contexts where the distinction between "forward" and "backward" is relevant, the former refers to the positive z-direction and the latter to the negative z-direction. In hadron collider physics, the rapidity (or pseudorapidity) is preferred over the polar angle because, loosely speaking, particle production is constant as a function of rapidity, and because differences in rapidity are Lorentz invariant under boosts along the longitudinal axis: they transform additively, similar to velocities in Galilean relativity. A measurement of a rapidity difference $\Delta y$ between particles (or $\Delta\eta$ if the particles involved are massless) is hence not dependent on the
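The angle conversions above are easy to compute directly; here is a small Python sketch (function names are illustrative):

```python
import math

def eta_from_theta(theta):
    """Pseudorapidity from the polar angle theta (radians from the beam axis)."""
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    return 2.0 * math.atan(math.exp(-eta))

def rapidity(energy, p_long):
    """Rapidity from energy and longitudinal momentum (same units, E > |p_long|)."""
    return 0.5 * math.log((energy + p_long) / (energy - p_long))

print(eta_from_theta(math.pi / 2))                 # 0.0: perpendicular to the beam
print(round(eta_from_theta(math.radians(10)), 3))  # ~2.436: close to the beam axis
```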
https://en.wikipedia.org/wiki/Heilbronn%20triangle%20problem
In discrete geometry and discrepancy theory, the Heilbronn triangle problem is a problem of placing points in the plane, avoiding triangles of small area. It is named after Hans Heilbronn, who conjectured that, no matter how points are placed in a given area, the smallest triangle area will be at most inversely proportional to the square of the number of points. His conjecture was proven false, but the asymptotic growth rate of the minimum triangle area remains unknown. Definition The Heilbronn triangle problem concerns the placement of $n$ points within a shape in the plane, such as the unit square or the unit disk, for a given number $n$. Each triple of points forms the three vertices of a triangle, and among these triangles, the problem concerns the smallest triangle, as measured by area. Different placements of points will have different smallest triangles, and the problem asks: how should points be placed to maximize the area of the smallest triangle? More formally, the shape may be assumed to be a compact set in the plane, meaning that it stays within a bounded distance from the origin and that points are allowed to be placed on its boundary. In most work on this problem, the shape is additionally a convex set of nonzero area. When three of the placed points lie on a line, they are considered as forming a degenerate triangle whose area is defined to be zero, so placements that maximize the smallest triangle will not have collinear triples of points. The assumption that the shape is compact implies that there exists an optimal placement of points, rather than only a sequence of placements approaching optimality. The number $\Delta(n)$ may be defined as the area of the smallest triangle in this optimal placement. An example is shown in the figure, with six points in a unit square. These six points form 20 different triangles, four of which are shaded in the figure. Six of these 20 triangles, including two of the shaded shapes, have area 1/8; the remaining 14 triangles have larger areas. This is the optimal placement
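Since the quantity being optimized is just the minimum triangle area over all triples of points, it can be evaluated by brute force for small point sets; here is a Python sketch (O(n³) over the triples, using the shoelace formula):

```python
from itertools import combinations

def smallest_triangle_area(points):
    """Minimum triangle area over all triples of points (shoelace formula);
    collinear triples count as degenerate triangles of area zero."""
    return min(
        abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
        for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3)
    )

# Simple check: the four corners of the unit square. Every triple forms a
# right triangle of area 1/2, so the minimum is 0.5.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(smallest_triangle_area(pts))  # -> 0.5
```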
https://en.wikipedia.org/wiki/Shock%20synthesis
Shock synthesis is the process of complex organic chemical creation through high-velocity impact on simple amino acids, theorized to take place when a comet strikes a planetary body, or through the shock wave created by a thunderclap. Hyper-velocity impact shock of a typical comet ice mixture produced several amino acids after hydrolysis. These include equal amounts of D- and L-alanine, and the non-protein amino acids α-aminoisobutyric acid and isovaline, as well as their precursors.
https://en.wikipedia.org/wiki/GC%20box
In molecular biology, a GC box, also known as a GSG box, is a distinct pattern of nucleotides found in the promoter region of some eukaryotic genes. The GC box is upstream of the TATA box, and approximately 110 bases upstream from the transcription initiation site. It has a consensus sequence GGGCGG which is position-dependent and orientation-independent. The GC elements are bound by transcription factors and have similar functions to enhancers. Some known GC box-binding proteins include Sp1, Krox/Egr, Wilms' tumor, MIGI, and CREA. The GC box is commonly the binding site for zinc finger proteins. An alpha helix section of the protein corresponds with a major groove in the DNA. Zinc-fingers bind to triplet base pair sequences, with residue 21 binding to the first base pair, residue 18 binding to the second base pair, and residue 15 binding to the third base pair. The triplet base pairs can either be a GGG or a GCG. If residue 18 is a histidine, it will bind to a G, and if residue 18 is a glutamate, it will bind to a C. GC box-binding zinc fingers have between 2 and 4 fingers, making them interact with base pair sequences that are 6 to 8 base pairs in length.
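As a small illustration of working with the consensus sequence, here is a Python sketch that scans a DNA string for the motif; the promoter sequence is invented for the example, and scanning the reverse complement is an assumption motivated by the orientation-independence noted above:

```python
import re

# The GC-box consensus GGGCGG; since binding is orientation-independent,
# this scan also considers the reverse complement, CCGCCC.
GC_BOX = re.compile(r"GGGCGG|CCGCCC")

promoter = "TTACCGCCCATGGGCGGAGTATAAAAGGCA"  # invented example sequence
for m in GC_BOX.finditer(promoter):
    print(f"GC box motif {m.group()} at position {m.start()}")
```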
https://en.wikipedia.org/wiki/Acetarsol
Acetarsol (or acetarsone) is an anti-infective drug. It was first discovered in 1921 at the Pasteur Institute by Ernest Fourneau, and sold under the brand name Stovarsol. It has been given in the form of suppositories. Acetarsol can be used to make arsthinol. It was cancelled and withdrawn from the market on 12 August 1997. Medical uses Acetarsol has been used for the treatment of diseases such as syphilis, amoebiasis, yaws, trypanosomiasis and malaria. Acetarsol was used for the treatment of Trichomonas vaginalis and Candida albicans infections. In the oral form, acetarsol can be used for the treatment of intestinal amoebiasis. As a suppository, acetarsol was researched for the treatment of proctitis. Mechanism of Action Although the mechanism of action is not fully known, acetarsol may bind to protein sulfhydryl groups located in the parasite, creating lethal As–S bonds that kill the parasite. Chemistry and pharmacokinetics Acetarsol is N-acetyl-4-hydroxy-m-arsanilic acid, a pentavalent arsenical compound with antiprotozoal and anthelmintic properties. The arsenic found in acetarsol is excreted mainly in urine. The level of arsenic in urine after acetarsol administration reaches close to the toxic range. Some reports indicate retention of arsenic, which can be physiologically dangerous. Toxicity Some reports indicate that acetarsol can produce effects in the eyes such as optic neuritis and optic atrophy.
https://en.wikipedia.org/wiki/Beam%20%28music%29
In musical notation, a beam is a horizontal or diagonal line used to connect multiple consecutive notes (and occasionally rests) to indicate rhythmic grouping. Only eighth notes (quavers) or shorter can be beamed. The number of beams is equal to the number of flags that would be present on an unbeamed note. Beaming refers to the conventions and use of beams. A primary beam connects a note group unbroken, while a secondary beam is interrupted or partially broken. Grouping Beam spans indicate rhythmic groupings, usually determined by the time signature. Therefore, beams do not usually cross bar lines or major subdivisions of bars. A single eighth note, or any faster note, is always stemmed with flags, while two or more are typically beamed in groups. In modern practice, beams may span across rests in order to make rhythmic groups clearer. In vocal music, beams were traditionally used only to connect notes sung to the same syllable. In modern practice it is more common to use standard beaming rules, while indicating multi-note syllables with slurs. Positioning Notes joined by a beam usually have all the stems pointing in the same direction (up or down). The average pitch of the notes is used to determine the direction – if the average pitch is below the middle staff-line, the stems and beams usually go above the notehead, otherwise they go below. The direction of beams usually follows the general direction of the notes it groups, slanting down if the notes go down, slanting up if the notes go up, and level if the first and last notes are the same. Feathered beaming Feathered beaming shows a gradual change in the speed of notes. It is shown with a primary straight beam and other diagonal secondary beams (that together resemble a feather, hence the name). These secondary beams suggest a gradual acceleration or deceleration from the first note value within the feathered beam to the last. (A beam getting wider from left to right shows acceleration.) The longest va
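The stem-direction rule described above is simple enough to state programmatically; here is a minimal Python sketch, where the encoding of note positions (0 at the middle staff line, positive above it) is an assumption made for the example:

```python
def beam_stem_direction(staff_steps):
    """Decide stem direction for a beamed group.
    staff_steps: note positions with 0 at the middle staff line (positive = higher).
    Average pitch below the middle line -> stems point up; otherwise down."""
    average = sum(staff_steps) / len(staff_steps)
    return "up" if average < 0 else "down"

print(beam_stem_direction([-3, -2, -4]))  # notes sitting low on the staff: "up"
print(beam_stem_direction([2, 3, 1]))     # notes sitting high on the staff: "down"
```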
https://en.wikipedia.org/wiki/BSAFE
Dell BSAFE, formerly known as RSA BSAFE, is a FIPS 140-2 validated cryptography library, available in both C and Java. BSAFE was initially created by RSA Security, which was purchased by EMC and then, in turn, by Dell. When Dell sold the RSA business to Symphony Technology Group in 2020, Dell elected to retain the BSAFE product line. BSAFE was one of the most common encryption toolkits before the RSA patent expired in September 2000. It also contained implementations of the RCx ciphers, with the most common one being RC4. From 2004 to 2013 the default random number generator in the library was a NIST-approved RNG standard, widely known to be insecure from at least 2006, containing a kleptographic backdoor from the American National Security Agency (NSA), as part of its secret Bullrun program. In 2013 Reuters revealed that RSA had received a payment of $10 million to set the compromised algorithm as the default option. The RNG standard was subsequently withdrawn in 2014, and the RNG removed from BSAFE beginning in 2015. Cryptography backdoors Dual_EC_DRBG random number generator From 2004 to 2013, the default cryptographically secure pseudorandom number generator (CSPRNG) in BSAFE was Dual_EC_DRBG, which contained an alleged backdoor from NSA, in addition to being a biased and slow CSPRNG. The cryptographic community had been aware that Dual_EC_DRBG was a very poor CSPRNG since shortly after the specification was posted in 2005, and by 2007 it had become apparent that the CSPRNG seemed to be designed to contain a hidden backdoor for NSA, usable only by NSA via a secret key. In 2007, Bruce Schneier described the backdoor as "too obvious to trick anyone to use it." The backdoor was confirmed in the Snowden leaks in 2013, and it was insinuated that NSA had paid RSA Security US$10 million to use Dual_EC_DRBG by default in 2004, though RSA Security denied that they knew about the backdoor in 2004. The Reuters article which revealed the secret $10 million contract to us
https://en.wikipedia.org/wiki/Unifacial%20cambium
The unifacial cambium (pl. cambia or cambiums) produces cells to the interior of its cylinder. These cells differentiate into xylem tissue. Unlike the more common bifacial cambium found in later woody plants, the unifacial cambium does not produce phloem to its exterior. Also in contrast to the bifacial cambium, the unifacial cambium is unable to expand its circumference with anticlinal cell division. Cell elongation provides a limited amount of expansion. Unifacial cambium plant morphology and life cycles The unifacial cambium allowed plants to grow as tall as 50 metres. Lacking secondary phloem, unifacial cambium plants developed alternative strategies to long range nutrient transport. For example, the stems of lycophyte trees were covered in photosynthesizing leaf bases. Due to the limited capacity for circumference growth, unifacial cambium plants had very little wood compared to modern woody plants. Xylem tissue in unifacial cambium plants was particularly structurally efficient. Additional structural support was provided in lycophytes by a special periderm tissue in the outer cortex. Lycophyte trees exhibit determinate growth. These trees appear to have lived for most of their life cycle as a 'stump', establishing root networks underground, before shooting up rapidly, releasing spores, and dying shortly thereafter. External links 'Key innovations, convergence, and success: macroevolutionary lessons from plant phylogeny', article by Michael J. Donahue from Paleobiology 31(2), 2005 (pdf) Devonian Times: More About Lycopsids Plant physiology Plant anatomy
https://en.wikipedia.org/wiki/WENO%20methods
In the numerical solution of differential equations, WENO (weighted essentially non-oscillatory) methods are classes of high-resolution schemes used for hyperbolic partial differential equations. These methods were developed from ENO methods (essentially non-oscillatory). The first WENO scheme was developed by Liu, Osher and Chan in 1994. In 1996, Guang-Shan Jiang and Chi-Wang Shu developed a new WENO scheme called WENO-JS. Nowadays, there are many WENO methods. See also High-resolution scheme ENO methods
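As a concrete illustration, here is a minimal Python sketch of the fifth-order WENO-JS reconstruction of a cell-interface value from a five-point stencil, using the standard Jiang–Shu linear weights and smoothness indicators. Real solvers wrap this kernel in flux splitting and time integration, which are omitted here.

import numpy as np

def weno5(v):
    # Fifth-order WENO-JS value at the interface i+1/2 from the stencil
    # v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2]).
    eps = 1e-6                               # guards against division by zero
    p0 = ( 2*v[0] - 7*v[1] + 11*v[2]) / 6.0  # three 3rd-order candidate values
    p1 = (  -v[1] + 5*v[2] +  2*v[3]) / 6.0
    p2 = ( 2*v[2] + 5*v[3] -    v[4]) / 6.0
    b0 = 13/12*(v[0] - 2*v[1] + v[2])**2 + 1/4*(v[0] - 4*v[1] + 3*v[2])**2
    b1 = 13/12*(v[1] - 2*v[2] + v[3])**2 + 1/4*(v[1] - v[3])**2
    b2 = 13/12*(v[2] - 2*v[3] + v[4])**2 + 1/4*(3*v[2] - 4*v[3] + v[4])**2
    alpha = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = alpha / alpha.sum()                  # nonlinear weights favour smooth stencils
    return w[0]*p0 + w[1]*p1 + w[2]*p2

print(weno5(np.array([1., 1., 1., 1., 1.])))  # smooth data: exactly 1.0
print(weno5(np.array([0., 0., 0., 1., 1.])))  # near a jump: ~0, oscillation avoided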
https://en.wikipedia.org/wiki/Ekphonetic%20notation
Ekphonetic notation consists of symbols added to certain sacred texts, especially lectionary readings of Biblical texts, as a mnemonic device to assist in their cantillation. Ekphonetic notation can take a number of forms, and has been used in several Jewish and Christian plainchant traditions, but is most commonly associated with Byzantine chant. In many cases, the original meaning of ekphonetic neumes is obscure, and must be reconstructed by comparison with later notation. Joseph Huzaya introduced ekphonetic notation into Syriac in the early 6th century. See also Hebrew cantillation Musical notation
https://en.wikipedia.org/wiki/Pharmaceutical%20distribution
The distribution of medications has special drug safety and security considerations. Some drugs require cold chain management in their distribution. The industry uses track and trace technology, though the timings for implementation and the information required vary across different countries, with varying laws and standards. Regulation Because governments regulate access to drugs, governments control drug distribution and the drug supply chain more tightly than trade in other goods. Distribution begins with the pharmaceutical industry manufacturing drugs. From there, intermediaries in the public sector, private sector, and non-governmental organizations acquire drugs to provide them to other intermediaries. Eventually, the drugs reach different classes of consumers who use them. Good distribution practice (GDP) is a quality assurance system, which includes requirements for the purchase, receiving, storage and export of drugs intended for human consumption. It regulates the division and movement of pharmaceutical products from the premises of the manufacturer of medicinal products, or another central point, to the end user thereof, or to an intermediate point, by means of various transport methods, via various storage and/or health establishments. Argentina In 2011, Argentina introduced a catalogue of drugs covered by its national drug traceability scheme, listing more than 3,000 drugs that require the placing of unique serial numbers and tamper-evident features on the secondary packaging. The drugs listed are recorded in real time in a central database managed by the National Administration of Drugs, Foods, Medical Devices of Argentina (ANMAT) under Regulation 3683, which uses Global Location Numbers (GLNs) to identify the various actors in the supply chain. The purpose of this program is to actively limit the circulation of illegitimate medicines. Brazil The 2009 Brazilian Federal Law 11.903 and subsequent regulations of the National Agency for Sanitary Surveillance in Brazil (ANVISA) require
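The GS1 identifiers mentioned here (GLNs, and the GTINs carried alongside serial numbers) share the same mod-10 check digit. A minimal Python sketch, verified against a common EAN-13 test value:

def gs1_check_digit(body):
    # GS1 mod-10 check digit (GTIN-13, 13-digit GLN, etc.): weights 3,1,3,...
    # applied starting from the rightmost digit of the identifier body.
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10

assert gs1_check_digit("400638133393") == 1  # full EAN-13: 4006381333931

Serialization systems typically validate this digit at every scan point before looking the code up in the national database.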
https://en.wikipedia.org/wiki/Hippo%20signaling%20pathway
The Hippo signaling pathway, also known as the Salvador-Warts-Hippo (SWH) pathway, is a signaling pathway that controls organ size in animals through the regulation of cell proliferation and apoptosis. The pathway takes its name from one of its key signaling components—the protein kinase Hippo (Hpo). Mutations in this gene lead to tissue overgrowth, or a "hippopotamus"-like phenotype. A fundamental question in developmental biology is how an organ knows to stop growing after reaching a particular size. Organ growth relies on several processes occurring at the cellular level, including cell division and programmed cell death (or apoptosis). The Hippo signaling pathway is involved in restraining cell proliferation and promoting apoptosis. As many cancers are marked by unchecked cell division, this signaling pathway has become increasingly significant in the study of human cancer. The Hippo pathway also has a critical role in stem cell and tissue-specific progenitor cell self-renewal and expansion. The Hippo signaling pathway appears to be highly conserved. While most of the Hippo pathway components were identified in the fruit fly (Drosophila melanogaster) using mosaic genetic screens, orthologs to these components (genes that are related through speciation events and thus tend to retain the same function in different species) have subsequently been found in mammals. Thus, the delineation of the pathway in Drosophila has helped to identify many genes that function as oncogenes or tumor suppressors in mammals. Mechanism The Hippo pathway consists of a core kinase cascade in which Hpo phosphorylates the protein kinase Warts (Wts). Hpo (MST1/2 in mammals) is a member of the Ste-20 family of protein kinases. This highly conserved group of serine/threonine kinases regulates several cellular processes, including cell proliferation, apoptosis, and various stress responses. Once phosphorylated, Wts (LATS1/2 in mammals) becomes active. Misshapen (Msn, MAP4K4/6
https://en.wikipedia.org/wiki/Contourlet
Contourlets form a multiresolution directional tight frame designed to efficiently approximate images made of smooth regions separated by smooth boundaries. The contourlet transform has a fast implementation based on a Laplacian pyramid decomposition followed by directional filter banks applied on each bandpass subband. Contourlet transform Introduction and motivation In the field of geometrical image transforms, there are many 1-D transforms designed for detecting or capturing the geometry of image information, such as the Fourier and wavelet transforms. However, the ability of 1-D transforms to capture intrinsic geometrical structures, such as the smoothness of curves, is limited to one direction, so more powerful representations are required in higher dimensions. The contourlet transform, which was proposed by Do and Vetterli in 2002, is a two-dimensional transform method for image representation. The contourlet transform has the properties of multiresolution, localization, directionality, critical sampling and anisotropy. Its basis functions are multiscale and multidimensional. The contours of original images, which are the dominant features in natural images, can be captured effectively with a few coefficients by using the contourlet transform. The contourlet transform is inspired by the human visual system and the curvelet transform, which can capture the smoothness of the contours of images with different elongated shapes and in a variety of directions. However, it is difficult to sample the curvelet transform on a rectangular grid, since the curvelet transform was developed in the continuous domain, and directions other than horizontal and vertical look very different on a rectangular grid. Therefore, the contourlet transform was proposed initially as a directional multiresolution transform in the discrete domain. Definition The contourlet transform uses a double filter bank structure to capture the smooth contours of images. In this double filter bank, the Laplacian pyramid (L
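As a sketch of the first stage of that double filter bank, the following Python fragment implements a single Laplacian pyramid level. A 2x2 box filter stands in for the proper lowpass filters used in practice, but the decomposition into a coarse approximation plus a bandpass residual, and its reconstruction property, are the same in spirit.

import numpy as np

def lp_level(img):
    # One Laplacian pyramid level: a downsampled coarse image plus the
    # bandpass residual that the directional filter bank would then split.
    h, w = (s - s % 2 for s in img.shape)        # trim to even dimensions
    img = img[:h, :w].astype(float)
    coarse = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                     img[0::2, 1::2] + img[1::2, 1::2])       # lowpass + downsample
    pred = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # upsample prediction
    return coarse, img - pred                     # residual = bandpass subband

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # toy ramp image
coarse, residual = lp_level(img)
# Reconstruction: upsampled coarse + residual recovers the input exactly here.
assert np.allclose(np.repeat(np.repeat(coarse, 2, 0), 2, 1) + residual, img)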
https://en.wikipedia.org/wiki/DS80C390
The DS80C390 is a microcontroller, introduced by Dallas Semiconductor (now part of Maxim Integrated Products), whose architecture is derived from that of the Intel 8051 processor series. It has a 22-bit code memory address space. It also contains two Controller Area Network (CAN) controllers and a 32-bit integer coprocessor. The open-source Small Device C Compiler (SDCC) supports the processor. It was used in the initial version of the Tiny Internet Interface (TINI) processor module, where it was later superseded by the DS80C400, a processor that also incorporates an Ethernet port.
https://en.wikipedia.org/wiki/Homogeneous%20polynomial
In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree. For example, $x^5 + 2x^3y^2 + 9xy^4$ is a homogeneous polynomial of degree 5, in two variables; the sum of the exponents in each term is always 5. The polynomial $x^3 + 3x^2y + z^7$ is not homogeneous, because the sum of exponents does not match from term to term. The function defined by a homogeneous polynomial is always a homogeneous function. An algebraic form, or simply form, is a function defined by a homogeneous polynomial. A binary form is a form in two variables. A form is also a function defined on a vector space, which may be expressed as a homogeneous function of the coordinates over any basis. A polynomial of degree 0 is always homogeneous; it is simply an element of the field or ring of the coefficients, usually called a constant or a scalar. A form of degree 1 is a linear form. A form of degree 2 is a quadratic form. In geometry, the Euclidean distance is the square root of a quadratic form. Homogeneous polynomials are ubiquitous in mathematics and physics. They play a fundamental role in algebraic geometry, as a projective algebraic variety is defined as the set of the common zeros of a set of homogeneous polynomials. Properties A homogeneous polynomial defines a homogeneous function. This means that, if a multivariate polynomial P is homogeneous of degree d, then $P(\lambda x_1, \ldots, \lambda x_n) = \lambda^d\,P(x_1, \ldots, x_n)$ for every $\lambda$ in any field containing the coefficients of P. Conversely, if the above relation is true for infinitely many $\lambda$ then the polynomial is homogeneous of degree d. In particular, if P is homogeneous then $P(x) = 0$ implies $P(\lambda x) = 0$ for every $\lambda$. This property is fundamental in the definition of a projective variety. Any nonzero polynomial may be decomposed, in a unique way, as a sum of homogeneous polynomials of different degrees, which are called the homogeneous components of the polynomial. Given a polynomial ring over a field (or, more generally, a ring) K, the homogeneous polynomials of degree d
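The defining property is easy to verify mechanically. A small sketch using SymPy (assumed installed) checks it for the degree-5 example above:

import sympy as sp

x, y, lam = sp.symbols('x y lambda')
P = x**5 + 2*x**3*y**2 + 9*x*y**4            # the degree-5 example

scaled = P.subs({x: lam*x, y: lam*y}, simultaneous=True)
assert sp.expand(scaled - lam**5 * P) == 0   # P(lambda*x, lambda*y) = lambda**5 * P(x, y)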
https://en.wikipedia.org/wiki/Label%20Rouge
Label Rouge (Red Label) is a sign of quality assurance in France as defined by Law No. 2006-11 (5 January 2006). Products eligible for the Label Rouge are food items and non-food, unprocessed agricultural products such as flowers. According to the French Ministry of Agriculture: "The Red Label certifies that a product has a specific set of characteristics establishing a superior level to that of a similar current product". Certification To obtain the Label Rouge, a very stringent set of standards prepared by a group of producers must be approved. These standards establish the criteria which the product must meet throughout the production chain, including farming techniques, feed, processing and distribution. Approval is officially announced through a joint decree from the Minister for Agriculture and Fisheries and the Minister for Consumer Affairs, on the recommendation of the National Institute for Origin and Quality (INAO). INAO is the French public body responsible for quality and origin marks relating to food products.
https://en.wikipedia.org/wiki/Hydraulic%20roughness
Hydraulic roughness is the measure of the amount of frictional resistance water experiences when passing over land and channel features. One roughness coefficient is Manning's n-value. Manning's n is used extensively around the world to predict the degree of roughness in channels. Flow velocity is strongly dependent on the resistance to flow. An increase in this n value will cause a decrease in the velocity of water flowing across a surface. Manning's n The value of Manning's n is affected by many variables. Factors like suspended load, sediment grain size, presence of bedrock or boulders in the stream channel, variations in channel width and depth, and overall sinuosity of the stream channel can all affect Manning's n value. Biological factors have the greatest overall effect on Manning's n; bank stabilization by vegetation, height of grass and brush across a floodplain, and stumps and logs creating natural dams are the main observable influences. Biological Importance Recent studies have found a relationship between hydraulic roughness and salmon spawning habitat; "bed-surface grain size is responsive to hydraulic roughness caused by bank irregularities, bars, and wood debris… We find that wood debris plays an important role at our study sites, not only providing hydraulic roughness but also influencing pool spacing, frequency of textural patches, and the amplitude and wavelength of bank and bar topography and their consequent roughness. Channels with progressively greater hydraulic roughness have systematically finer bed surfaces, presumably due to reduced bed shear stress, resulting in lower channel competence and diminished bed load transport capacity, both of which promote textural fining". Textural fining of stream beds can affect more than just salmon spawning habitats, "bar and wood roughness create a greater variety of textural patches, offering a range of aquatic habitats that may promote biologic diversity or be of use to specific animals at differe
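The velocity dependence is usually expressed through Manning's equation, V = (1/n) R_h^(2/3) S^(1/2) in SI units. A minimal Python sketch makes the inverse dependence on n explicit:

def manning_velocity(n, hydraulic_radius, slope):
    # Manning's equation in SI units: V = (1/n) * R_h**(2/3) * S**(1/2),
    # with V in m/s, R_h (hydraulic radius) in m, and S the channel slope.
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

print(manning_velocity(0.030, 1.2, 0.002))   # smoother channel
print(manning_velocity(0.060, 1.2, 0.002))   # doubling n halves the velocity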
https://en.wikipedia.org/wiki/Notch%20signaling%20pathway
The Notch signaling pathway is a highly conserved cell signaling system present in most animals. Mammals possess four different notch receptors, referred to as NOTCH1, NOTCH2, NOTCH3, and NOTCH4. The notch receptor is a single-pass transmembrane receptor protein. It is a hetero-oligomer composed of a large extracellular portion, which associates in a calcium-dependent, non-covalent interaction with a smaller piece of the notch protein composed of a short extracellular region, a single transmembrane-pass, and a small intracellular region. Notch signaling promotes proliferative signaling during neurogenesis, and its activity is inhibited by Numb to promote neural differentiation. It plays a major role in the regulation of embryonic development. Notch signaling is dysregulated in many cancers, and faulty notch signaling is implicated in many diseases, including T-cell acute lymphoblastic leukemia (T-ALL), cerebral autosomal-dominant arteriopathy with sub-cortical infarcts and leukoencephalopathy (CADASIL), multiple sclerosis, Tetralogy of Fallot, and Alagille syndrome. Inhibition of notch signaling inhibits the proliferation of T-cell acute lymphoblastic leukemia in both cultured cells and a mouse model. Discovery In 1914, John S. Dexter noticed the appearance of a notch in the wings of the fruit fly Drosophila melanogaster. The alleles of the gene were identified in 1917 by American evolutionary biologist Thomas Hunt Morgan. Its molecular analysis and sequencing was independently undertaken in the 1980s by Spyros Artavanis-Tsakonas and Michael W. Young. Alleles of the two C. elegans Notch genes were identified based on developmental phenotypes: lin-12 and glp-1. The cloning and partial sequence of lin-12 was reported at the same time as Drosophila Notch by Iva Greenwald. Mechanism The Notch protein spans the cell membrane, with part of it inside and part outside. Ligand proteins binding to the extracellular domain induce proteolytic cleavage and release of th
https://en.wikipedia.org/wiki/Multipurpose%20community%20telecenters
Multipurpose Community Telecenters are telecentre facilities which provide public access to a variety of communication and information services, such as libraries and seminar rooms. They are promoted by many governments and organisations including the International Telecommunication Union. They are generally introduced to try to bring access to information and communication technologies to rural communities, but often find significant obstacles in the high cost of connectivity, low digital literacy in the community and high maintenance costs, and are thus forced to shut down. Of the 23 MCTs built in rural Mexico, only 5 were still working two years later.
https://en.wikipedia.org/wiki/Interactome
In molecular biology, an interactome is the whole set of molecular interactions in a particular cell. The term specifically refers to physical interactions among molecules (such as those among proteins, also known as protein–protein interactions, PPIs; or between small molecules and proteins) but can also describe sets of indirect interactions among genes (genetic interactions). The word "interactome" was originally coined in 1999 by a group of French scientists headed by Bernard Jacq. Mathematically, interactomes are generally displayed as graphs. Though interactomes may be described as biological networks, they should not be confused with other networks such as neural networks or food webs. Molecular interaction networks Molecular interactions can occur between molecules belonging to different biochemical families (proteins, nucleic acids, lipids, carbohydrates, etc.) and also within a given family. Whenever such molecules are connected by physical interactions, they form molecular interaction networks that are generally classified by the nature of the compounds involved. Most commonly, interactome refers to the protein–protein interaction (PPI) network (PIN) or subsets thereof. For instance, the Sirt-1 protein interactome is the network involving Sirt-1 and its directly interacting proteins, whereas the Sirt-family second-order interactome extends this to interactions up to the second order of neighbors (neighbors of neighbors). Another extensively studied type of interactome is the protein–DNA interactome, also called a gene-regulatory network, a network formed by transcription factors, chromatin regulatory proteins, and their target genes. Even metabolic networks can be considered as molecular interaction networks: metabolites, i.e. chemical compounds in a cell, are converted into each other by enzymes, which have to bind their substrates physically. In fact, all interactome types are interconnected. For instance, protein interactomes contain ma
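A minimal Python sketch of this graph view, using the networkx library; the protein names and edges below are purely illustrative placeholders, not curated interaction data:

import networkx as nx

ppi = nx.Graph()                             # proteins as nodes, PPIs as edges
ppi.add_edges_from([
    ("SIRT1", "TP53"), ("SIRT1", "FOXO3"), ("SIRT1", "PPARG"),   # first order
    ("TP53", "MDM2"), ("FOXO3", "AKT1"),                         # second order
])

first_order = set(ppi.neighbors("SIRT1"))
second_order = set().union(*(ppi.neighbors(n) for n in first_order)) \
               - first_order - {"SIRT1"}
print(sorted(first_order))                   # direct interactors
print(sorted(second_order))                  # neighbors of neighbors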
https://en.wikipedia.org/wiki/Flow%20cytometry
Flow cytometry (FC) is a technique used to detect and measure physical and chemical characteristics of a population of cells or particles. In this process, a sample containing cells or particles is suspended in a fluid and injected into the flow cytometer instrument. The sample is focused to ideally flow one cell at a time through a laser beam, where the light scattered is characteristic to the cells and their components. Cells are often labeled with fluorescent markers so light is absorbed and then emitted in a band of wavelengths. Tens of thousands of cells can be quickly examined and the data gathered are processed by a computer. Flow cytometry is routinely used in basic research, clinical practice, and clinical trials. Uses for flow cytometry include: Cell counting Cell sorting Determining cell characteristics and function Detecting microorganisms Biomarker detection Protein engineering detection Diagnosis of health disorders such as blood cancers Measuring genome size A flow cytometry analyzer is an instrument that provides quantifiable data from a sample. Other instruments using flow cytometry include cell sorters which physically separate and thereby purify cells of interest based on their optical properties. History The first impedance-based flow cytometry device, using the Coulter principle, was disclosed in U.S. Patent 2,656,508, issued in 1953, to Wallace H. Coulter. Mack Fulwyler was the inventor of the forerunner to today's flow cytometers - particularly the cell sorter. Fulwyler developed this in 1965 with his publication in Science. The first fluorescence-based flow cytometry device (ICP 11) was developed in 1968 by Wolfgang Göhde from the University of Münster, filed for patent on 18 December 1968 and first commercialized in 1968/69 by German developer and manufacturer Partec through Phywe AG in Göttingen. At that time, absorption methods were still widely favored by other scientists over fluorescence methods. Soon after, flow cytometr
https://en.wikipedia.org/wiki/CryptoParty
CryptoParty (Crypto-Party) is a grassroots global endeavour to introduce the basics of practical cryptography such as the Tor anonymity network, I2P, Freenet, key signing parties, disk encryption and virtual private networks to the general public. The project primarily consists of a series of free public workshops. History As a successor to the Cypherpunks of the 1990s, CryptoParty was conceived in late August 2012 by the Australian journalist Asher Wolf in a Twitter post following the passing of the Cybercrime Legislation Amendment Bill 2011 and the proposal of a two-year data retention law in that country. The DIY, self-organizing movement immediately went viral, with a dozen autonomous CryptoParties being organized within hours in cities throughout Australia, the US, the UK, and Germany. Many more parties were soon organized or held in Chile, The Netherlands, Hawaii, Asia, etc. Tor usage in Australia itself spiked, and CryptoParty London with 130 attendees—some of whom were veterans of the Occupy London movement—had to be moved from London Hackspace to the Google campus in east London's Tech City. As of mid-October 2012 some 30 CryptoParties had been held globally, some on a continuing basis, and CryptoParties were held on the same day in Reykjavik, Brussels, and Manila. The first draft of the 442-page CryptoParty Handbook (the hard copy of which is available at cost) was pulled together in three days using the book sprint approach, and was released 2012-10-04 under a CC BY-SA license. Edward Snowden involvement In May 2014, Wired reported that Edward Snowden, while employed by Dell as an NSA contractor, organized a local CryptoParty at a small hackerspace in Honolulu, Hawaii on December 11, six months before becoming well known for leaking tens of thousands of secret U.S. government documents. During the CryptoParty, Snowden taught 20 Hawaii residents how to encrypt their hard drives and use the Internet an
https://en.wikipedia.org/wiki/Glycine%20transporter
Glycine transporters (GlyTs) are plasmalemmal neurotransmitter transporters. They serve to terminate the signaling of glycine by mediating its reuptake from the synaptic cleft back into the presynaptic neurons. There are two glycine transporters: glycine transporter 1 (GlyT1) and glycine transporter 2 (GlyT2). See also Excitatory amino acid transporter GABA transporter Glycine receptor Glycine reuptake inhibitor
https://en.wikipedia.org/wiki/Lennart%20Ljung%20%28engineer%29
Lennart Ljung is a Swedish professor in the Chair of Control Theory at Linköping University since 1976. He is known for his pioneering research in system identification, and is regarded as a leading researcher in control theory. Education Lennart Ljung received the B.A. in Russian Language and Mathematics 1967, the M.Sc. (first degree) in Engineering Physics 1970, and the Ph.D. in Control Theory 1974, all from Lund University. Background, Present Appointments and Scientific Contributions Following a position as a research associate at Stanford University 1974–1975 and a position as an associate professor (docent) in control theory at Lund University in 1975–1976, Lennart Ljung was elected as a professor in the chair of control theory at Linköping University in 1976. He was a visiting researcher at Stanford University 1980–1981 and at Massachusetts Institute of Technology 1985–1986. He was also Department Head of the Department of Electrical Engineering, Linköping University between 1981 and 1990. He is currently the Director of the MOVIII Strategic Research Center at Linköping University. He has made extensive contributions to control theory, particularly in the area of system identification. He has authored 10 books, over 150 international journal articles, over 200 international conference papers, and a widely used commercial software package for MATLAB called the System Identification Toolbox. Lennart Ljung is ranked 84 in the world (1st within Sweden) in the field of Engineering and Technology. Notable honors and awards IEEE Fellow (1985) Member of the Royal Swedish Academy of Engineering Sciences (1985) Member of the Swedish Academy of Sciences (1995) Doctor honoris causa Baltic State Technical University (1996) Doctor honoris causa Uppsala University (1998) Honorary Member of the Hungarian Academy of Engineering Sciences (2001) Giorgio Quazza Medal (2002) Hendrik W. Bode Lecture Prize, awarded by IEEE Control Systems Society (2003) Foreign A
https://en.wikipedia.org/wiki/Induced%20radioactivity
Induced radioactivity, also called artificial radioactivity or man-made radioactivity, is the process of using radiation to make a previously stable material radioactive. The husband-and-wife team of Irène Joliot-Curie and Frédéric Joliot-Curie discovered induced radioactivity in 1934, and they shared the 1935 Nobel Prize in Chemistry for this discovery. Irène Curie began her research with her parents, Marie Curie and Pierre Curie, studying the natural radioactivity found in radioactive isotopes. Irène branched off from the Curies to study turning stable isotopes into radioactive isotopes by bombarding the stable material with alpha particles (denoted α). The Joliot-Curies showed that when lighter elements, such as boron and aluminium, were bombarded with α-particles, the lighter elements continued to emit radiation even after the α-source was removed. They showed that this radiation consisted of particles carrying one unit positive charge with mass equal to that of an electron, now known as a positron. Neutron activation is the main form of induced radioactivity. It occurs when an atomic nucleus captures one or more free neutrons. This new, heavier isotope may be either stable or unstable (radioactive), depending on the chemical element involved. Because neutrons disintegrate within minutes outside of an atomic nucleus, free neutrons can be obtained only from nuclear decay, nuclear reaction, and high-energy interaction, such as cosmic radiation or particle accelerator emissions. Neutrons that have been slowed through a neutron moderator (thermal neutrons) are more likely to be captured by nuclei than fast neutrons. A less common form of induced radioactivity results from removing a neutron by photodisintegration. In this reaction, a high energy photon (a gamma ray) strikes a nucleus with an energy greater than the binding energy of the nucleus, which releases a neutron. This reaction has a minimum cutoff of 2 MeV (for deuterium) and around 10 MeV for most heav
https://en.wikipedia.org/wiki/ZeroPC
ZeroPC was a commercial webtop developed by ZeroDesktop, Inc. located in San Mateo, California. ZeroPC has been called a personal cloud OS. It mimicked the look, feel and functionality of the desktop environment of a real operating system. The software was launched in September 2011 at the Disrupt SF 2011 event and was later selected as a finalist at SXSW 2012 in the Innovative Web Technology category. ZeroPC was web-based and required a Java applet to operate the bundled productivity tool Thinkfree. The web applications found on ZeroPC were built on Java in the back end. Features included drag-and-drop functionality, a cloud dashboard and personal cloud storage meta services. ZeroPC belonged to a category of services that intended to turn the Web into a full-fledged platform by using Web services as a foundation along with presentation technologies that replicated the experience of desktop applications for users. ZeroPC aggregated content so users could easily access, transfer and share whatever content they wanted, using any internet browser from anywhere, anytime and from any device. Its meta-cloud layer supported Dropbox, Box, SugarSync, OneDrive, 4Shared, Google Drive, Evernote, Picasa, Flickr, Instagram, Facebook, Twitter, and Photobucket. The ZeroPC Cloud OS platform also provided extensive APIs for iOS and Android app developers. Some of the features found on ZeroPC were: file sharing, webmail, Cloud Content Navigator, instant messenger, Sticky Note, an audio/video player and office productivity applications. The ZeroPC 2.0 platform ran on AWS for free and paid users. The platform was licensable to telcos and ISVs for commercial purposes. Its clients included SFR, SK Telecom, Hancom and others. As of June 1, 2017, ZeroPC's servers were switched off completely, and ZeroPC is no longer in service since its parent company, NComputing, had launched a Virtual Desktop Service in the cloud (AWS) to the public. Browser and Platform Compatibility The ZeroPC web desktop was compatible with Mac OS X
https://en.wikipedia.org/wiki/Fist%20and%20rose
The fist and rose, sometimes called the rose in the fist or fist with a rose, is an emblem used or formerly used by a number of socialist and social democratic parties around the world. It depicts a rose, symbolizing the promises of a better life under a socialist government, and a clenched fist holding it, symbolizing the activist commitment and solidarity necessary to achieve it. The rose is displayed in the red colour associated with left-wing politics; recent variants display the leaves in green, reflecting the rise of environmental concerns. Its design involves political symbolism drawn from the history of socialism and social democracy, while also alluding to the counterculture of the 1960s. The emblem was drawn in 1969 by the French graphic artist Marc Bonnet and became popular within the Socialist Party (PS), which made it its official logo in 1971. It was later used, with slight or large alterations and adaptations, by several parties elsewhere in Europe as well as in Africa, America, and Asia, although some have retired it since the end of the 20th century. In 1979, it was also taken up by the Socialist International (SI). It has often been chosen to provide an attractive visual alternative to the communist hammer and sickle, and to signal a party's affiliation to the SI and kinship with foreign left-wing parties. History and use France Creation and adoption The emblem was created in France within the Socialist Party (PS), at the time of its transformation from the prior SFIO at the Alfortville Congress (May 1969) and of its enlargement to the rest of the “non-communist left” at the Épinay Congress (June 1971). The emblem of the SFIO was the Three Arrows, a 1930s anti-fascist symbol, which was falling into disuse as the party wished to modernize. The Ceres, a left-leaning faction, had taken control of the PS Paris Federation and actively sought to change the party. The initiative for the emblem is frequently, although disputedly, credited to Didier Motchan
https://en.wikipedia.org/wiki/Drug%20pollution
Drug pollution or pharmaceutical pollution is pollution of the environment with pharmaceutical drugs and their metabolites, which reach the aquatic environment (groundwater, rivers, lakes, and oceans) through wastewater. Drug pollution is therefore mainly a form of water pollution. "Pharmaceutical pollution is now detected in waters throughout the world," said a scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York. "Causes include aging infrastructure, sewage overflows and agricultural runoff. Even when wastewater makes it to sewage treatment facilities, they aren't equipped to remove pharmaceuticals." Sources and effects Drug pollution arises most simply from drugs having been cleared from the body and excreted in the urine. The portion that comes from expired or unneeded drugs that are flushed unused down the toilet is smaller, but it is also important, especially in hospitals (where its magnitude is greater than in residential contexts). This includes drug molecules that are too small to be filtered out by existing water treatment plants. The process of upgrading existing plants to use advanced oxidation processes that are able to remove these molecules can be expensive. Drugs such as antidepressants have been found in the United States Great Lakes. Researchers from the University of Buffalo have found high traces of antidepressants in the brains of fish. Antidepressants have been noted to alter fish behavior, reducing risk-averse behavior and thereby reducing survival through predation. Other sources include agricultural runoff (because of antibiotic use in livestock) and pharmaceutical manufacturing. Drug pollution is implicated in the sex effects of water pollution. It is a suspected contributor (besides industrial pollution) to fish kills, amphibian dieoffs, and amphibian pathomorphology. Pollution of water systems In the early 1990s, pharmaceuticals were found to be present in the environment, which resulted in massive scientific researc
https://en.wikipedia.org/wiki/3D%20rotation%20group
In mechanics and geometry, the 3D rotation group, often denoted SO(3), is the group of all rotations about the origin of three-dimensional Euclidean space under the operation of composition. By definition, a rotation about the origin is a transformation that preserves the origin, Euclidean distance (so it is an isometry), and orientation (i.e., handedness of space). Composing two rotations results in another rotation, every rotation has a unique inverse rotation, and the identity map satisfies the definition of a rotation. Owing to the above properties (along with the associativity of composition), the set of all rotations is a group under composition. Every non-trivial rotation is determined by its axis of rotation (a line through the origin) and its angle of rotation. Rotations are not commutative (for example, rotating R 90° in the x-y plane followed by S 90° in the y-z plane is not the same as S followed by R), making the 3D rotation group a nonabelian group. Moreover, the rotation group has a natural structure as a manifold for which the group operations are smoothly differentiable, so it is in fact a Lie group. It is compact and has dimension 3. Rotations are linear transformations of $\mathbb{R}^3$ and can therefore be represented by matrices once a basis of $\mathbb{R}^3$ has been chosen. Specifically, if we choose an orthonormal basis of $\mathbb{R}^3$, every rotation is described by an orthogonal 3 × 3 matrix (i.e., a 3 × 3 matrix with real entries which, when multiplied by its transpose, results in the identity matrix) with determinant 1. The group SO(3) can therefore be identified with the group of these matrices under matrix multiplication. These matrices are known as "special orthogonal matrices", explaining the notation SO(3). The group SO(3) is used to describe the possible rotational symmetries of an object, as well as the possible orientations of an object in space. Its representations are important in physics, where they give rise to the elementary particles of integer spin. Len
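These defining properties, and the non-commutativity example above, are easy to check numerically. A minimal NumPy sketch:

import numpy as np

def rot_z(t):   # rotation by angle t in the x-y plane (about the z-axis)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(t):   # rotation by angle t in the y-z plane (about the x-axis)
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

R, S = rot_z(np.pi / 2), rot_x(np.pi / 2)

# Special orthogonal: R^T R = I and det R = 1.
assert np.allclose(R.T @ R, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
# Non-commutativity: R then S differs from S then R.
assert not np.allclose(S @ R, R @ S)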
https://en.wikipedia.org/wiki/In%20situ%20bioremediation
Bioremediation is the process of decontaminating polluted sites through the use of either endogenous or external microorganisms. In situ is a term utilized within a variety of fields meaning "on site" and refers to the location of an event. Within the context of bioremediation, in situ indicates that the bioremediation occurs at the site of contamination without the translocation of the polluted materials. Bioremediation is used to neutralize pollutants, including hydrocarbons, chlorinated compounds, nitrates, toxic metals and other pollutants, through a variety of chemical mechanisms. Microorganisms used in the process of bioremediation can either be implanted or cultivated within the site through the application of fertilizers and other nutrients. Common polluted sites targeted by bioremediation are groundwater/aquifers and polluted soils. Aquatic ecosystems affected by oil spills have also shown improvement through the application of bioremediation, the most notable cases being the Deepwater Horizon oil spill in 2010 and the Exxon Valdez oil spill in 1989. Two variations of bioremediation exist, defined by the location where the process occurs. Ex situ bioremediation occurs at a location separate from the contaminated site and involves the translocation of the contaminated material. In situ bioremediation occurs within the site of contamination. In situ bioremediation can further be categorized by the metabolism occurring, aerobic and anaerobic, and by the level of human involvement. History The Sun Oil pipeline spill in Ambler, Pennsylvania spurred the first commercial use of in situ bioremediation in 1972 to remove hydrocarbons from contaminated sites. A patent was filed in 1974 by Richard Raymond, Reclamation of Hydrocarbon Contaminated Ground Waters, which provided the basis for the commercialization of in situ bioremediation. Classifications of In situ Bioremediation Accelerated Accelerated in situ bioremediation is defined when a specified microo
https://en.wikipedia.org/wiki/Equations%20for%20a%20falling%20body
The equations for a falling body describe the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth’s gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth’s gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but is not valid for greater distances involved in calculating more distant effects, such as spacecraft trajectories. History Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water. The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.) The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures. Overview Near the surface of the Earth, the acceleration due to gravity g = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2 as "feet per second per second") approxima
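Under these assumptions the equations reduce to v = gt and d = gt^2/2 for a body falling from rest. A minimal Python sketch:

g = 9.807                                    # m/s^2 near the Earth's surface

def fall_from_rest(t):
    # Constant-g kinematics, no air resistance: v = g*t, d = g*t**2 / 2.
    return g * t, 0.5 * g * t ** 2

for t in (1.0, 2.0, 3.0):
    v, d = fall_from_rest(t)
    print(f"after {t:.0f} s: v = {v:6.2f} m/s, distance = {d:6.2f} m")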
https://en.wikipedia.org/wiki/Microwave%20oven
A microwave oven or simply microwave is an electric oven that heats and cooks food by exposing it to electromagnetic radiation in the microwave frequency range. This induces polar molecules in the food to rotate and produce thermal energy in a process known as dielectric heating. Microwave ovens heat foods quickly and efficiently because excitation is fairly uniform in the outer layer of a homogeneous, high-water-content food item. The development of the cavity magnetron in the United Kingdom made possible the production of electromagnetic waves of a small enough wavelength (microwaves). American engineer Percy Spencer is generally credited with inventing the modern microwave oven after World War II from radar technology developed during the war. Named the "Radarange", it was first sold in 1946. Raytheon later licensed its patents for a home-use microwave oven that was introduced by Tappan in 1955, but it was still too large and expensive for general home use. Sharp Corporation introduced the first microwave oven with a turntable between 1964 and 1966. The countertop microwave oven was introduced in 1967 by the Amana Corporation. After microwave ovens became affordable for residential use in the late 1970s, their use spread into commercial and residential kitchens around the world, and prices fell rapidly during the 1980s. In addition to cooking food, microwave ovens are used for heating in many industrial processes. Microwave ovens are a common kitchen appliance and are popular for reheating previously cooked foods and cooking a variety of foods. They rapidly heat foods which can easily burn or turn lumpy if cooked in conventional pans, such as hot butter, fats, chocolate or porridge. Microwave ovens usually do not directly brown or caramelize food, since they rarely attain the necessary temperature to produce Maillard reactions. Exceptions occur in cases where the oven is used to heat frying-oil and other oily items (such as bacon), which attain far higher tempera
https://en.wikipedia.org/wiki/Ricoh%202A03
The Ricoh 2A03 or RP2A03 (NTSC version) / Ricoh 2A07 or RP2A07 (PAL version) is an 8-bit microprocessor manufactured by Ricoh for the Nintendo Entertainment System video game console. It was also used as a sound chip and secondary CPU by Nintendo's arcade games Punch-Out!! and Donkey Kong 3. Technical details The Ricoh 2A03 contains a second-sourced MOS Technology 6502 core, modified to disable the 6502's binary-coded decimal mode (possibly to avoid a MOS Technology patent). It also integrates a programmable sound generator (also known as the APU, featuring twenty-two memory-mapped I/O registers), rudimentary DMA, and game controller polling. Sound hardware The Ricoh 2A03's sound hardware has 5 channels, separated into two APUs (Audio Processing Units). The first APU contains two general-purpose pulse channels with 4 duty cycles, and the second APU contains a triangle wave generator, an LFSR-based noise generator, and a 1-bit delta-modulation-encoded PCM (DPCM) channel. While a majority of the NES library uses only 4 channels, later games use the 5th DPCM channel due to cartridge memory expansions becoming cheaper. For example, Super Mario Bros. 3 uses the DPCM channel for simple drum sounds, while Journey to Silius uses it for sampled basslines. An interesting quirk of the DPCM channel is that the bit order is reversed compared to what is normally expected for 1-bit PCM. Many developers were unaware of this detail, causing samples to be distorted during playback. The output of each channel is mixed non-linearly in their respective APU before being combined. On Famicom and Dendy systems, expansion sound chips may add their own sound to the output via a pin on the game cartridge. Expansion audio capabilities were removed from international NES systems, but can be restored by modifying the expansion port located on the bottom of the system. Regional variations PAL versions of the NES (sold in Europe, Asia, and Australia) use the Ricoh 2A07 or RP2A07 processor, whic
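The noise channel's shift register is simple enough to sketch. The following Python fragment follows the register layout documented by the NESdev community (a 15-bit register; feedback is bit 0 XOR bit 1, or bit 0 XOR bit 6 in the short "mode 1"; the register shifts right and the feedback enters bit 14); treat these details as community documentation rather than an official datasheet.

def nes_noise_bits(steps, mode1=False, state=1):
    # 15-bit LFSR of the 2A03 noise channel, per NESdev documentation.
    tap = 6 if mode1 else 1
    out = []
    for _ in range(steps):
        fb = (state & 1) ^ ((state >> tap) & 1)
        state = (state >> 1) | (fb << 14)    # feedback enters the top bit
        out.append(state & 1)                # bit 0 gates the channel's output
    return out

print(nes_noise_bits(16))                    # long 32767-step pseudo-random sequence
print(nes_noise_bits(16, mode1=True))        # short 93-step, more "tonal" sequence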
https://en.wikipedia.org/wiki/Post-quantum%20cryptography
In cryptography, post-quantum cryptography (PQC) (sometimes referred to as quantum-proof, quantum-safe or quantum-resistant) refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against a cryptanalytic attack by a quantum computer. The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems could be easily solved on a sufficiently powerful quantum computer running Shor's algorithm, or even by faster and less demanding (in terms of the number of qubits required) alternatives. Even though current quantum computers lack the processing power to break any real cryptographic algorithm, many cryptographers are designing new algorithms to prepare for a time when quantum computing becomes a threat (a hypothetical future date sometimes called Q-Day). This work has gained greater attention from academics and industry through the PQCrypto conference series since 2006 and more recently by several workshops on Quantum Safe Cryptography hosted by the European Telecommunications Standards Institute (ETSI) and the Institute for Quantum Computing. The rumoured existence of widespread harvest now, decrypt later programs has also been seen as a motivation for the early introduction of post-quantum algorithms, as data recorded now may still remain sensitive many years into the future. In contrast to the threat quantum computing poses to current public-key algorithms, most current symmetric cryptographic algorithms and hash functions are considered to be relatively secure against attacks by quantum computers. While the quantum Grover's algorithm does speed up attacks against symmetric ciphers, doubling the key size can effectively block these attacks. Thus post-quantum symmetric cryptography does not need to differ significantly from current symmetric crypt
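The key-doubling argument is just arithmetic: Grover's algorithm searches an n-bit keyspace in roughly 2^(n/2) steps instead of 2^n, so doubling the key length restores the original security margin. A trivial Python sketch:

def grover_effective_bits(key_bits):
    # An ideal quantum search of a 2**n keyspace takes ~2**(n/2) Grover steps,
    # so an n-bit symmetric key offers roughly n/2 bits of quantum security.
    return key_bits / 2

for n in (128, 192, 256):
    print(f"{n}-bit key: ~{grover_effective_bits(n):.0f}-bit quantum security")
# 128-bit keys drop to ~64-bit security; moving to 256 bits restores ~128.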
https://en.wikipedia.org/wiki/Digital%20video%20effect
Digital video effects (DVEs) are visual effects that provide comprehensive live video image manipulation, in the same form as optical printer effects in film. DVEs differ from standard video switcher effects (often referred to as analog effects) such as wipes or dissolves, in that they deal primarily with resizing, distortion or movement of the image. Modern video switchers often contain internal DVE functionality. Modern DVE devices are incorporated in high-end broadcast video switchers. Early examples of DVE devices found in the broadcast post-production industry include the Ampex Digital Optics (ADO), Quantel DPE-5000, Vital Squeezoom, NEC E-Flex and Abekas A-51. By 1988, Grass Valley Group caught up with the competition with their Kaleidoscope, which integrated ADO-type effects with their widely-used line of broadcast switching gear. DVEs are used by the broadcast television industry in live television production environments like television studios and outside broadcasts. They are commonly used in video post-production. See also Character generator Digital audio workstation Quantel Mirage, a DVE advanced for its time
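The core DVE operations (resizing and repositioning the frame) amount to applying an affine transform to pixel coordinates. A minimal NumPy sketch of a classic "squeeze-back", shrinking a 1920×1080 frame to half size and parking it in the top-right quadrant; the numbers are illustrative:

import numpy as np

scale, tx, ty = 0.5, 960.0, 0.0              # half-size, pushed to the right
M = np.array([[scale, 0.0, tx],              # homogeneous 2-D affine matrix
              [0.0, scale, ty],
              [0.0,   0.0, 1.0]])

corners = np.array([[0, 0, 1], [1920, 0, 1],
                    [1920, 1080, 1], [0, 1080, 1]], dtype=float).T
print((M @ corners)[:2].T)                   # frame now fills the top-right quadrant

Hardware DVEs effectively evaluate the inverse of such a matrix per output pixel, with interpolation, at video rate.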
https://en.wikipedia.org/wiki/Map%20%28higher-order%20function%29
In many programming languages, map is the name of a higher-order function that applies a given function to each element of a collection, e.g. a list or set, returning the results in a collection of the same type. It is often called apply-to-all when considered in functional form. The concept of a map is not limited to lists: it works for sequential containers, tree-like containers, or even abstract containers such as futures and promises. Examples: mapping a list Suppose we have a list of integers [1, 2, 3, 4, 5] and would like to calculate the square of each integer. To do this, we first define a function to square a single number (shown here in Haskell): square x = x * x Afterwards we may call >>> map square [1, 2, 3, 4, 5] which yields [1, 4, 9, 16, 25], demonstrating that map has gone through the entire list and applied the function square to each element. Visual example A step-by-step view of this mapping process takes a list of integers X = [0, 5, 8, 3, 2, 1] and maps it into a new list X' by applying the given function to each element. map is provided as part of Haskell's base prelude (i.e. its "standard library") and is implemented as: map :: (a -> b) -> [a] -> [b] map _ [] = [] map f (x : xs) = f x : map f xs Generalization In Haskell, the polymorphic function map :: (a -> b) -> [a] -> [b] is generalized to a polytypic function fmap :: Functor f => (a -> b) -> f a -> f b, which applies to any type belonging to the Functor type class. The type constructor of lists [] can be defined as an instance of the Functor type class using the map function from the previous example: instance Functor [] where fmap = map Other examples of Functor instances include trees: -- a simple binary tree data Tree a = Leaf a | Fork (Tree a) (Tree a) instance Functor Tree where fmap f (Leaf x) = Leaf (f x) fmap f (Fork l r) = Fork (fmap f l) (fmap f r) Mapping over a tree yields: >>> fmap square (Fork (Fork (Leaf 1) (Leaf 2)) (Fork (Leaf 3)
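For comparison with the Haskell definitions above, the same apply-to-all idea in Python, whose built-in map returns a lazy iterator:

def square(x):
    return x * x

print(list(map(square, [1, 2, 3, 4, 5])))    # [1, 4, 9, 16, 25]
print([square(x) for x in [1, 2, 3, 4, 5]])  # equivalent list comprehension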
https://en.wikipedia.org/wiki/SLIP%20%28programming%20language%29
SLIP is a list processing computer programming language, invented by Joseph Weizenbaum in the 1960s. The name SLIP stands for Symmetric LIst Processor. It was first implemented as an extension to the Fortran programming language, and later embedded into MAD and ALGOL. The best known program written in the language is ELIZA, an early natural language processing computer program created by Weizenbaum at the MIT Artificial Intelligence Laboratory. General overview In a nutshell, SLIP consisted of a set of FORTRAN "accessor" functions which operated on circular doubly linked lists with fixed-size data fields. The "accessor" functions had direct and indirect addressing variants. List representation The list representation had four types of cell: a reader, a header, a sublist indicator, and a payload cell. The header included a reference count field for garbage collection purposes. The sublist indicator allowed it to be able to represent nested lists, such as (A, B, C, (1, 2, 3), D, E, F) where (1, 2, 3) is a sublist indicated by a cell in the '*' position in the list (A, B, C, *, D, E, F). The reader was essentially a state history stack—a good example of a memento pattern—where each cell pointed to the header of the list being read, the current position within the list being read, and the level or depth of the history stack.
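A minimal Python sketch of the structure just described: a circular doubly linked list with a reference-counted header cell, payload cells, and sublist-indicator cells pointing at the headers of nested lists. All names here are illustrative, not SLIP's actual identifiers.

class Cell:
    # One SLIP-style cell; kind is "HEADER", "PAYLOAD" or "SUBLIST".
    def __init__(self, kind, datum=None):
        self.kind, self.datum = kind, datum
        self.prev = self.next = self          # a lone cell is a circular list of one
        self.refcount = 0 if kind == "HEADER" else None

def make_list(*items):
    head = Cell("HEADER")
    for it in items:
        is_sub = isinstance(it, Cell)         # a nested list is passed as its header
        cell = Cell("SUBLIST" if is_sub else "PAYLOAD", it)
        if is_sub:
            it.refcount += 1                  # headers are reference-counted for GC
        cell.prev, cell.next = head.prev, head    # splice in at the tail
        head.prev.next = cell
        head.prev = cell
    return head

def elements(head):
    out, cell = [], head.next
    while cell is not head:                   # walk the ring back around to the header
        out.append(elements(cell.datum) if cell.kind == "SUBLIST" else cell.datum)
        cell = cell.next
    return out

inner = make_list(1, 2, 3)
outer = make_list("A", "B", "C", inner, "D", "E", "F")
print(elements(outer))                        # ['A', 'B', 'C', [1, 2, 3], 'D', 'E', 'F']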
https://en.wikipedia.org/wiki/Polynomial%20interpolation
In numerical analysis, polynomial interpolation is the interpolation of a given bivariate data set by the polynomial of lowest possible degree that passes through the points of the dataset. Given a set of data points $(x_0, y_0), \ldots, (x_n, y_n)$, with no two $x_j$ the same, a polynomial function $p(x)$ is said to interpolate the data if $p(x_j) = y_j$ for each $j \in \{0, 1, \ldots, n\}$. There is always a unique such polynomial of degree at most n, commonly given by two explicit formulas, the Lagrange polynomials and Newton polynomials. Applications The original use of interpolation polynomials was to approximate values of important transcendental functions such as natural logarithm and trigonometric functions. Starting with a few accurately computed data points, the corresponding interpolation polynomial will approximate the function at an arbitrary nearby point. Polynomial interpolation also forms the basis for algorithms in numerical quadrature (Simpson's rule) and numerical ordinary differential equations (multigrid methods). In computer graphics, polynomials can be used to approximate complicated plane curves given a few specified points, for example the shapes of letters in typography. This is usually done with Bézier curves, which are a simple generalization of interpolation polynomials (having specified tangents as well as specified points). In numerical analysis, polynomial interpolation is essential to perform sub-quadratic multiplication and squaring, such as Karatsuba multiplication and Toom–Cook multiplication, where interpolation through points on a product polynomial yields the specific product required. For example, given $a = f(x) = a_0x^0 + a_1x^1 + \cdots$ and $b = g(x) = b_0x^0 + b_1x^1 + \cdots$, the product ab is a specific value of $W(x) = f(x)g(x)$. One may easily find points along W(x) at small values of x, and interpolation based on those points will yield the terms of W(x) and the specific product ab. As formulated in Karatsuba multiplication, this technique is substantially faster than quadratic multiplication, even for modest-sized inputs, especi
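A minimal Python sketch of the Lagrange form; since the interpolant of degree at most n through n+1 points is unique, feeding it samples of a cubic reproduces the cubic exactly:

def lagrange(xs, ys):
    # Lagrange form of the unique interpolating polynomial through
    # (xs[j], ys[j]): p(t) = sum_j ys[j] * prod_{m != j} (t - xs[m]) / (xs[j] - xs[m]).
    def p(t):
        total = 0.0
        for j, (xj, yj) in enumerate(zip(xs, ys)):
            basis = 1.0
            for m, xm in enumerate(xs):
                if m != j:
                    basis *= (t - xm) / (xj - xm)
            total += yj * basis
        return total
    return p

p = lagrange([0, 1, 2, 3], [0, 1, 8, 27])    # samples of f(x) = x**3
print(p(1.5))                                # 3.375, i.e. 1.5**3 exactly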
https://en.wikipedia.org/wiki/Strengthen%20the%20Arm%20of%20Liberty
Strengthen the Arm of Liberty is the theme of the Boy Scouts of America's fortieth anniversary celebration in 1950. The campaign was inaugurated in February with a dramatic ceremony held at the base of the Statue of Liberty (Liberty Enlightening the World). Approximately 200 BSA Statue of Liberty replicas were installed across the United States. Replicas As part of the Strengthening the Arm of Liberty campaign to commemorate the 40th anniversary of the Boy Scouts of America (BSA), hundreds of scale replicas of the Statue of Liberty have been created nationwide. The Statue of Liberty, by French sculptor Frédéric Auguste Bartholdi, bears the classical appearance of the Roman stola, sandals, and facial expression which are derived from Libertas, ancient Rome's goddess of freedom from slavery, oppression, and tyranny. Her raised right foot is on the move. This symbol of Liberty and Freedom is not standing still or at attention in the harbor, but moving forward, as her left foot tramples broken shackles at her feet, in symbolism of the United States's wish to be free from oppression and tyranny. Manufacture Between 1949 and 1952, approximately two hundred replicas of the statue, made of stamped copper, were purchased by Boy Scout troops and donated in 39 states in the U.S. and several of its possessions and territories. The project was the brainchild of Kansas City businessman J.P. Whitaker, who was then Scout Commissioner of the Kansas City Area Council. The copper statues were manufactured by Friedley-Voshardt Co. (Chicago, Illinois) and purchased through the Kansas City Boy Scout office. The statues are constructed of sheet copper and were originally sold at cost plus freight. The mass-produced statues are not meticulously accurate and a conservator noted that "her face isn't as mature as the real Liberty. It's rounder and more like a little girl's." Present Many of these statues have been lost or destroyed, but preservationis
https://en.wikipedia.org/wiki/Index%20of%20physics%20articles%20%28A%29
The index of physics articles is split into multiple pages due to its size. To navigate by individual letter use the table of contents below. A A. Baha Balantekin A. Carl Helmholz A. Catrina Bryce A. E. Becquerel A. G. Doroshkevich A. I. Shlyakhter A. K. Jonscher A. P. Balachandran A15 phases AAS 215th meeting ABINIT ACE (CERN) ACS Nano AD1 experiment AD2 experiment AD3 experiment AD4 experiment AD5 experiment AD6 experiment ADA collider ADHM construction ADITYA (tokamak) ADM energy ADM formalism ADONE Advanced Simulation Library AEGIS (particle physics) AIDA (computing) AILU AIP Conference Proceedings AKLT Model ALBA (synchrotron) ALEPH experiment ALICE (accelerator) ALPHA Collaboration AMBER AMOLF ANNNI model ANTARES (accelerator) ANTARES (telescope) APEXC ARC-ECRIS ARGUS (experiment) ARGUS distribution ARROW waveguide ASACUSA ASA Gold Medal ASA Silver Medal ASDEX Upgrade ASTM Subcommittee E20.02 on Radiation Thermometry ASTRA (reactor) ASTRID 2 ATHENA ATLAS Collaboration ATLAS experiment ATOMKI ATRAP AUSM A Brief History of Time A Briefer History of Time (Hawking and Mlodinow book) A Different Universe A Dynamical Theory of the Electromagnetic Field A Large Ion Collider Experiment A New Theory of Magnetic Storms A Treatise on Electricity and Magnetism A Universe from Nothing A dynamical theory of the electromagnetic field Aage Bohr Aaldert Wapstra Aaron Klug Aaron Lemonick Abbe prism Abbe sine condition Abdel-Moniem El-Ganayni Abdul Qadeer Khan Abdul Rasul (Iraqi scientist) Abdullah Sadiq Abdus Salam Abeles matrix formalism Abell 520 Aberdeen Tunnel Underground Laboratory Aberration of light Abhay Ashtekar Ablation Abner Shimony Abol-fath Khazeni About Time (book) Above threshold ionization Abraham (Avi) Loeb Abraham Alikhanov Abraham Bennet Abraham Esau Abraham Haskel Taub Abraham Katzir Abraham Pais Abraham Zelmanov Abraham–Lorentz force Abraham–Lorentz–Dirac force Abraham–Minkowski controversy Abram Ioffe Abrikosov vortex Absolute angular momentum Absolute
https://en.wikipedia.org/wiki/Knudsen%20cell
In crystal growth, a Knudsen cell is an effusion evaporator source for relatively low partial pressure elementary sources (e.g. Ga, Al, Hg, As). Because it is easy to control the temperature of the evaporating material in Knudsen cells, they are commonly used in molecular-beam epitaxy. Development The Knudsen effusion cell was developed by Martin Knudsen (1871–1949). A typical Knudsen cell contains a crucible (made of pyrolytic boron nitride, quartz, tungsten or graphite), heating filaments (often made of metal tantalum), water cooling system, heat shields, and an orifice shutter. Vapor pressure measurement The Knudsen cell is used to measure the vapor pressures of a solid with very low vapor pressure. Such a solid forms a vapor at low pressure by sublimation. The vapor slowly effuses through the pinhole, and the loss of mass is proportional to the vapor pressure and can be used to determine this pressure. The heat of sublimation can also be determined by measuring the vapor pressure as a function of temperature, using the Clausius–Clapeyron relation.
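The working relation is p = (Δm/Δt)/A · sqrt(2πRT/M), obtained by inverting the effusion mass flux J = p·sqrt(M/(2πRT)). A minimal Python sketch with made-up illustrative numbers:

from math import pi, sqrt

R = 8.314                                    # gas constant, J/(mol*K)

def knudsen_pressure(dm_dt, area, T, molar_mass):
    # p = (dm/dt / A) * sqrt(2*pi*R*T / M), with dm/dt in kg/s, A in m^2,
    # T in K and M in kg/mol; follows from the effusion flux relation above.
    return (dm_dt / area) * sqrt(2 * pi * R * T / molar_mass)

# Illustrative only: 0.1 mg lost per hour through a 1 mm^2 orifice at 500 K,
# for a substance with molar mass 0.200 kg/mol.
print(knudsen_pressure(0.1e-6 / 3600, 1e-6, 500.0, 0.200))   # ~0.01 Pa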