source (string, lengths 31–227)
text (string, lengths 9–2k)
https://en.wikipedia.org/wiki/304%20%28number%29
304 is the natural number following 303 and preceding 305. In mathematics, 304 is an even composite number with two distinct prime factors. 304 is the sum of consecutive primes in two different ways: it is the sum of 41+43+47+53+59+61 and of 23+29+31+37+41+43+47+53. 304 is a primitive semiperfect number, meaning that it is a semiperfect number that is not divisible by any other semiperfect number. 304 is an untouchable number, meaning that it is not equal to the sum of any number's proper divisors. 304 is a nontotient number, meaning that it is an even number that phi(x) cannot equal for any x. World records In 2021, Tom Brady completed the 304th touchdown of his career at Gillette Stadium, the most by one quarterback in a single stadium. On October 20, 2018, the Guinness World Record for the greatest number of participants in a Beetle drive game was achieved, with 304 participants. On August 25, 2014, David Chapple attended the most performances at the Edinburgh Fringe Festival: 304 shows, with David attending around 11 shows each day. On December 31, 2021, the record for the greatest number of people passing a squash ball over a video chain was achieved by SquashSmarts. Other fields The calendar years 304 AD and 304 BC. 304 is the number of several highways across Brazil, Canada, China, Costa Rica, Hungary, Japan, Thailand and the United States. 304 Olga is an asteroid in the asteroid belt, discovered by Johann Palisa in Vienna.
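The consecutive-prime-sum claim above can be checked mechanically; a minimal sketch (function names are ours):

```python
# Verify that 304 is a sum of consecutive primes in exactly two ways.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def consecutive_prime_sums(target):
    """All runs of consecutive primes summing to `target`."""
    ps = primes_up_to(target)
    runs = []
    for i in range(len(ps)):
        total = 0
        for j in range(i, len(ps)):
            total += ps[j]
            if total == target:
                runs.append(ps[i : j + 1])
            if total >= target:
                break
    return runs

print(consecutive_prime_sums(304))
# → [[23, 29, 31, 37, 41, 43, 47, 53], [41, 43, 47, 53, 59, 61]]
```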
https://en.wikipedia.org/wiki/The%20Research%20and%20Analysis%20Center
The Research and Analysis Center (TRAC), formerly the TRADOC Analysis Center, is an analysis agency of the United States Army. TRAC conducts research on potential military operations worldwide to inform decisions about the most challenging issues facing the Army and the Department of Defense (DoD). TRAC relies upon the intellectual capital of a highly skilled workforce of military and civilian personnel to execute its mission. TRAC conducts operations research (OR) on a wide range of military topics, some contemporary but most often set five to 15 years in the future. TRAC directly supports the mission of the Army Futures Command (AFC) to develop future concepts and requirements, while also serving the decision needs of many military clients. Mission statement To produce relevant and credible operations analysis to inform decisions. Organization TRAC is led by a civilian SES director, subordinate to the Commanding General of the US Army Futures Command. It comprises four centers: TRAC-Fort Leavenworth (TRAC-FLVN), led by a civilian SES director, is co-located with TRAC headquarters at Fort Leavenworth, Kansas and has traditionally conducted analysis at the operational (corps and division) level. TRAC-White Sands Missile Range (TRAC-WSMR), led by a civilian SES director, is located at White Sands Missile Range in New Mexico and has traditionally conducted analysis at the tactical (brigade and below) level. TRAC-LEE, led by a lieutenant colonel, is co-located with the Combined Arms Support Command (CASCOM) at Fort Lee, VA and has traditionally conducted analysis in the area of sustainment, which includes logistics and other support functions such as medical and personnel. TRAC-Monterey, also led by a lieutenant colonel, is co-located with the Naval Postgraduate School (NPS) in Monterey, CA and has traditionally utilized the resources of NPS to conduct research into new models and methodologies. Each center director is subordinate to the TRA
https://en.wikipedia.org/wiki/Bing%27s%20recognition%20theorem
In topology, a branch of mathematics, Bing's recognition theorem, named for R. H. Bing, asserts that a necessary and sufficient condition for a 3-manifold M to be homeomorphic to the 3-sphere is that every Jordan curve in M be contained within a topological ball.
https://en.wikipedia.org/wiki/Agda%20%28programming%20language%29
Agda is a dependently typed functional programming language originally developed by Ulf Norell at Chalmers University of Technology with implementation described in his PhD thesis. The original Agda system was developed at Chalmers by Catarina Coquand in 1999. The current version, originally known as Agda 2, is a full rewrite, which should be considered a new language that shares a name and tradition. Agda is also a proof assistant based on the propositions-as-types paradigm, but unlike Coq, has no separate tactics language, and proofs are written in a functional programming style. The language has ordinary programming constructs such as data types, pattern matching, records, let expressions and modules, and a Haskell-like syntax. The system has Emacs, Atom, and VS Code interfaces but can also be run in batch mode from the command line. Agda is based on Zhaohui Luo's unified theory of dependent types (UTT), a type theory similar to Martin-Löf type theory. Agda is named after the Swedish song "Hönan Agda", written by Cornelis Vreeswijk, which is about a hen named Agda. This alludes to the name of the theorem prover Coq, which was named after Thierry Coquand, Catarina Coquand's husband. Features Inductive types The main way of defining data types in Agda is via inductive data types which are similar to algebraic data types in non-dependently typed programming languages. Here is a definition of Peano numbers in Agda: data ℕ : Set where zero : ℕ suc : ℕ → ℕ Basically, it means that there are two ways to construct a value of type ℕ, representing a natural number. To begin, zero is a natural number, and if n is a natural number, then suc n, standing for the successor of n, is a natural number too. Here is a definition of the "less than or equal" relation between two natural numbers: data _≤_ : ℕ → ℕ → Set where z≤n : {n : ℕ} → zero ≤ n s≤s : {n m : ℕ} → n ≤ m → suc n ≤ suc m The first constructor, z≤n, corresponds to the axiom that zero is less than o
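For comparison, the same two inductive definitions can be sketched in Lean 4 (our translation, not from the article; the identifiers are ours):

```lean
-- Peano naturals, mirroring Agda's `data ℕ : Set where zero : ℕ; suc : ℕ → ℕ`
inductive Nat' where
  | zero : Nat'
  | suc  : Nat' → Nat'

-- The "less than or equal" relation, mirroring Agda's `_≤_` with its
-- two constructors z≤n and s≤s
inductive Le : Nat' → Nat' → Prop where
  | z_le_n : ∀ {n : Nat'}, Le Nat'.zero n
  | s_le_s : ∀ {n m : Nat'}, Le n m → Le (Nat'.suc n) (Nat'.suc m)
```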
https://en.wikipedia.org/wiki/Hash%20oil
Hash oil or cannabis oil is an oleoresin obtained by the extraction of cannabis or hashish. It is a cannabis concentrate containing many of its resins and terpenes – in particular, tetrahydrocannabinol (THC), cannabidiol (CBD), and other cannabinoids. Hash oil is usually consumed by smoking, vaporizing or eating. Preparations of hash oil may be solid or colloidal depending on both production method and temperature and are usually identified by their appearance or characteristics. Color most commonly ranges from transparent golden or light brown, to tan or black. There are various extraction methods, most involving a solvent, such as butane or ethanol. Hash oil is an extracted cannabis product that may use any part of the plant, with minimal or no residual solvent. It is generally thought to be indistinct from traditional hashish, according to the 1961 UN Single Convention on Narcotic Drugs (Schedule I and IV), as it is "the separated resin, whether crude or purified, obtained from the cannabis plant". Hash oil may be sold in cartridges used with pen vaporizers. Cannabis retailers in California have reported about 40% of their sales are from cannabis oils. Composition The tetrahydrocannabinol (THC) content of hash oil varies tremendously, since the manufacturers use a varying assortment of marijuana plants and preparation techniques. Dealers sometimes cut hash oils with other oils. The form of the extract varies depending on the extraction process used; it may be liquid, a clear amber solid (called “shatter"), a sticky semisolid substance (called "wax"), or a brittle honeycombed solid (called "honeycomb wax"). Hash oil seized in the 1970s had a THC content ranging from 10% to 30%. The oil available on the U.S. West Coast in 1974 averaged about 15% THC. Samples seized across the United States by the Drug Enforcement Administration over an 18-year period (1980–1997) showed that THC content in hashish and hashish oil averaging 12.9% and 17.4%, respectively, did not
https://en.wikipedia.org/wiki/Comparison%20of%20topologies
In topology and related areas of mathematics, the set of all possible topologies on a given set forms a partially ordered set. This order relation can be used for comparison of the topologies. Definition A topology on a set may be defined as the collection of subsets which are considered to be "open". An alternative definition is that it is the collection of subsets which are considered "closed". These two ways of defining the topology are essentially equivalent because the complement of an open set is closed and vice versa. In the following, it doesn't matter which definition is used. Let τ1 and τ2 be two topologies on a set X such that τ1 is contained in τ2: τ1 ⊆ τ2. That is, every element of τ1 is also an element of τ2. Then the topology τ1 is said to be a coarser (weaker or smaller) topology than τ2, and τ2 is said to be a finer (stronger or larger) topology than τ1. If additionally τ1 ≠ τ2, we say τ1 is strictly coarser than τ2 and τ2 is strictly finer than τ1. The binary relation ⊆ defines a partial ordering relation on the set of all possible topologies on X. Examples The finest topology on X is the discrete topology; this topology makes all subsets open. The coarsest topology on X is the trivial topology; this topology only admits the empty set and the whole space as open sets. In function spaces and spaces of measures there are often a number of possible topologies. See topologies on the set of operators on a Hilbert space for some intricate relationships. All possible polar topologies on a dual pair are finer than the weak topology and coarser than the strong topology. The complex vector space Cn may be equipped with either its usual (Euclidean) topology, or its Zariski topology. In the latter, a subset V of Cn is closed if and only if it consists of all solutions to some system of polynomial equations. Since any such V also is a closed set in the ordinary sense, but not vice versa, the Zariski topology is strictly weaker than the ordinary one. Properties
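On a finite set, the coarser/finer relation described above is literally set inclusion of the collections of open sets; a small illustrative sketch (all names are ours):

```python
# Topologies on X = {1, 2, 3} represented as sets of frozensets of open sets.

X = {1, 2, 3}

trivial  = {frozenset(), frozenset(X)}                        # coarsest topology
discrete = {frozenset(s) for s in
            ([], [1], [2], [3], [1, 2], [1, 3], [2, 3], X)}   # finest topology
middle   = {frozenset(), frozenset({1}), frozenset(X)}        # strictly between

def is_coarser(t1, t2):
    """t1 is coarser than t2 iff every t1-open set is t2-open, i.e. t1 ⊆ t2."""
    return t1 <= t2

assert is_coarser(trivial, middle) and is_coarser(middle, discrete)
assert not is_coarser(discrete, trivial)  # inclusion is a partial order, not total
```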
https://en.wikipedia.org/wiki/SharpOS
SharpOS is a discontinued computer operating system based on the .NET Framework and the related programming language C#. It was developed by a group of volunteers and presided over by a team of six project administrators: Mircea-Cristian Racasan, Bruce Markham, Johann MacDonagh, Sander van Rossen, Jae Hyun, and William Lahti. It is no longer in active development, and resources have been moved to the MOSA project. As of 2017, SharpOS is one of three C#-based operating systems released under a free and open-source software license. SharpOS has only one public version available, along with a basic command-line interface. History SharpOS began in November 2006 as a public discussion on the Mono development mailing list in a thread named Operating System in C#. After attracting many participants, Michael Schurter created the SharpOS.org wiki and mailing list to continue the discussion at a more relevant location. Soon after, the core developers (Bruce Markham, William Lahti, Sander van Rossen, and Mircea-Cristian Racasan) decided that they would design their own ahead-of-time (AOT) compiler to allow the operating system to run its boot sequence without using another programming language. Once the AOT compiler was developed enough, the team began to code the kernel. This was met with long periods of inactivity and few active developers due to lack of interest in unsafe kernel programming. On 1 January 2008, the SharpOS team made their first milestone release public; it is the first version of the software to appear in the SharpOS SourceForge package repository available for general public use. Notes See also Singularity (operating system) Cosmos (operating system) External links Interview with SharpOS team sharpos-developers mailing list Operating system kernels Free software operating systems Beta software Hobbyist operating systems Microkernel-based operating systems Microkernels Discontinued operating systems
https://en.wikipedia.org/wiki/Section%20%28category%20theory%29
In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism. In other words, if f : X → Y and g : Y → X are morphisms whose composition f ∘ g is the identity morphism on Y, then g is a section of f, and f is a retraction of g. Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative). In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if f : X → Y is a split epimorphism with split monomorphism g : Y → X, then X is isomorphic to the direct sum of Y and the kernel of f. The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work. Properties A section that is also an epimorphism is an isomorphism. Dually, a retraction that is also a monomorphism is an isomorphism. Terminology The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: a map r : X → A, where A is a subspace of X, is a retraction in the topological sense if it is a retraction of the inclusion map A → X in the category-theory sense. The concept in topology was defined by Karol Borsuk in 1931. Borsuk's student, Samuel Eilenberg, was with Saunders Mac Lane the founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have been used initially. In fact, their earlier publications, up to, e.g., Mac Lane (1963)'s Homology, used the term right inverse. It was not until 1965, when Eilenberg and John Coleman Moore coined the dual term 'coretraction', that Borsuk's term was lifted to category theory in general. The term coretraction gave way to the term section by the end of the 1960s. Both use of left/right inverse and section/retraction are commonly seen in the literature: the former use
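A concrete toy instance in the category of sets, assuming nothing beyond the definitions above (the function names and the doubling/halving example are ours): doubling on the integers has halving as a left inverse, so doubling is a section (split monomorphism) and halving the corresponding retraction.

```python
# s is a section of r because r ∘ s is the identity on the integers.

def s(n: int) -> int:
    """Section s : Z → Z, n ↦ 2n (a split monomorphism)."""
    return 2 * n

def r(m: int) -> int:
    """Retraction r : Z → Z, m ↦ m // 2 (a left inverse of s)."""
    return m // 2

# r ∘ s = id, so s is a section of r and r is a retraction of s:
assert all(r(s(n)) == n for n in range(-50, 50))

# The composition the other way, s ∘ r, is NOT the identity,
# so r has a right inverse without being an isomorphism:
assert s(r(3)) != 3
```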
https://en.wikipedia.org/wiki/Giatec%20Scientific
Giatec Scientific Inc. is a Canadian company headquartered in Ottawa, Ontario. It is a developer and manufacturer of nondestructive testing quality control and condition assessment devices for the construction industry. History Giatec Scientific Inc. was co-founded by Pouria Ghods of Carleton University and Aali R. Alizadeh of the University of Ottawa in September 2010. The pair began working with advisers at Invest Ottawa, who arranged sources, funding and ideas to bring Giatec's products to the market. The company's first product was a sensor to detect corrosion speed in the rebar/steel inside concrete. Unlike other non-destructive testing methods available at the time, Giatec used mobile application software and smart technology to collect and analyze data. In 2012, Giatec became independent of Invest Ottawa. That year, after the collapse of the Algo Centre Mall in Elliot Lake, Giatec's equipment was used in the forensic structural examination that was initiated as part of the public inquiry. Giatec has developed a variety of testing devices and sensors for measurement of concrete permeability, electrical resistivity measurement of concrete, half-cell corrosion, corrosion rate, concrete temperature, and concrete maturity. Giatec won the Rio Info 2013 Innovation Award, and in 2014 the company was included in the Ottawa Business Journal's annual list of "Startups to Watch". Giatec is also a recipient of Ottawa's Top 10 Fastest Growing Companies, and of Canada's Top 500 Fastest Growing Companies in 2018. Giatec also began to develop Internet of Things (IoT) applications for the construction industry through wireless concrete temperature and maturity sensors. In March 2015, the company released a new electrical resistivity monitoring device that sends data directly to a smartphone through a downloadable application. In October 2016, Giatec released Smart Concrete, a new IoT-based solution for ready-mix concrete producer
https://en.wikipedia.org/wiki/Cognitive%20miser
In psychology, the human mind is considered to be a cognitive miser due to the tendency of humans to think and solve problems in simpler and less effortful ways rather than in more sophisticated and effortful ways, regardless of intelligence. Just as a miser seeks to avoid spending money, the human mind often seeks to avoid spending cognitive effort. The cognitive miser theory is an umbrella theory of cognition that brings together previous research on heuristics and attributional biases to explain when and why people are cognitive misers. The term cognitive miser was first introduced by Susan Fiske and Shelley Taylor in 1984. It is an important concept in social cognition theory and has been influential in other social sciences such as economics and political science. Assumption The metaphor of the cognitive miser assumes that the human mind is limited in time, knowledge, attention, and cognitive resources. Usually people do not think rationally or cautiously, but use cognitive shortcuts to make inferences and form judgments. These shortcuts include the use of schemas, scripts, stereotypes, and other simplified perceptual strategies instead of careful thinking. For example, people tend to draw correspondent inferences, believing that behaviors are correlated with or representative of stable characteristics. Background The naïve scientist and attribution theory Before Fiske and Taylor's cognitive miser theory, the predominant model of social cognition was the naïve scientist. First proposed in 1958 by Fritz Heider in The Psychology of Interpersonal Relations, this theory holds that humans think and act with dispassionate rationality whilst engaging in detailed and nuanced thought processes for both complex and routine actions. In this way, humans were thought to think like scientists, albeit naïve ones, measuring and analyzing the world around them. Applying this framework to human thought processes, naïve scientists seek the consistency and
https://en.wikipedia.org/wiki/Breadth-first%20search
Breadth-first search (BFS) is an algorithm for searching a tree data structure for a node that satisfies a given property. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. Extra memory, usually a queue, is needed to keep track of the child nodes that were encountered but not yet explored. For example, in a chess endgame, a chess engine may build the game tree from the current position by applying all possible moves and use breadth-first search to find a winning position for White. Implicit trees (such as game trees or other problem-solving trees) may be of infinite size; breadth-first search is guaranteed to find a solution node if one exists. In contrast, (plain) depth-first search, which explores the node branch as far as possible before backtracking and expanding other nodes, may get lost in an infinite branch and never make it to the solution node. Iterative deepening depth-first search avoids the latter drawback at the price of exploring the tree's top parts over and over again. On the other hand, both depth-first algorithms get along without extra memory. Breadth-first search can be generalized to both undirected graphs and directed graphs with a given start node (sometimes referred to as a 'search key'). In state space search in artificial intelligence, repeated searches of vertices are often allowed, while in theoretical analysis of algorithms based on breadth-first search, precautions are typically taken to prevent repetitions. BFS and its application in finding connected components of graphs were invented in 1945 by Konrad Zuse, in his (rejected) Ph.D. thesis on the Plankalkül programming language, but this was not published until 1972. It was reinvented in 1959 by Edward F. Moore, who used it to find the shortest path out of a maze, and later developed by C. Y. Lee into a wire routing algorithm (published in 1961). Pseudocode Input: A graph and a starting vertex of Outp
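A minimal Python sketch of the idea described above, for graphs that may contain cycles (identifiers are ours): a queue holds discovered-but-unexplored vertices, and a visited set prevents re-exploring them.

```python
from collections import deque

def bfs(graph, start):
    """Return vertices in breadth-first order from `start`.

    `graph` is an adjacency mapping: vertex -> iterable of neighbours.
    """
    visited = {start}
    queue = deque([start])     # the "extra memory" the text mentions
    order = []
    while queue:
        v = queue.popleft()    # FIFO: shallower vertices come out first
        order.append(v)
        for w in graph.get(v, ()):
            if w not in visited:   # mark on enqueue, so no vertex enters twice
                visited.add(w)
                queue.append(w)
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))         # → ['A', 'B', 'C', 'D']
```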
https://en.wikipedia.org/wiki/Sergei%20Fomin
Sergei Vasilyevich Fomin (; 9 December 1917 – 17 August 1975) was a Soviet mathematician who was co-author with Andrey Kolmogorov of Introductory Real Analysis, and co-author with Israel Gelfand of Calculus of Variations (1963), both books that are widely read in Russian and in English. Fomin entered Moscow State University at the age of 16. He published his first paper, on infinite abelian groups, at 19. After his graduation he worked with Kolmogorov. He was drafted during World War II, after which he returned to Moscow. When the war ended Fomin returned to Moscow State University and joined Andrey Tikhonov's department. In 1951 he was awarded his habilitation for a dissertation on dynamical systems with invariant measure. Two years later he was appointed a professor. Later in life, he became involved with mathematical aspects of biology. The American mathematician Paul Halmos wrote the following about Fomin: Some of the mathematical interests of Sergei Vasilovich were always close to some of mine (measure and ergodic theory); he supervised the translation of a couple of my books into Russian. We had corresponded before we met, and it was a pleasure to shake hands with a man instead of reading a letter. Three or four years later he came to visit me in Hawaii, and it was a pleasure to see him enjoy, in contrast to Moscow, the warm sunshine. Fomin died in Vladivostok.
https://en.wikipedia.org/wiki/Bioorthogonal%20chemistry
The term bioorthogonal chemistry refers to any chemical reaction that can occur inside of living systems without interfering with native biochemical processes. The term was coined by Carolyn R. Bertozzi in 2003. Since its introduction, the concept of the bioorthogonal reaction has enabled the study of biomolecules such as glycans, proteins, and lipids in real time in living systems without cellular toxicity. A number of chemical ligation strategies have been developed that fulfill the requirements of bioorthogonality, including the 1,3-dipolar cycloaddition between azides and cyclooctynes (also termed copper-free click chemistry), between nitrones and cyclooctynes, oxime/hydrazone formation from aldehydes and ketones, the tetrazine ligation, the isocyanide-based click reaction, and most recently, the quadricyclane ligation. The use of bioorthogonal chemistry typically proceeds in two steps. First, a cellular substrate is modified with a bioorthogonal functional group (chemical reporter) and introduced to the cell; substrates include metabolites, enzyme inhibitors, etc. The chemical reporter must not alter the structure of the substrate dramatically to avoid affecting its bioactivity. Secondly, a probe containing the complementary functional group is introduced to react and label the substrate. Although effective bioorthogonal reactions such as copper-free click chemistry have been developed, development of new reactions continues to generate orthogonal methods for labeling to allow multiple methods of labeling to be used in the same biosystems. Bertozzi was awarded the Nobel Prize in Chemistry in 2022 for her development of click chemistry and bioorthogonal chemistry. Etymology The word bioorthogonal comes from Greek bio- "living" and orthogōnios "right-angled". Thus literally a reaction that goes perpendicular to a living system, thus not disturbing it. Requirements for bioorthogonality To be considered bioorthogonal, a reaction must fulfill a number of require
https://en.wikipedia.org/wiki/POP%20before%20SMTP
POP before SMTP or SMTP after POP is a method of authentication used by mail server software which allows users to send e-mail from any location, as long as they can demonstrably also fetch their mail from the same place. The POP before SMTP approach has been superseded by SMTP Authentication. Technically, users are allowed to use SMTP from an IP address as long as they have previously made a successful login into the POP service at the same mail hosting provider, from the same address, within a predefined timeout period. The main advantage of this process is that it is generally transparent to the average user, who connects with an email client that almost always attempts to fetch new mail before sending new mail. The disadvantages include a potentially complex setup for the mail hosting provider (requiring some sort of communication channel between the POP service and the SMTP service) and uncertainty as to how much time users will take to connect via SMTP (to send mail) after connecting to POP. Those users not handled by this method need to resort to other authorization methods. Also, in cases where users come from externally controlled dynamically assigned addresses, the SMTP server must be careful about not giving too much leeway when allowing unauthorized connections, because of a possibility of race conditions leaving an open mail relay unintentionally exposed. See also Simple Mail Transfer Protocol SMTP AUTH, specified in Mail submission protocol, specified in Email authentication Computer access control protocols
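The mechanism described above can be sketched as follows; the function names, the shared table, and the 20-minute window are illustrative assumptions, not part of any real mail server:

```python
import time

TIMEOUT_SECONDS = 20 * 60    # the "predefined timeout period" (assumed value)
recent_pop_logins = {}       # ip -> time of last successful POP3 login

def record_pop_login(ip, now=None):
    """Called by the POP service after a successful authentication."""
    recent_pop_logins[ip] = time.time() if now is None else now

def smtp_relay_allowed(ip, now=None):
    """Called by the SMTP service before accepting mail for relay."""
    now = time.time() if now is None else now
    last = recent_pop_logins.get(ip)
    return last is not None and (now - last) <= TIMEOUT_SECONDS

record_pop_login("203.0.113.7", now=1000.0)
assert smtp_relay_allowed("203.0.113.7", now=1000.0 + 60)        # within window
assert not smtp_relay_allowed("203.0.113.7", now=1000.0 + 7200)  # window expired
assert not smtp_relay_allowed("198.51.100.9", now=1000.0)        # never logged in
```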
https://en.wikipedia.org/wiki/Crypt%20%28anatomy%29
Crypts are anatomical structures that are narrow but deep invaginations into a larger structure. One common type of anatomical crypt is the Crypts of Lieberkühn. However, it is not the only type: some types of tonsils also have crypts. Because these crypts allow external access to the deep portions of the tonsils, these tonsils are more vulnerable to infection.
https://en.wikipedia.org/wiki/Georges%20Glaeser
Georges Glaeser (8 November 1918 – 1 September 2002) was a French mathematician who was director of the IREM of Strasbourg. He worked in analysis and mathematical education and introduced Glaeser's composition theorem and Glaeser's continuity theorem. Glaeser was a Ph.D. student of Laurent Schwartz. On 3 July 1973, Glaeser filed a complaint against Vichy collaborator Paul Touvier in the Lyon Court, charging him with crimes against humanity. Glaeser accused Touvier of the 1944 massacre at Rillieux-la-Pape, in which Glaeser's father was murdered. Touvier was eventually imprisoned for life on this charge in 1994. Affiliations IAS School of Mathematics (9/1961 – 5/1962) Education University of Nancy (Class of 1957) Selected publications "Etude de quelques algebres tayloriennes" "Racine carrée d'une fonction différentiable", Annales de l'Institut Fourier 13, no. 2 (1963), 203–210 "Une introduction à la didactique expérimentale des mathématiques"
https://en.wikipedia.org/wiki/Loop%20space
In topology, a branch of mathematics, the loop space ΩX of a pointed topological space X is the space of (based) loops in X, i.e. continuous pointed maps from the pointed circle S1 to X, equipped with the compact-open topology. Two loops can be multiplied by concatenation. With this operation, the loop space is an A∞-space. That is, the multiplication is homotopy-coherently associative. The set of path components of ΩX, i.e. the set of based-homotopy equivalence classes of based loops in X, is a group, the fundamental group π1(X). The iterated loop spaces of X are formed by applying Ω a number of times. There is an analogous construction for topological spaces without basepoint. The free loop space of a topological space X is the space of maps from the circle S1 to X with the compact-open topology. The free loop space of X is often denoted by LX. As a functor, the free loop space construction is right adjoint to cartesian product with the circle, while the loop space construction is right adjoint to the reduced suspension. This adjunction accounts for much of the importance of loop spaces in stable homotopy theory. (A related phenomenon in computer science is currying, where the cartesian product is adjoint to the hom functor.) Informally this is referred to as Eckmann–Hilton duality. Eckmann–Hilton duality The loop space is dual to the suspension of the same space; this duality is sometimes called Eckmann–Hilton duality. The basic observation is that [ΣA, X] ≅ [A, ΩX], where [A, X] is the set of homotopy classes of maps A → X, ΣA is the suspension of A, and ≅ denotes the natural homeomorphism. This homeomorphism is essentially that of currying, modulo the quotients needed to convert the products to reduced products. In general, [A, X] does not have a group structure for arbitrary spaces A and X. However, it can be shown that [ΣA, X] and [A, ΩX] do have natural group structures when A and X are pointed, and the aforementioned isomorphism is of those groups. Thus, setting A = Sn−1 (the (n − 1)-sphere) gives the relationship
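The adjunction underlying this duality can be written out explicitly; a sketch in standard notation (ours), with $[A, X]_*$ denoting pointed homotopy classes:

```latex
% Suspension--loop adjunction (Eckmann--Hilton duality):
[\Sigma A, X]_* \;\cong\; [A, \Omega X]_*

% Taking A = S^{n-1} and using \Sigma S^{n-1} \simeq S^{n}:
\pi_n(X) \;=\; [S^n, X]_* \;=\; [\Sigma S^{n-1}, X]_*
        \;\cong\; [S^{n-1}, \Omega X]_* \;=\; \pi_{n-1}(\Omega X)
```

Iterating this relation is what makes iterated loop spaces useful: each application of $\Omega$ shifts the homotopy groups down by one degree.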
https://en.wikipedia.org/wiki/Eric%20Charles%20Milner
Eric Charles Milner, FRSC (May 17, 1928 – July 20, 1997) was a mathematician who worked mainly in combinatorial set theory. Biography Born into a South East London working-class family, Milner was sent to a Reading boarding school for the war but, hating it, ran away and roamed the streets of London. Eventually, another school was found for him; Milner attended King's College London starting in 1946, where he competed as a featherweight boxer. He graduated in 1949 as the best mathematics student in his year, and received a master's degree in 1950 under the supervision of Richard Rado and Charles Coulson. Partial deafness prevented him from joining the Navy, and instead, in 1951, he took a position with the Straits Trading Company in Singapore assaying tin. Soon thereafter he joined the mathematics faculty at the University of Malaya in Singapore, where Alexander Oppenheim and Richard K. Guy were already working. In 1958, Milner took a sabbatical at the University of Reading, and in 1961 he took a lectureship there and began his doctoral studies; he obtained a Ph.D. from the University of London in 1963. He joined his former Singapore colleagues Guy and Peter Lancaster as a professor at the University of Calgary in 1967, where he was head of the mathematics department from 1976 to 1980. In 1973, he became a Canadian citizen, and in 1976 he became a fellow of the Royal Society of Canada. In 1974 he was a plenary speaker at the International Congress of Mathematicians in Vancouver. In 1954, while in Singapore, Milner married Esther Stella (Estelle) Lawton, whom he had known as a London student; they had four children: Paul, Mark, Suzanne, and Simon. Estelle died of cancer in 1975, and in 1979 Milner married Elizabeth Forsyth Borthwick, with whom he had his son Robert. Research Milner's interest in set theory was sparked by visits of Paul Erdős to Singapore and by meeting András Hajnal while on sabbatical in Reading. He ge
https://en.wikipedia.org/wiki/Balmis%20Expedition
The Royal Philanthropic Vaccine Expedition (), commonly referred to as the Balmis Expedition, was a Spanish healthcare mission that lasted from 1803 to 1806, led by Dr Francisco Javier de Balmis, which vaccinated millions of inhabitants of Spanish America and Asia against smallpox. The vaccine was actually transported through children: orphaned boys who sailed with the expedition. Background Smallpox, a devastating disease that was endemic throughout much of the Old World, decimated the populations of the Americas after it was introduced by the Spanish conquistadors in the 1500s. By the late 16th century, smallpox had become endemic throughout Spain's holdings in the Americas and epidemics occurred periodically over the next 300 years. In the 18th century there were scattered attempts in the colonies to use variolation, an older, less-effective method of inoculation using smallpox material. These efforts did little to mitigate the epidemics and sometimes actually increased the spread of the contagion. In 1798, English physician Edward Jenner pioneered the use of a vaccine to immunize persons with an inoculation of cowpox material. The new vaccine was a much safer and more effective way to prevent smallpox. At the time, about 400,000 Europeans died each year from the disease which was also responsible for one-third of all cases of blindness in Europe. The use of Jenner's new vaccine spread rapidly through Europe and had a significant positive impact on the severity and frequency of smallpox epidemics. The vaccine first reached Spain in late 1800 and by the end of 1801, several thousand persons had been vaccinated across the country. The efforts were well publicised in Spain and by 1804, dozens of papers, treatises, newspaper articles, and editorials had been published on the smallpox vaccine. Francisco Javier de Balmis was a military physician. Expedition In November 1794 the daughter of King Charles IV of Spain, Infanta Maria Teresa, had died from smallpox.
https://en.wikipedia.org/wiki/Carbon%E2%80%93nitrogen%20bond
A carbon–nitrogen bond is a covalent bond between carbon and nitrogen and is one of the most abundant bonds in organic chemistry and biochemistry. Nitrogen has five valence electrons and in simple amines it is trivalent, with the two remaining electrons forming a lone pair. Through that pair, nitrogen can form an additional bond to hydrogen, making it tetravalent and positively charged in ammonium salts. Many nitrogen compounds can thus be potentially basic, but the degree of basicity depends on the configuration: the nitrogen atom in amides is not basic due to delocalization of the lone pair into a double bond, and in pyrrole the lone pair is part of an aromatic sextet. Similar to carbon–carbon bonds, these bonds can form stable double bonds, as in imines, and triple bonds, such as nitriles. Bond lengths range from 147.9 pm for simple amines, to 147.5 pm for C-N= compounds such as nitromethane, to 135.2 pm for partial double bonds in pyridine, to 115.8 pm for triple bonds as in nitriles. A C–N bond is strongly polarized towards nitrogen (the electronegativities of C and N are 2.55 and 3.04, respectively) and consequently molecular dipole moments can be high: cyanamide 4.27 D, diazomethane 1.5 D, methyl azide 2.17 D, pyridine 2.19 D. For this reason many compounds containing C–N bonds are water-soluble. N-philes are a group of radical molecules that are specifically attracted to C=N bonds. Carbon–nitrogen bonds can be analyzed by X-ray photoelectron spectroscopy (XPS). Depending on the bonding state, the peak positions differ in N1s XPS spectra. Nitrogen functional groups See also Cyanide Other carbon bonds with group 15 elements: carbon–nitrogen bonds, carbon–phosphorus bonds Other carbon bonds with period 2 elements: carbon–lithium bonds, carbon–beryllium bonds, carbon–boron bonds, carbon–carbon bonds, carbon–nitrogen bonds, carbon–oxygen bonds, carbon–fluorine bonds Carbon–hydrogen bond
https://en.wikipedia.org/wiki/Mixmath
mi×ma+h (or Mixmath) is a Canadian board game developed by Wrebbit and published in 1987. It resembles a variant of Scrabble in that tiles are placed on a crossword-style grid, with special premiums such as squares that double or triple the value of a tile and a 50-point bonus for playing all seven tiles on the player's rack in one turn. Unlike Scrabble, Mixmath uses numbered tiles to generate short equations using simple arithmetic. Wrebbit, maker of Puzz-3D jigsaw puzzles, has since been taken over by Hasbro, and it appears that Mixmath has been discontinued. Gameplay Mixmath is designed for 2 to 4 players in games lasting roughly 60 minutes. Players should be comfortable with the basic operations of addition, subtraction, multiplication, and division. Contents The gameboard is a 14×14 grid. The central four squares are orange and contain the numbers 1 (upper left), 2 (upper right), 3 (lower left), and 4 (lower right); these squares cannot have tiles played upon them, and are the basis for beginning the game. There are special blue squares scattered throughout the board which contain an arithmetic symbol (a plus, minus, multiplication, or division sign), as well as premium squares labelled as 2x (green) and 3x (red). There are 108 tiles included with the game, 2 of which are blank and can be used as replacements. Of the 106 playable tiles, there are seven each of the numbers 1 through 10, one 0, one each of the numbers 11 through 20, and one each of every number between 20 and 99 that can be represented as a multiplication of two numbers between 1 and 10 (for example, 21, 24, 25). There are also four racks that can fit seven tiles, and a pouch from which to draw tiles. Starting the game After all the tiles are placed in the pouch, each player draws a single tile. The player with the highest number goes first. Each player places their drawn tile on their rack, and draws six more, for a total of seven. The first move by the first player must use two of
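The composite-tile rule above is easy to check mechanically. A short illustrative script (not part of the game's official materials) enumerates the distinct values between 20 and 99 expressible as a product of two numbers from 1 to 10, and confirms that the stated distribution accounts for all 106 playable tiles:

```python
# Enumerate Mixmath's composite tiles: distinct values greater than 20 and at
# most 99 that are products of two numbers between 1 and 10.
composites = sorted({a * b for a in range(1, 11) for b in range(1, 11)
                     if 20 < a * b <= 99})
print(composites[:5])   # [21, 24, 25, 27, 28] -- matching the article's examples
print(len(composites))  # 25 such tiles

# Total playable tiles: 7 each of 1-10, one 0, one each of 11-20, plus composites.
total = 7 * 10 + 1 + 10 + len(composites)
print(total)            # 106, matching the stated count of playable tiles
```

The enumeration also shows why 20 itself is excluded: it is already covered by the "one each of 11 through 20" group.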
https://en.wikipedia.org/wiki/Affinity%20maturation
In immunology, affinity maturation is the process by which TFH cell-activated B cells produce antibodies with increased affinity for antigen during the course of an immune response. With repeated exposures to the same antigen, a host will produce antibodies of successively greater affinities. A secondary response can elicit antibodies with several fold greater affinity than in a primary response. Affinity maturation primarily occurs on membrane immunoglobulin of germinal center B cells and as a direct result of somatic hypermutation (SHM) and selection by TFH cells. In vivo The process is thought to involve two interrelated processes, occurring in the germinal centers of the secondary lymphoid organs: Somatic hypermutation: Mutations in the variable, antigen-binding coding sequences (known as complementarity-determining regions (CDR)) of the immunoglobulin genes. The mutation rate is up to 1,000,000 times higher than in cell lines outside the lymphoid system. Although the exact mechanism of the SHM is still not known, a major role for the activation-induced (cytidine) deaminase has been discussed. The increased mutation rate results in 1-2 mutations per CDR and, hence, per cell generation. The mutations alter the binding specificity and binding affinities of the resultant antibodies. Clonal selection: B cells that have undergone SHM must compete for limiting growth resources, including the availability of antigen and paracrine signals from TFH cells. The follicular dendritic cells (FDCs) of the germinal centers present antigen to the B cells, and the B cell progeny with the highest affinities for antigen, having gained a competitive advantage, are favored for positive selection leading to their survival. Positive selection is based on steady cross-talk between TFH cells and their cognate antigen presenting GC B cell. Because a limited number of TFH cells reside in the germinal center, only highly competitive B cells stably conjugate with TFH cells and thus r
https://en.wikipedia.org/wiki/Parataxonomy
Parataxonomy is a system of labor division for use in biodiversity research, in which the rough sorting tasks of specimen collection, field identification, documentation and preservation are conducted by primarily local, less specialized individuals, thereby alleviating the workload for the "alpha" or "master" taxonomist. Parataxonomy may be used to improve taxonomic efficiency by enabling expert taxonomists to restrict their activity to the tasks that require their specialist knowledge and skills, which has the potential to expedite the rate at which new taxa may be described and existing taxa may be sorted and discussed. Parataxonomists generally work in the field, sorting collected samples into recognizable taxonomic units (RTUs) based on easily recognized features. The process can be used alone for rapid assessment of biodiversity. Some researchers consider reliance on parataxonomist-generated data to be prone to error depending on the sample, the sorter and the group of organisms in question. Quantitative studies based on parataxonomic processes may therefore be unreliable, making their use controversial. Today, the concepts of citizen science and parataxonomy overlap somewhat, with unclear distinctions between those employed to provide supplemental services to taxonomists and those who do so voluntarily, whether for personal enrichment or the altruistic desire to make substantive scientific contributions. These terms are occasionally used interchangeably, but some taxonomists maintain that each is distinct. History of concept The term "parataxonomist" was coined by Dr. Daniel Janzen and Dr. Winnie Hallwachs in the late 1980s to describe the role of assistants working at INBio in Costa Rica. It describes a person who collects specimens for ecological studies as well as the basic information for a specimen as it is being collected in the field. Information they collect includes date, location (lat/long), collector's name
https://en.wikipedia.org/wiki/Math%20rock
Math rock is a style of alternative and indie rock with roots in bands such as King Crimson and Rush. It is characterized by complex, atypical rhythmic structures (including irregular stopping and starting), counterpoint, odd time signatures, and extended chords. It bears similarities to post-rock. Characteristics Math rock is typified by its rhythmic complexity, seen as mathematical in character by listeners and critics. While most rock music uses a 4/4 meter (however accented or syncopated), math rock makes use of more non-standard, frequently changing time signatures. As in traditional rock, the sound is most often dominated by guitars and drums. However, drums play a greater role in math rock in providing driving, complex rhythms. Math rock guitarists make use of tapping techniques and loop pedals to build on these rhythms, as illustrated by songs like those of "math rock supergroup" Battles. Lyrics are generally not the focus of math rock; the voice is treated as just another instrument in the mix. Often, vocals are not overdubbed, and are positioned less prominently, as in the recording style of Steve Albini. Many of math rock's best-known groups are entirely instrumental, such as Don Caballero or Hella. The term began as a joke but has developed into the accepted name for the musical style. One advocate of this is Matt Sweeney, singer with Chavez, a group often linked to the math rock scene. Despite this, not all critics see math rock as a serious sub-genre of rock. A significant intersection exists between math rock and emo, exemplified by bands such as Tiny Moving Parts or American Football, whose sound has been described as "twinkly, mathy rock, a sound that became one of the defining traits of the emo scene throughout the 2000s". Bands Early The albums Red and Discipline by King Crimson and Spiderland by Slint are generally considered seminal influences on the development of math rock. The Canadian punk rock group Nomeansno (founded
https://en.wikipedia.org/wiki/Hypersexuality
Hypersexuality is a term used for a presumed mental disorder causing people to engage in or think about sex to a point of distress or impairment. It is controversial whether it should be included as a clinical diagnosis used by mental healthcare professionals. Nymphomania and satyriasis were terms previously used for the condition in women and men, respectively. Hypersexuality may be a primary condition, or the symptom of another medical disease or condition; for example, Klüver–Bucy syndrome or bipolar disorder. Hypersexuality may also present as a side effect of medication such as drugs used to treat Parkinson's disease. Clinicians have yet to reach a consensus over how best to describe hypersexuality as a primary condition, or to determine the appropriateness of describing such behaviors and impulses as a separate pathology. Hypersexual behaviors are viewed variously by clinicians and therapists as a type of obsessive-compulsive disorder (OCD) or "OCD-spectrum disorder", an addiction, or a disorder of impulsivity. A number of authors do not acknowledge such a pathology and instead assert that the condition merely reflects a cultural dislike of exceptional sexual behavior. Consistent with there not being any consensus over what causes hypersexuality, authors have used many different labels to refer to it, sometimes interchangeably, but often depending on which theory they favor or which specific behavior they were studying; related or obsolete terms include compulsive masturbation, compulsive sexual behavior, cybersex addiction, erotomania, "excessive sexual drive", hyperphilia, hypersexuality, hypersexual disorder, problematic hypersexuality, sexual addiction, sexual compulsivity, sexual dependency, sexual impulsivity, "out of control sexual behavior", and paraphilia-related disorder. Causes There is little consensus among experts as to the causes of hypersexuality. Some research suggests that some cases can be linked to biochemical or physiological changes t
https://en.wikipedia.org/wiki/Posthumous%20sperm%20retrieval
Posthumous sperm retrieval (PSR) is a procedure in which spermatozoa are collected from the testes of a human corpse after brain death. There has been significant debate over the ethics and legality of the procedure, and on the legal rights of the child and surviving parent if the gametes are used for impregnation. Cases of post-mortem conception have occurred ever since human artificial insemination techniques were devised, via sperm donation to a sperm bank after the death of the donor. While religious objections have been made even under these circumstances, far more censure has arisen regarding invasive retrieval from fresh cadavers or patients either on life support or in a persistent vegetative state, particularly when the procedure is carried out without explicit consent from the donor. Cases The first successful retrieval of sperm from a cadaver was reported in 1980, in a case involving a 30-year-old man who became brain dead following a motor vehicle accident and whose family requested sperm preservation. The first successful conception using sperm retrieved post-mortem was reported in 1998, leading to a successful birth the following year. Since 1980, a number of requests for the procedure have been made, with around one third approved and performed. Gametes have been extracted through a variety of means, including removal of the epididymis, irrigation or aspiration of the vas deferens, and rectal probe electroejaculation. Since the procedure is rarely performed, studies on the efficacy of the various methods have been fairly limited in scope. While medical literature recommends that extraction take place no later than 24 hours after death, motile sperm has been successfully obtained as late as 36 hours after death, generally regardless of the cause of death or method of extraction. Up to this limit, the procedure has a high success rate, with sperm retrieved in nearly 100% of cases, and motile sperm in 80–90%. There is currently little precedent for
https://en.wikipedia.org/wiki/BrightSide%20Technologies
BrightSide Technologies Inc. (formerly Sunnybrook Technologies) was a firm spun out from the Structured Surface Physics Laboratory of the University of British Columbia, developing and commercializing electronic display technologies, specifically high-brightness display technology called HDR. The privately held company also developed technology for capturing, processing, and storage of HDR images. BrightSide's headquarters were in Vancouver, British Columbia, Canada. It was acquired by Dolby Laboratories, Inc. in April 2007 for US$28 million. The chief executive officer of BrightSide Technologies Inc. was Richard MacKellar. The chief technology officer was Helge Seetzen, who joined Dolby as director of HDR Technology (2008 to 2010) before going on to found TandemLaunch, a Montreal-based technology incubator. The main electronic display technology developed by BrightSide was based on IMLED-LCD, which consisted of an LCD with an array of individually modulated LED semiconductors as the backlight, instead of cold cathode fluorescent lamps (CCFL) that diffuse light in a layer of plastic. Each LED has 256 brightness steps, where step 0 switches the LED off and step 255 switches it to maximum luminance. As a result, the device can display true black and very bright white, with a contrast ratio technically of infinity, where minimal luminance is 0 cd/m2 (the denominator) and maximal luminance is almost 4,000 cd/m2. To address the confusion that may accompany a display with a quoted contrast ratio of infinity, BrightSide calculates its quoted contrast ratio using the next-darkest level available on the display, arriving at a contrast ratio of 200,000:1. BrightSide produced a prototype display to showcase their technology: the DR37-P. Targeted industries included the medical field, CAD, film post-production, geophysical data and satellite imaging. On April 25, 2007 Dolby Laboratories, Inc. announced that it completed the acquisition of Brights
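The contrast-ratio arithmetic described above can be sketched as follows. Note that the 0.02 cd/m² "next-darkest" level used here is inferred by working backwards from the quoted 200,000:1 figure and the roughly 4,000 cd/m² peak; it is not a published specification:

```python
max_luminance = 4000.0  # cd/m^2, approximate peak luminance stated for the display
true_black = 0.0        # cd/m^2, with the local LEDs switched fully off

# With a true-black minimum, the conventional max/min ratio is unbounded:
contrast = float('inf') if true_black == 0 else max_luminance / true_black
print(contrast)  # inf

# BrightSide's quoted figure instead divides by the next-darkest displayable
# level. The 0.02 cd/m^2 value below is inferred from the 200,000:1 figure,
# not taken from a published specification.
next_darkest = max_luminance / 200_000   # = 0.02 cd/m^2
quoted_ratio = max_luminance / next_darkest
print(f"{quoted_ratio:,.0f}:1")  # 200,000:1
```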
https://en.wikipedia.org/wiki/Kentaro%20Yano%20%28mathematician%29
Kentaro Yano (1 March 1912 in Tokyo, Japan – 25 December 1993) was a mathematician working on differential geometry who introduced the Bochner–Yano theorem. He also published a classical book about geometric objects (i.e., sections of natural fiber bundles) and Lie derivatives of these objects. Publications Les espaces à connexion projective et la géométrie projective des paths, Iasi, 1938 Geometry of Structural Forms (Japanese), 1947 Groups of Transformations in Generalized Spaces, Tokyo, Akademeia Press, 1949 with Salomon Bochner: Curvature and Betti Numbers, Princeton University Press, Annals of Mathematical Studies, 1953 2020 reprint Differential geometry on complex and almost complex spaces, Macmillan, New York 1965 Integral formulas in Riemannian Geometry, Marcel Dekker, New York 1970 with Shigeru Ishihara: Tangent and cotangent bundles: differential geometry, New York, M. Dekker 1973 with Masahiro Kon: Anti-invariant submanifolds, Marcel Dekker, New York 1976 Morio Obata (ed.): Selected papers of Kentaro Yano, North Holland 1982 with Masahiro Kon: CR Submanifolds of Kählerian and Sasakian Manifolds, Birkhäuser 1983 2012 reprint with Masahiro Kon: Structures on Manifolds, World Scientific 1984
https://en.wikipedia.org/wiki/Attacking%20Faulty%20Reasoning
Attacking Faulty Reasoning is a textbook on logical fallacies by T. Edward Damer that has been used for many years in a number of college courses on logic, critical thinking, argumentation, and philosophy. It explains 60 of the most commonly committed fallacies. Each of the fallacies is concisely defined and illustrated with several relevant examples. For each fallacy, the text gives suggestions about how to address or to "attack" the fallacy when it is encountered. The organization of the fallacies comes from the author’s own fallacy theory, which defines a fallacy as a violation of one of the five criteria of a good argument: the argument must be structurally well-formed; the premises must be relevant; the premises must be acceptable; the premises must be sufficient in number, weight, and kind; there must be an effective rebuttal of challenges to the argument. Each fallacy falls into at least one of Damer's five fallacy categories, which derive from the above criteria. The five fallacy categories Fallacies that violate the structural criterion. The structural criterion requires that one who argues for or against a position should use an argument that meets the fundamental structural requirements of a well-formed argument, using premises that are compatible with one another, that do not contradict the conclusion, that do not assume the truth of the conclusion, and that are not involved in any faulty deductive inference. Fallacies such as begging the question, denying the antecedent, or undistributed middle violate this criterion. Fallacies that violate the relevance criterion. The relevance criterion requires that one who presents an argument for or against a position should attempt to set forth only reasons that are directly related to the merit of the position at issue. Fallacies such as appeal to tradition, appeal to force, or genetic fallacy fail to meet the argumentative demands of relevance. Fallacies that violate the acceptability criterion. The a
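One of the structural fallacies mentioned above, denying the antecedent (from "P implies Q" and "not P", conclude "not Q"), can be shown invalid by exhaustive truth-table search. The following sketch is illustrative and is not drawn from the book:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

# Denying the antecedent: premises (P -> Q) and (not P); conclusion (not Q).
# The argument form is invalid if some assignment makes every premise true
# while the conclusion is false, i.e. Q is true.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and not p and q]
print(counterexamples)  # [(False, True)]: P false, Q true refutes the inference
```

A single counterexample row suffices, which is exactly why such fallacies violate the structural criterion regardless of subject matter.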
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Mazur%20swindle
In mathematics, the Eilenberg–Mazur swindle, named after Samuel Eilenberg and Barry Mazur, is a method of proof that involves paradoxical properties of infinite sums. In geometric topology it was introduced by Barry Mazur and is often called the Mazur swindle. In algebra it was introduced by Samuel Eilenberg and is known as the Eilenberg swindle or Eilenberg telescope (see telescoping sum). The Eilenberg–Mazur swindle is similar to the following well-known joke "proof" that 1 = 0: 1 = 1 + (−1 + 1) + (−1 + 1) + ... = 1 − 1 + 1 − 1 + ... = (1 − 1) + (1 − 1) + ... = 0 This "proof" is not valid as a claim about real numbers because Grandi's series 1 − 1 + 1 − 1 + ... does not converge, but the analogous argument can be used in some contexts where there is some sort of "addition" defined on some objects for which infinite sums do make sense, to show that if A + B = 0 then A = B = 0. Mazur swindle In geometric topology the addition used in the swindle is usually the connected sum of knots or manifolds. Example: A typical application of the Mazur swindle in geometric topology is the proof that the sum of two non-trivial knots A and B is non-trivial. For knots it is possible to take infinite sums by making the knots smaller and smaller, so if A + B is trivial then A is trivial (and B by a similar argument). The infinite sum of knots is usually a wild knot, not a tame knot. Example: The oriented n-manifolds have an addition operation given by connected sum, with 0 the n-sphere. If A + B is the n-sphere, then A + B + A + B + ... is Euclidean space, so the Mazur swindle shows that the connected sum of A and Euclidean space is Euclidean space, which shows that A is the 1-point compactification of Euclidean space and therefore A is homeomorphic to the n-sphere. (This does not show in the case of smooth manifolds that A is diffeomorphic to the n-sphere, and in some dimensions, such as 7, there are examples of exotic spheres A with inverses
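The regrouping at the heart of the swindle can be written out explicitly. Assuming an "addition" for which infinite sums make sense and associativity-style regrouping is valid, and supposing A + B = 0:

```latex
\begin{align*}
A &= A + (B + A) + (B + A) + \cdots \\
  &= (A + B) + (A + B) + \cdots \\
  &= 0 + 0 + \cdots = 0,
\end{align*}
```

and the symmetric regrouping gives B = 0 as well. The joke "proof" that 1 = 0 fails at exactly the first step: for real numbers the infinite sum 1 + (−1 + 1) + (−1 + 1) + ... is not defined, since Grandi's series does not converge.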
https://en.wikipedia.org/wiki/Icilin
Icilin (AG-3-5) is a synthetic super-agonist of the transient receptor potential M8 (TRPM8) ion channel. Although structurally not related to menthol, it produces an extreme sensation of cold, both in humans and animals. It is almost 200 times more potent than menthol, and 2.5 times more efficacious. Despite their similar effects, icilin activates the TRPM8 receptor in a different way than menthol does. Icilin has been shown to be effective in the treatment of pruritus in an experimental model of itch. It is now used as a research tool for the study of TRP channels, although despite its high potency it is less selective for TRPM8 over other related ion channels than are other compounds such as WS-12.
https://en.wikipedia.org/wiki/Biothermia
Biothermia is the process of heating living tissue using non-ionizing radiation. Sources can include magnetic (inductive), electromagnetic (radiowaves), or conductive (organic materials). See also Bioelectrogenesis Bioheat transfer Electroreception Biomedical engineering Heat transfer
https://en.wikipedia.org/wiki/Mutilated%20chessboard%20problem
The mutilated chessboard problem is a tiling puzzle posed by Max Black in 1946 that asks: Suppose a standard 8×8 chessboard (or checkerboard) has two diagonally opposite corners removed, leaving 62 squares. Is it possible to place 31 dominoes of size 2×1 so as to cover all of these squares? It is an impossible puzzle: there is no domino tiling meeting these conditions. One proof of its impossibility uses the fact that, with the corners removed, the chessboard has 32 squares of one color and 30 of the other, but each domino must cover equally many squares of each color. More generally, if any two squares are removed from the chessboard, the rest can be tiled by dominoes if and only if the removed squares are of different colors. This problem has been used as a test case for automated reasoning, creativity, and the philosophy of mathematics. History The mutilated chessboard problem is an instance of domino tiling of grids and polyominoes, also known as "dimer models", a general class of problems whose study in statistical mechanics dates to the work of Ralph H. Fowler and George Stanley Rushbrooke in 1937. Domino tilings also have a long history of practical use in pavement design and the arrangement of tatami flooring. The mutilated chessboard problem itself was proposed by philosopher Max Black in his book Critical Thinking (1946), with a hint at the coloring-based solution to its impossibility. It was popularized in the 1950s through later discussions by Solomon W. Golomb (1954), George Gamow and Marvin Stern (1958), Claude Berge (1958), and Martin Gardner in his Scientific American column "Mathematical Games" (1957). The use of the mutilated chessboard problem in automated reasoning stems from a proposal for its use by John McCarthy in 1964. It has also been studied in cognitive science as a test case for creative insight, Black's original motivation for the problem. In the philosophy of mathematics, it has been examined in studies of the nature of mathemati
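The coloring argument can be verified directly with a short script (illustrative; the corners (0,0) and (7,7) stand in for any pair of diagonally opposite corners):

```python
# Color squares like a chessboard: (row + col) % 2. Remove two opposite corners.
removed = {(0, 0), (7, 7)}  # diagonally opposite corners share the same color
squares = [(r, c) for r in range(8) for c in range(8) if (r, c) not in removed]

light = sum(1 for r, c in squares if (r + c) % 2 == 0)
dark = len(squares) - light
print(light, dark)  # 30 32: the counts are unequal, but each 2x1 domino covers
                    # one square of each color, so 31 dominoes would have to
                    # cover exactly 31 light and 31 dark squares -- impossible.
```

The same check on two removed squares of opposite colors yields a 31/31 split, consistent with the stated generalization that such a deficient board is always tileable.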
https://en.wikipedia.org/wiki/Wilms%27%20tumor
Wilms' tumor or Wilms tumor, also known as nephroblastoma, is a cancer of the kidneys that typically occurs in children and only rarely in adults; it is the most common renal tumor in children. It is named after Max Wilms, the German surgeon (1867–1918) who first described it. Approximately 650 cases are diagnosed in the U.S. annually. The majority of cases occur in children with no associated genetic syndromes; however, a minority of children with Wilms' tumor have a congenital abnormality. It is highly responsive to treatment, with about 90 percent of children being cured. Signs and symptoms Typical signs and symptoms of Wilms' tumor include the following: a painless, palpable abdominal mass loss of appetite abdominal pain fever nausea and vomiting blood in the urine (in about 20% of cases) high blood pressure in some cases (especially if synchronous or metachronous bilateral kidney involvement) rarely, a varicocele Pathogenesis Wilms' tumor has many causes, which can broadly be categorized as syndromic and non-syndromic. Syndromic causes of Wilms' tumor occur as a result of alterations to genes such as the Wilms Tumor 1 (WT1) or Wilms Tumor 2 (WT2) genes, and the tumor presents with a group of other signs and symptoms. Non-syndromic Wilms' tumor is not associated with other symptoms or pathologies. Many, but not all, cases of Wilms' tumor develop from nephrogenic rests, which are fragments of tissue in or around the kidney that develop before birth and become cancerous after birth. In particular, cases of bilateral Wilms' tumor, as well as cases of Wilms' tumor derived from certain genetic syndromes such as Denys-Drash syndrome, are strongly associated with nephrogenic rests. Most nephroblastomas are on one side of the body only and are found on both sides in less than 5% of cases, although people with Denys-Drash syndrome mostly have bilateral or multiple tumors. They tend to be encapsulated and vascularized tumors that do not cross the mid
https://en.wikipedia.org/wiki/Dirac%20spectrum
In mathematics, a Dirac spectrum, named after Paul Dirac, is the spectrum of eigenvalues of a Dirac operator on a Riemannian manifold with a spin structure. The isospectral problem for the Dirac spectrum asks whether two Riemannian spin manifolds have identical spectra. The Dirac spectrum depends on the spin structure in the sense that there exists a Riemannian manifold with two different spin structures that have different Dirac spectra. See also Can you hear the shape of a drum? Dirichlet eigenvalue Spectral asymmetry Angle-resolved photoemission spectroscopy
https://en.wikipedia.org/wiki/Gelato%20Federation
The Gelato Federation (usually just Gelato) was a "global technical community dedicated to advancing Linux on the Intel Itanium platform through collaboration, education, and leadership." Formed in 2001, membership included more than seventy academic and research organizations around the world, including several that operated Itanium-based supercomputers on the Top500 list. The organization was active in projects to enhance the Linux kernel for Itanium and GCC for Itanium. The organization took its name from the Italian dessert gelato, paying homage to this by naming sub-projects Gelato Vanilla and Gelato Coconut for varieties of the dessert. History In late 2001, representatives from seven organizations met with Hewlett-Packard. The institutions were the Bioinformatics Institute, Singapore; Groupe ESIEE, France; Hewlett-Packard Company; National Center for Supercomputing Applications, USA; Tsinghua University, China; University of Illinois at Urbana-Champaign, USA; University of New South Wales, Australia; and University of Waterloo, Canada. These were the founding members of Gelato. Representatives from these organizations met twice a year. The first few meetings (in Palo Alto, California 2001 and Paris 2002) were primarily a "strategy council meeting" where the by-laws and charter were hammered out. The Sydney meeting in October 2002 was the first that included a day of technical presentations. These became a regular feature of the meetings, eventually expanded to conferences, and thus the two conferences each year were entirely composed of technical presentations by vendors and members. The organization apparently ceased operation in 2009. The Itanium processor was discontinued by Intel in 2021. Membership The federation grew markedly after its inception. By April 2007, there were more than 70 members and sponsors around the world. Members were institutions, but there were a few individuals who, because of their contribution to IA-64 on Linux or to
https://en.wikipedia.org/wiki/Sun%E2%80%93Ni%20law
Within theoretical computer science, the Sun–Ni law (or Sun and Ni's law, also known as memory-bounded speedup) is a memory-bounded speedup model which states that as computing power increases, the corresponding increase in problem size is constrained by the system's memory capacity. In general, as a system grows in computational power, the problems run on the system increase in size. Analogous to Amdahl's law, which says that the problem size remains constant as system sizes grow, and Gustafson's law, which proposes that the problem size should scale but be bound by a fixed amount of time, the Sun–Ni law states that the problem size should scale but be bound by the memory capacity of the system. The Sun–Ni law was first proposed by Xian-He Sun and Lionel Ni in the proceedings of the 1990 IEEE Supercomputing Conference. With the increasing disparity between CPU speed and memory data access latency, application execution time often depends on the memory speed of the system. As predicted by Sun and Ni, data access has become the premier performance bottleneck for high-end computing. From this one can see the intuition behind the law; as system resources increase, applications are often bottlenecked by memory speed and bandwidth, thus an application can achieve a larger speedup by utilizing all the memory capacity in the system. The law can be applied to different layers of a memory hierarchy system, from L1 cache to main memory. Through its memory-bounded function, W = G(M), it reveals the trade-off between computing and memory in algorithm and system architecture design. All three speedup models, Sun–Ni, Gustafson, and Amdahl, provide a metric to analyze speedup for parallel computing. Amdahl's law focuses on the time reduction for a given fixed-size problem. Amdahl's law states that the sequential portion of the problem (algorithm) limits the total speedup that can be achieved as system resources increase. Gustafson's law suggests that it is beneficial to build a large-scale
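The three models can be compared numerically. The sketch below uses the standard textbook formulations, with f the sequential fraction of the work and n the number of processors; the memory-bounded work-scaling function g(n) is supplied by the caller, and the specific parameter values are illustrative only:

```python
def amdahl(f, n):
    # Fixed problem size: the sequential fraction f limits speedup as n grows.
    return 1 / (f + (1 - f) / n)

def gustafson(f, n):
    # Fixed execution time: the parallel portion of the work scales with n.
    return f + (1 - f) * n

def sun_ni(f, n, g):
    # Memory-bounded: the parallel work scales by g(n), the growth in problem
    # size permitted by the system's memory capacity.
    return (f + (1 - f) * g(n)) / (f + (1 - f) * g(n) / n)

f, n = 0.1, 64
print(round(amdahl(f, n), 1))               # 8.8
print(round(gustafson(f, n), 1))            # 57.7
print(round(sun_ni(f, n, lambda n: n), 1))  # g(n) = n recovers Gustafson: 57.7
print(round(sun_ni(f, n, lambda n: 1), 1))  # g(n) = 1 recovers Amdahl: 8.8
```

The two limiting cases of g(n) show how the Sun–Ni model subsumes the other two: a memory capacity that allows no growth in work reduces to Amdahl's fixed-size view, while work growing linearly with system size reduces to Gustafson's fixed-time view.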
https://en.wikipedia.org/wiki/Confluence%20%28abstract%20rewriting%29
In computer science, confluence is a property of rewriting systems, describing which terms in such a system can be rewritten in more than one way, to yield the same result. This article describes the properties in the most abstract setting of an abstract rewriting system. Motivating examples The usual rules of elementary arithmetic form an abstract rewriting system. For example, the expression (11 + 9) × (2 + 4) can be evaluated starting either at the left or at the right parentheses; however, in both cases the same result is eventually obtained. If every arithmetic expression evaluates to the same result regardless of reduction strategy, the arithmetic rewriting system is said to be ground-confluent. Arithmetic rewriting systems may be confluent or only ground-confluent depending on details of the rewriting system. A second, more abstract example is obtained from the following proof of each group element equalling the inverse of its inverse: This proof starts from the given group axioms A1–A3, and establishes five propositions R4, R6, R10, R11, and R12, each of them using some earlier ones, and R12 being the main theorem. Some of the proofs require non-obvious, or even creative, steps, like applying axiom A2 in reverse, thereby rewriting "1" to "a−1 ⋅ a" in the first step of R6's proof. One of the historical motivations to develop the theory of term rewriting was to avoid the need for such steps, which are difficult to find by an inexperienced human, let alone by a computer program . If a term rewriting system is confluent and terminating, a straightforward method exists to prove equality between two expressions (also known as terms) s and t: Starting with s, apply equalities from left to right as long as possible, eventually obtaining a term s′. Obtain from t a term t′ in a similar way. If both terms s′ and t′ literally agree, then s and t are proven equal. More importantly, if they disagree, then s and t cannot be equal. That is, any two terms s an
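The motivating arithmetic example can be made concrete with a toy rewriting sketch (illustrative only, not a general rewriting engine): the term (11 + 9) × (2 + 4) is reduced one redex at a time, preferring either the left or the right subterm first, and both strategies reach the same normal form.

```python
# Toy term: ("mul", ("add", 11, 9), ("add", 2, 4)) represents (11 + 9) * (2 + 4).

def step(term, prefer_left=True):
    """Rewrite one reducible subterm (redex) and return the new term."""
    if isinstance(term, int):
        return term  # already in normal form
    op, a, b = term
    if isinstance(a, int) and isinstance(b, int):
        return a + b if op == "add" else a * b  # contract this redex
    first, second = (a, b) if prefer_left else (b, a)
    if not isinstance(first, int):
        reduced = step(first, prefer_left)
        return (op, reduced, b) if prefer_left else (op, a, reduced)
    reduced = step(second, prefer_left)
    return (op, a, reduced) if prefer_left else (op, reduced, b)

def normalize(term, prefer_left=True):
    # Apply single steps until no redex remains.
    while not isinstance(term, int):
        term = step(term, prefer_left)
    return term

expr = ("mul", ("add", 11, 9), ("add", 2, 4))
print(normalize(expr, prefer_left=True))   # 120
print(normalize(expr, prefer_left=False))  # 120: both strategies converge
```

Agreement of the two strategies on every term is exactly the ground-confluence property the section describes for arithmetic rewriting.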
https://en.wikipedia.org/wiki/Dual-homed
Dual-homed or dual-homing can refer to either an Ethernet device that has more than one network interface, for redundancy purposes, or in firewall technology, one of the firewall architectures for implementing preventive security. An example of dual-homed devices are enthusiast computing motherboards that incorporate dual Ethernet network interface cards. Usage In Ethernet LANs, dual-homing is a network topology whereby a networked device is built with more than one network interface. Each interface or port is connected to the network, but only one connection is active at a time. The other connection is activated only if the primary connection fails. Traffic is quickly rerouted to the backup connection in the event of link failure. This feature was designed to provide telecommunications grade reliability and redundancy to Ethernet networks. Multihoming is a more general category, referring to a device having more than one network connection. In firewalls Firewall dual-homing provides the first-line defense and protection technology for keeping untrusted bodies from compromising information security by violating trusted network space. A dual-homed host (or dual-homed gateway) is a system fitted with two network interfaces (NICs) that sits between an untrusted network (like the Internet) and trusted network (such as a corporate network) to provide secure access. Dual-homed is a general term for proxies, gateways, firewalls, or any server that provides secured applications or services directly to an untrusted network. Dual-homed hosts can be seen as a special case of bastion hosts and multi-homed hosts. They fall into the category of application-based firewalls. Dual-homed hosts can act as firewalls provided that they do not forward IP datagrams unconditionally. Other firewall architectures include the network-layer firewall types screening router, screened-host, and screened subnet. See also Multihoming Firewall (computing) Router (computing)
https://en.wikipedia.org/wiki/Wien%27s%20displacement%20law
Wien's displacement law states that the black-body radiation curve for different temperatures will peak at different wavelengths that are inversely proportional to the temperature. The shift of that peak is a direct consequence of the Planck radiation law, which describes the spectral brightness or intensity of black-body radiation as a function of wavelength at any given temperature. However, it had been discovered by Wilhelm Wien several years before Max Planck developed that more general equation, and describes the entire shift of the spectrum of black-body radiation toward shorter wavelengths as temperature increases. Formally, the wavelength version of Wien's displacement law states that the spectral radiance of black-body radiation per unit wavelength peaks at the wavelength λ_peak given by λ_peak = b/T, where T is the absolute temperature and b is a constant of proportionality called Wien's displacement constant, equal to 2.897771955...×10⁻³ m⋅K, or approximately 2898 μm⋅K. This is an inverse relationship between wavelength and temperature: the higher the temperature, the shorter the wavelength of the thermal radiation; the lower the temperature, the longer the wavelength. For visible radiation, hot objects emit bluer light than cool objects. If one is considering the peak of black-body emission per unit frequency or per proportional bandwidth, one must use a different proportionality constant. However, the form of the law remains the same: the peak wavelength is inversely proportional to temperature, and the peak frequency is directly proportional to temperature. There are other formulations of Wien's displacement law, which are parameterized relative to other quantities. For these alternate formulations the form of the relationship is similar, but the proportionality constant differs. Wien's displacement law may be referred to as "Wien's law", a term which is also used for the Wien approximation. In "Wien's displacement law", the word displacement refers to how t
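The relation λ_peak = b/T is a one-line calculation. A small sketch (the CODATA value of Wien's constant is used; the example temperatures are illustrative):

```python
# Wien's displacement law: lambda_peak = b / T,
# with b = 2.897771955e-3 m*K (Wien's displacement constant).
b = 2.897771955e-3  # m*K

def peak_wavelength(T):
    """Peak wavelength (in metres) of black-body emission at temperature T (K)."""
    return b / T

# The Sun's photosphere (~5778 K) peaks near 501 nm, in the visible range;
# a room-temperature object (~300 K) peaks far into the infrared (~9.7 um).
print(peak_wavelength(5778) * 1e9)  # ~501.5 (nm)
print(peak_wavelength(300) * 1e6)   # ~9.66 (um)
```

The inverse proportionality is visible directly: doubling the temperature halves the peak wavelength.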
https://en.wikipedia.org/wiki/Hazard%20symbol
Hazard symbols or warning symbols are recognisable symbols designed to warn about hazardous or dangerous materials, locations, or objects, including electromagnetic fields, electric currents; harsh, toxic or unstable chemicals (acids, poisons, explosives); and radioactivity. The use of hazard symbols is often regulated by law and directed by standards organizations. Hazard symbols may appear with different colors, backgrounds, borders, and supplemental information in order to specify the type of hazard and the level of threat (for example, toxicity classes). Warning symbols are used in many places in lieu of or addition to written warnings as they are quickly recognized (faster than reading a written warning) and more commonly understood (the same symbol can be recognized as having the same meaning to speakers of different languages). List of common symbols Tape with yellow and black diagonal stripes is commonly used as a generic hazard warning. This can be in the form of barricade tape, or as a self-adhesive tape for marking floor areas and the like. In some regions (for instance the UK) yellow tape is buried a certain distance above buried electrical cables to warn future groundworkers of the hazard. Generic warning symbol On roadside warning signs, an exclamation mark is often used to draw attention to a generic warning of danger, hazards, and the unexpected. In Europe and elsewhere in the world (except North America and Australia), this type of sign is used if there are no more-specific signs to denote a particular hazard. When used for traffic signs, it is accompanied by a supplementary sign describing the hazard, usually mounted under the exclamation mark. This symbol has also been more widely adopted for generic use in many other contexts not associated with road traffic. It often appears on hazardous equipment, in instruction manuals to draw attention to a precaution, on tram and train blind spot warning stickers or on natural disaster (earthquake, t
https://en.wikipedia.org/wiki/Substring
In formal language theory and computer science, a substring is a contiguous sequence of characters within a string. For instance, "the best of" is a substring of "It was the best of times". In contrast, "Itwastimes" is a subsequence of "It was the best of times", but not a substring. Prefixes and suffixes are special cases of substrings. A prefix of a string s is a substring of s that occurs at the beginning of s; likewise, a suffix of a string s is a substring that occurs at the end of s. The substrings of the string "apple" would be: "a", "ap", "app", "appl", "apple", "p", "pp", "ppl", "pple", "pl", "ple", "l", "le", "e", "" (note the empty string at the end). Substring A string u is a substring (or factor) of a string t if there exist two strings p and s such that t = pus. In particular, the empty string is a substring of every string. Example: The string ana is equal to substrings (and subsequences) of banana at two different offsets: ana occurs at offset 1 and again at offset 3. The first occurrence is obtained with p = b and s = na, while the second occurrence is obtained with p = ban and s being the empty string. A substring of a string is a prefix of a suffix of the string, and equivalently a suffix of a prefix; for example, nan is a prefix of nana, which is in turn a suffix of banana. If u is a substring of t, it is also a subsequence, which is a more general concept. The occurrences of a given pattern in a given string can be found with a string searching algorithm. Finding the longest string which is equal to a substring of two or more strings is known as the longest common substring problem. In the mathematical literature, substrings are also called subwords (in America) or factors (in Europe). Prefix A string p is a prefix of a string t if there exists a string s such that t = ps. A proper prefix of a string is not equal to the string itself; some sources in addition restrict a proper prefix to be non-empty. A prefix can be seen as a special case of a substring. Example: The string b
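These definitions map directly onto built-in string operations; a short sketch using the article's banana example:

```python
# Substring, prefix, and suffix tests on "banana".
t = "banana"

# "ana" is a substring, occurring at two offsets:
assert "ana" in t
offsets = [i for i in range(len(t)) if t.startswith("ana", i)]
print(offsets)  # [1, 3]

# Prefixes and suffixes are special-case substrings:
assert t.startswith("ban")   # "ban" is a prefix
assert t.endswith("nana")    # "nana" is a suffix

# The empty string is a substring (and prefix, and suffix) of every string:
assert "" in t and t.startswith("") and t.endswith("")

# A substring is a prefix of a suffix: "nan" is a prefix of the suffix "nana".
assert "nana".startswith("nan") and "nan" in t
```

Note that `in`, `startswith`, and `endswith` all test contiguous occurrence, matching the substring definition rather than the more general subsequence relation.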
https://en.wikipedia.org/wiki/DMAC1
Transmembrane protein 261 is a protein that in humans is encoded by the TMEM261 gene located on chromosome 9. TMEM261 is also known as C9orf123 (Chromosome 9 Open Reading Frame 123, Transmembrane Protein C9orf123) and DMAC1 (Distal membrane-arm assembly complex protein 1). Gene features TMEM261 is located at 9p24.1; its length is 91,891 base pairs (bp) on the reverse strand. Its neighbouring gene is PTPRD, located at 9p23-p24.3, also on the reverse strand, which encodes protein tyrosine phosphatase receptor type delta. TMEM261 has 2 exons and 1 intron, and 6 primary transcript variants; the largest mRNA transcript variant consists of 742 bp with a protein 129 amino acids (aa) in length and 13,500 Daltons (Da) in size, and the smallest coding transcript variant is 381 bp with a protein 69 aa long and 6,100 Da in size. Protein features TMEM261 is a protein consisting of 112 amino acids, with a molecular weight of 11.8 kDa. The isoelectric point is predicted to be 10.2, whilst its posttranslational modification value is 9.9. Structure TMEM261 contains a domain of unknown function, DUF4536 (pfam15055), predicted as a helical membrane-spanning domain about 45 aa (Cys 47-Ser 92) in length with no known domain relationships. Two further transmembrane helical domains are predicted, of lengths 18 aa (Val 52-Ala 69) and 23 aa (Pro 81-Ala 102). There is also a low-complexity region spanning 25 aa (Thr 14-Ala 39). The tertiary structure for TMEM261 has not yet been determined. However, its protein secondary structure is mostly composed of coiled-coil regions with beta strands and alpha helices found within the transmembrane and domain-of-unknown-function regions. The N-terminal region of TMEM261 is composed of a disordered region which contains the low-complexity region that is not highly conserved amongst orthologues. Modifications An N-myristoylation domain is shown to be present in most TMEM261 protein variants. Post-translational modifications include myristoylati
https://en.wikipedia.org/wiki/The%20Duel%3A%20Test%20Drive%20II
The Duel: Test Drive II is a 1989 racing video game developed by Distinctive Software and published by Accolade for Amiga, Amstrad CPC, Apple IIGS, Commodore 64, MS-DOS, MSX, ZX Spectrum, Atari ST, Sega Genesis and SNES. Gameplay Like the original Test Drive, the focus of The Duel is driving exotic cars through dangerous highways, evading traffic, and trying to escape police pursuits. While the first game in the series had the player simply racing for time in a single scenario, Test Drive II improves upon its predecessor by introducing varied scenery, and giving the player the option of racing against the clock or competing against a computer-controlled opponent. The player initially is given the opportunity to choose a car to drive and a level of difficulty, which in turn determines whether the car will use an automatic or manual transmission—the number of difficulty options varies between gaming platforms. Levels begin with the player's car (and the computer opponent, if selected) idling on a roadway. Primarily these are two to four lane public highways with many turns; each level is different, and they include obstacles such as bridges, cliffs, and tunnels in addition to the other cars already on the road. Each level also has one or more police cars along the course. The goal of each level is to reach the gas station at the end of the course in the least amount of time. Stopping at the gas station is not mandatory, and one could drive past it if inattentive. The consequence of not stopping is running out of gas, and thus losing a car (life). The player begins the game with five lives, one of which is lost each time the player crashes into something. The player is awarded a bonus life for completing a level without crashing or running out of gas. In addition to losing a life, crashing adds thirty seconds to the player's time. Cars could crash into other traffic or off-road obstacles such as trees or by falling off the cliff on one of the mountain levels. They c
https://en.wikipedia.org/wiki/Coxeter%27s%20loxodromic%20sequence%20of%20tangent%20circles
In geometry, Coxeter's loxodromic sequence of tangent circles is an infinite sequence of circles arranged so that any four consecutive circles in the sequence are pairwise mutually tangent. This means that each circle in the sequence is tangent to the three circles that precede it and also to the three circles that follow it. Properties The radii of the circles in the sequence form a geometric progression with ratio k = φ + √φ ≈ 2.89005, where φ is the golden ratio. This ratio and its reciprocal satisfy the equation (1 + k + k² + k³)² = 2(1 + k² + k⁴ + k⁶), and so any four consecutive circles in the sequence meet the conditions of Descartes' theorem. The centres of the circles in the sequence lie on a logarithmic spiral; viewed from the centre of the spiral, the angle between the centres of successive circles is constant. The angle between consecutive triples of centers is the same as one of the angles of the Kepler triangle, a right triangle whose construction also involves the square root of the golden ratio. History and related constructions The construction is named after geometer H. S. M. Coxeter, who generalised the two-dimensional case to sequences of spheres and hyperspheres in higher dimensions. It can be interpreted as a degenerate special case of the Doyle spiral. See also Apollonian gasket
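The Descartes condition can be checked numerically. With radii in geometric progression with ratio k, the curvatures of four consecutive circles are proportional to 1, k, k², k³ (up to overall scale), and Descartes' theorem requires (b₁+b₂+b₃+b₄)² = 2(b₁²+b₂²+b₃²+b₄²). A sketch verifying that k = φ + √φ satisfies this:

```python
# Numeric check (not a proof) of Descartes' theorem for four consecutive
# circles whose curvatures are in geometric progression with ratio
# k = phi + sqrt(phi), phi the golden ratio.
from math import sqrt

phi = (1 + sqrt(5)) / 2
k = phi + sqrt(phi)           # ~2.89005, the ratio of the progression

b = [k**i for i in range(4)]  # curvatures 1, k, k^2, k^3 (scale is irrelevant)
lhs = sum(b) ** 2
rhs = 2 * sum(x * x for x in b)
print(abs(lhs - rhs) / rhs)   # ~0 up to floating-point rounding
```

Because the identity is homogeneous in the curvatures, it holds equally for curvature ratio k or its reciprocal 1/k, i.e. for any window of four consecutive circles in the sequence.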
https://en.wikipedia.org/wiki/Pathovar
A pathovar is a bacterial strain or set of strains with the same or similar characteristics, that is differentiated at infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts. Pathovars are named as a ternary or quaternary addition to the species binomial name. For example, the bacterium that causes citrus canker, Xanthomonas axonopodis, has several pathovars with different host ranges; X. axonopodis pv. citri is one of them. The abbreviation 'pv.' means pathovar. The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs. See also Infraspecific names in botany Phytopathology Trinomen, infraspecific names in zoology (subspecies only)
https://en.wikipedia.org/wiki/Nasty%20neighbour%20effect
In ethology, the nasty neighbour effect describes the phenomenon whereby territory-holding animals behave more strongly toward familiar conspecific neighbours than to unfamiliar conspecifics. This phenomenon may be generally advantageous to an animal because the heightened response reduces the likelihood of a nearby intruder entering the territory and taking the resources it contains whereas an unfamiliar or distant territory-holder poses less of a threat. This reduced response minimises the time, energy, and risk of injury incurred during territorial encounters with animals which are less of a threat to the territory holder. The nasty neighbour effect is the converse of the dear enemy effect in which some species are less aggressive toward their neighbours than toward unfamiliar strangers. The four-striped grass mouse (Rhabdomys pumilio) is group living with one single breeding male and up to four communally breeding females per group. Groups typically contain several philopatric adult sons (and daughters) that are believed not to breed in their natal group and all group members participate in territorial defence. When aggression in wild group-living male breeders was tested in a neutral test arena, they were nearly five times more aggressive toward their neighbours than toward strangers, leading to the prediction that neighbours are the most important competitors for paternity. Using a molecular parentage analysis it was shown that 28% of offspring are sired by neighbouring males and only 7% by strangers. Colonies of the weaver ant (Oecophylla smaragdina) are able to recognize a greater proportion of workers from neighbouring colonies as non-colony members. When recognized as non-colony members, more aggression is exhibited toward neighbours than non-neighbours. Banded mongoose (Mungos mungo) groups vocalize more and inspect more scent samples in response to olfactory cues of neighbours than strangers. It has been suggested that increased aggression towar
https://en.wikipedia.org/wiki/Cray-4
The Cray-4 was intended to be Cray Computer Corporation's successor to the failed Cray-3 supercomputer. It was marketed to compete with the T90 from Cray Research. CCC went bankrupt in 1995 before any Cray-4 had been delivered. Design The earlier Cray-3 was the first major application of gallium arsenide (GaAs) semiconductors in computing. It was not considered a success, and only one Cray-3 was delivered. Seymour Cray moved on to the Cray-4 design, announcing the design in 1994. The Cray-4 was essentially a shrunk and sped-up version of the Cray-3, consisting of a number of vector processors attached to a fast memory. The Cray-3 supported from four to sixteen processors running at 474 MHz, while the Cray-4 scaled from four to sixty-four processors running at 1 GHz. The final packaging for the Cray-4 was intended to fit into , and was to be tested in the smaller one-CPU "tanks" from the Cray-3. A midrange system included 16 processors, 1,024 megawords (8192 MB) of memory and provided 32 gigaflops for $11 million. The local memory architecture used on the Cray-2 and Cray-3 was dropped, returning to the mass of B- and T- registers on earlier designs, owing to Seymour's lack of success using the local memory effectively. 1994 "Significant technical progress was made during 1994 on the CRAY-4, which takes advantage of technologies and manufacturing processes developed during the design and manufacture of the CRAY-3. The Company announced introduction of the CRAY-4 to the market on November 10, 1994. Several single processor CRAY-4 prototype systems, each with 64 megawords of memory, were undergoing diagnostic testing prior to the Company filing for bankruptcy. The Company began testing individual CRAY-4 modules at the start of 1994 and planned to be able to deliver a 4-processor CRAY-4 prototype system by approximately the end of the second quarter of 1995. Upon filing of bankruptcy, the Company stopped work on the CRAY-4." Legacy The processor with serial number 0
https://en.wikipedia.org/wiki/Yinyu%20Ye
Yinyu Ye (; born 1948) is a Chinese American theoretical computer scientist working on mathematical optimization. He is a specialist in interior point methods, especially in convex minimization and linear programming. He is a professor of Management Science and Engineering and Kwoh-Ting Li Chair Professor of Engineering at Stanford University. He also holds a courtesy appointment in the Department of Electrical Engineering. Ye also is a co-founder of minMax Optimization Inc. Education Yinyu Ye was born in 1948 in Wuhan, Hubei, China. He attended Huazhong University of Science and Technology and graduated with a B.S. in Systems and Control in 1982. He received a Ph.D in Engineering Economic Systems from Stanford University in 1988, under the supervision of George B. Dantzig. Research publications Ye wrote Interior-Point Algorithms: Theory and Analysis. He joined David Luenberger for the third edition of Luenberger's Linear and Nonlinear Programming. In recent years, Ye has developed computational methods and theory using semidefinite programming for practical problems like the localization of network sensors. In computational economics, Ye has also established new complexity results for problems concerning the computation of an economic equilibrium. Awards Ye was a 2009 co-recipient of the John von Neumann Theory Prize. He was elected to the 2006 class of Fellows of the Institute for Operations Research and the Management Sciences. Positions Before joining Stanford University, Ye was a Henry B. Tippie Research Professor at the University of Iowa. Ye is a co-founder of minMax Optimization, a technology company based in Palo Alto and Shanghai focused on creating optimization tools for geospatial and financial problems.
https://en.wikipedia.org/wiki/Donal%20O%27Shea
Donal O'Shea is a Canadian mathematician, who is also noted for his bestselling books. He served as the fifth president of New College of Florida in Sarasota, from July 1, 2012, until June 30, 2021. He was succeeded by Patricia Okker on July 1, 2021. Before coming to New College, he served in various roles at Mount Holyoke College, including professor of mathematics, dean of faculty, and vice president for academic affairs. O'Shea graduated with a B.Sc. from Harvard College, and received a Ph.D. in mathematics from Queen's University in Kingston, Ontario in 1981; his thesis, titled On μ-Equivalent Families of Singularities, was written under the direction of Albert John Coleman. Bibliography Some of his best known books are: The Poincaré Conjecture: In Search of the Shape of the Universe The book has consistently received good reviews. David A. Cox, John Little, and Donal O'Shea: Using algebraic geometry, Graduate Texts in Mathematics, vol. 185, Springer-Verlag, 2005. David A. Cox, John Little, and Donal O'Shea: Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra, 3rd. edition, Springer Verlag, 2007.
https://en.wikipedia.org/wiki/Transverse%20cervical%20artery
The transverse cervical artery (transverse artery of neck or transversa colli artery) is an artery in the neck and a branch of the thyrocervical trunk, running at a higher level than the suprascapular artery. Structure It passes transversely below the inferior belly of the omohyoid muscle to the anterior margin of the trapezius, beneath which it divides into a superficial and a deep branch. It crosses in front of the phrenic nerve and the scalene muscles, and in front of or between the divisions of the brachial plexus, and is covered by the platysma and sternocleidomastoid muscles, and crossed by the omohyoid and trapezius. The transverse cervical artery originates from the thyrocervical trunk and passes through the posterior triangle of the neck to the anterior border of the levator scapulae muscle, where it divides into deep and superficial branches. Superficial branch Ascending branch Descending branch (also known as the superficial cervical artery, which supplies the middle and lateral portions of the trapezius) Deep branch (also called the dorsal scapular artery; the descending branch in older literature). Most often, however, this artery branches directly from the subclavian artery. Function Superficial branch Upon entering the trapezius muscle the superficial branch divides again into an ascending and a descending branch. The ascending branch distributes branches to the trapezius, and to the neighbouring muscles and lymph glands in the neck, and anastomoses with the superficial branch of the descending branch of the occipital artery. The descending branch, also called the superficial cervical artery, anastomoses with the deep branch (the dorsal scapular artery), which in turn links to the subscapular artery. This anastomosis forms a ring circulation around the scapula, continuing to the suprascapular artery via the circumflex scapular artery. Deep branch The dorsal scapular artery (or descending scapular artery) is a blood vessel which supplies the levator scap
https://en.wikipedia.org/wiki/NETCONF
The Network Configuration Protocol (NETCONF) is a network management protocol developed and standardized by the IETF. It was developed in the NETCONF working group and published in December 2006 as RFC 4741 and later revised in June 2011 and published as RFC 6241. The NETCONF protocol specification is an Internet Standards Track document. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. Its operations are realized on top of a simple Remote Procedure Call (RPC) layer. The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. The protocol messages are exchanged on top of a secure transport protocol. The NETCONF protocol can be conceptually partitioned into four layers: The Content layer consists of configuration data and notification data. The Operations layer defines a set of base protocol operations to retrieve and edit the configuration data. The Messages layer provides a mechanism for encoding remote procedure calls (RPCs) and notifications. The Secure Transport layer provides a secure and reliable transport of messages between a client and a server. The NETCONF protocol has been implemented in network devices such as routers and switches by some major equipment vendors. One particular strength of NETCONF is its support for robust configuration change using transactions involving a number of devices. History The IETF developed the Simple Network Management Protocol (SNMP) in the late 1980s and it proved to be a very popular network management protocol. In the early part of the 21st century it became apparent that in spite of what was originally intended, SNMP was not being used to configure network equipment, but was mainly being used for network monitoring. In June 2002, the Internet Architecture Board and key members of the IETF's network management community got together with network operators to discuss the situati
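As a sketch of the Messages and Operations layers, the following builds the XML body of a NETCONF <get-config> RPC retrieving the running datastore. The element names and namespace come from RFC 6241; the message-id value is arbitrary, and a real session would carry this over a secure transport such as SSH:

```python
# Build the XML of a NETCONF <get-config> RPC (RFC 6241) with the
# standard library; this shows only the message body, not the transport.
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"
ET.register_namespace("", NS)

rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": "101"})
get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
source = ET.SubElement(get_config, f"{{{NS}}}source")
ET.SubElement(source, f"{{{NS}}}running")  # retrieve the <running> datastore

print(ET.tostring(rpc, encoding="unicode"))
```

The printed document nests the four layers the text describes: the RPC envelope (Messages), the <get-config> operation (Operations), and, in a real request, an optional filter selecting the configuration data of interest (Content).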
https://en.wikipedia.org/wiki/Mantle%20oxidation%20state
Mantle oxidation state (redox state) applies the concept of oxidation state in chemistry to the study of the Earth's mantle. The chemical concept of oxidation state mainly refers to the valence state of one element, while mantle oxidation state describes the degree of decrease or increase of the valence states of all polyvalent elements in mantle materials confined in a closed system. The mantle oxidation state is controlled by oxygen fugacity and can be benchmarked by specific groups of redox buffers. Mantle oxidation state changes because of the existence of polyvalent elements (elements with more than one valence state, e.g. Fe, Cr, V, Ti, Ce, Eu, C and others). Among them, Fe is the most abundant (~8 wt% of the mantle) and its oxidation state largely reflects the oxidation state of the mantle. Examining the valence state of other polyvalent elements could also provide information on the mantle oxidation state. It is well known that the oxidation state can influence the partitioning behavior of elements and liquid water between melts and minerals, the speciation of C-O-H-bearing fluids and melts, as well as transport properties like electrical conductivity and creep. The formation of diamond requires both reaching high pressures and temperatures and a carbon source. The most common carbon source in the deep Earth is not elemental carbon, and redox reactions need to be involved in diamond formation. Examining the oxidation state can help us predict the P-T conditions of diamond formation and elucidate the origin of deep diamonds. Thermodynamic description of oxidation state Mantle oxidation state can be quantified as the oxygen fugacity (fO2) of the system within the framework of thermodynamics. A higher oxygen fugacity implies a more oxygen-rich and more oxidized environment. At each given pressure-temperature condition, for any compound or element M that bears the potential to be oxidized by oxygen, we can write M + O2 = MO2. For example, if M is Fe, the redox equilibrium rea
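For the generic equilibrium M + O2 = MO2 with pure condensed phases (unit activities), the equilibrium constant reduces to K = 1/fO2, so ΔrG° = -RT ln K gives log10 fO2 = ΔrG°/(RT ln 10). A sketch of that arithmetic; the Gibbs-energy value below is hypothetical, chosen only to illustrate the calculation, not a real buffer datum:

```python
# For M + O2 = MO2 with unit-activity condensed phases, K = 1/fO2,
# hence ln(fO2) = dG/(R*T), where dG is the standard Gibbs energy of
# reaction (J/mol). The dG value used below is HYPOTHETICAL.
from math import log

R = 8.314  # gas constant, J/(mol*K)

def log10_fO2(dG, T):
    """log10 of oxygen fugacity implied by M + O2 = MO2 at temperature T (K)."""
    return dG / (R * T * log(10))

# hypothetical dG = -400 kJ/mol at 1400 K -> a strongly reduced log10(fO2)
print(log10_fO2(-400e3, 1400))  # ~ -14.9
```

More negative ΔrG° (a stronger affinity of M for oxygen) buffers the system at lower oxygen fugacity, which is the sense in which redox buffers benchmark the mantle oxidation state.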
https://en.wikipedia.org/wiki/Opioid%20food%20peptides
Opioid food peptides include: Casomorphin (from milk) Gluten exorphin (from gluten) Gliadorphin/gluteomorphin (from gluten) Rubiscolin (from spinach) Soymorphin-5 (from soy) Oryzatensin (from rice) Peptides Opioids
https://en.wikipedia.org/wiki/Most%20probable%20number
The most probable number method, otherwise known as the method of Poisson zeroes, is a method of getting quantitative data on concentrations of discrete items from positive/negative (incidence) data. Purpose There are many discrete entities that are easily detected but difficult to count. Any sort of amplification reaction or catalysis reaction obliterates easy quantification but allows presence to be detected very sensitively. Common examples include microorganism growth, enzyme action, or catalytic chemistry. The MPN method involves taking the original solution or sample, and subdividing it by orders of magnitude (frequently 10× or 2×), and assessing presence/absence in multiple subdivisions. The degree of dilution at which absence begins to appear indicates that the items have been diluted so much that there are many subsamples in which none appear. A suite of replicates at any given concentration allow finer resolution, to use the number of positive and negative samples to estimate the original concentration within the appropriate order of magnitude. Applications In microbiology, the cultures are incubated and assessed by eye, bypassing tedious colony counting or expensive and tedious microscopic counts. Presumptive, confirmative and completed tests are a part of MPN. In molecular biology, a common application involves DNA templates diluted into polymerase chain reactions (PCR). Reactions only proceed when a template is present, allowing for a form of quantitative PCR, to assess the original concentration of template molecules. Another application involves diluting enzyme stocks into solution containing a chromogenic substrate, or diluting antigens into solutions for ELISA (Enzyme-Linked ImmunoSorbent Assay) or some other antibody cascade detection reaction, to measure the original concentration of the enzyme or antigen. Weakness and importance The major weakness of MPN methods is the need for large numbers of replicates at the appropriate dilution to
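Under the Poisson model, a tube inoculated with volume v of sample at concentration c is negative with probability exp(-c v), and the MPN is the concentration maximizing the likelihood of the observed positives. A sketch solving the score equation by bisection; the three-tube dilution scheme in the example is just one common illustration, and published MPN tables also report confidence intervals that this minimal version omits:

```python
# Maximum-likelihood MPN estimate: each tube gets volume v of original
# sample; a tube is negative with probability exp(-c*v). The score
# (derivative of the log-likelihood) is decreasing in c, so bisection works.
from math import exp

def mpn(volumes, n_tubes, n_positive, lo=1e-6, hi=1e6):
    """Most probable number (per unit volume) from incidence data."""
    def score(c):
        s = 0.0
        for v, n, p in zip(volumes, n_tubes, n_positive):
            s += p * v * exp(-c * v) / (1 - exp(-c * v)) - (n - p) * v
        return s
    for _ in range(200):
        mid = (lo * hi) ** 0.5     # geometric midpoint suits the huge range
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo * hi) ** 0.5

# 3 tubes each at 0.1, 0.01, 0.001 g of sample; 3, 1, 0 positives.
print(mpn([0.1, 0.01, 0.001], [3, 3, 3], [3, 1, 0]))  # ~43 per g
```

The estimate lands within the order of magnitude fixed by where positives give way to negatives, exactly as the text describes; more replicates per dilution tighten it further.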
https://en.wikipedia.org/wiki/Lindy%20effect
The Lindy effect (also known as Lindy's Law) is a theorized phenomenon by which the future life expectancy of some non-perishable things, like a technology or an idea, is proportional to their current age. Thus, the Lindy effect proposes the longer a period something has survived to exist or be used in the present, the longer its remaining life expectancy. Longevity implies a resistance to change, obsolescence or competition and greater odds of continued existence into the future. Where the Lindy effect applies, mortality rate decreases with time. Mathematically, the Lindy effect corresponds to lifetimes following a Pareto probability distribution. The concept is named after Lindy's delicatessen in New York City, where the concept was informally theorized by comedians. The Lindy effect has subsequently been theorized by mathematicians and statisticians. Nassim Nicholas Taleb has expressed the Lindy effect in terms of "distance from an absorbing barrier". The Lindy effect applies to "non-perishable" items, those that do not have an "unavoidable expiration date". For example, human beings are perishable: the life expectancy at birth in developed countries is about 80 years. So the Lindy effect does not apply to individual human lifespan: it is unlikely for a 5-year-old human to die within the next 5 years, but it is very likely for a 70-year-old human to die within the next 70 years, while the Lindy effect would predict these to have equal probability. History The origin of the term can be traced to Albert Goldman and a 1964 article he had written in The New Republic titled "Lindy's Law." The term Lindy refers to Lindy's delicatessen in New York, where comedians "foregather every night [to] conduct post-mortems on recent show business 'action.'" In this article, Goldman describes a folkloric belief among New York City media observers that the amount of material comedians have is constant, and therefore, the frequency of output predicts how long their series will l
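The Pareto connection can be illustrated with a Monte-Carlo sketch (the parameter values here are illustrative): for Pareto-distributed lifetimes with shape α, the expected remaining life given survival to age t is t/(α-1), i.e. directly proportional to current age:

```python
# Lindy effect for Pareto lifetimes: E[T - t | T > t] = t/(alpha - 1).
# With alpha = 3, expected remaining life is t/2 at every age t.
import numpy as np

rng = np.random.default_rng(42)
alpha, x_m = 3.0, 1.0
# numpy's pareto() draws Lomax variates; shifting by 1 and scaling by x_m
# gives classical Pareto samples with minimum x_m.
T = x_m * (1.0 + rng.pareto(alpha, size=2_000_000))

for t in (2.0, 4.0, 8.0):
    remaining = (T[T > t] - t).mean()
    print(t, remaining)   # ~ t/(alpha - 1) = t/2
```

This is the contrast with perishable lifetimes the text draws: for a human, expected remaining life shrinks with age, whereas here survival to a greater age predicts a proportionally longer future.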
https://en.wikipedia.org/wiki/CLIC2
Chloride intracellular channel protein 2 is a protein that in humans is encoded by the CLIC2 gene. Chloride channels are a diverse group of proteins that regulate fundamental cellular processes including stabilization of cell membrane potential, transepithelial transport, maintenance of intracellular pH, and regulation of cell volume. Chloride intracellular channel 2 is a member of the p64 family; the protein is detected in fetal liver and adult skeletal muscle tissue. This gene maps to the candidate region on chromosome X for incontinentia pigmenti. See also Chloride channel
https://en.wikipedia.org/wiki/Common%20Data%20Representation
Common Data Representation (CDR) is used to represent structured or primitive data types passed as arguments or results during remote invocations on Common Object Request Broker Architecture (CORBA) distributed objects. It enables clients and servers written in different programming languages to work together. For example, it translates little-endian to big-endian. It assumes prior agreement on type, so no information is given with data representation in messages. External links Official CDR spec (see PDF page 4). ACE Library provides CDR streams. Common Object Request Broker Architecture Data serialization formats
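The endianness handling described above can be illustrated with a small sketch (the helper names are hypothetical, not part of any CORBA library): CDR lets the sender marshal primitives in its native byte order and record that order in the message, and the receiver swaps only if its own order differs.

```python
import struct

def encode_ulong(value, little_endian):
    # Marshal a 4-byte unsigned long in the sender's byte order; in real
    # GIOP/CDR a header flag records which order was used.
    fmt = "<I" if little_endian else ">I"
    return struct.pack(fmt, value)

def decode_ulong(data, little_endian):
    # The reader honours the sender's declared byte order, so a
    # little-endian writer and a big-endian host still interoperate.
    fmt = "<I" if little_endian else ">I"
    return struct.unpack(fmt, data)[0]

wire = encode_ulong(0x12345678, little_endian=True)
print(wire.hex())                               # 78563412
print(hex(decode_ulong(wire, little_endian=True)))  # 0x12345678
```

The "receiver makes it right" design means byte swapping is only paid when the two ends actually disagree on byte order.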
https://en.wikipedia.org/wiki/P-Aminobenzoyl-glutamate%20transporter
The p-aminobenzoyl-glutamate transporter (AbgT) family (TC# 2.A.68) is a family of transporter proteins belonging to the ion transporter (IT) superfamily. The AbgT family consists of the AbgT (YdaH; TC# 2.A.68.1.1) protein of E. coli and the MtrF drug exporter (TC# 2.A.68.1.2) of Neisseria gonorrhoeae. The former protein is apparently cryptic in wild-type cells, but when expressed on a high copy number plasmid, or when expressed at higher levels due to mutation, it appeared to allow uptake (Km = 123 nM; see Michaelis–Menten kinetics) and subsequent utilization of p-aminobenzoyl-glutamate as a source of p-aminobenzoate for p-aminobenzoate auxotrophs. p-Aminobenzoate is a constituent of and a precursor for the biosynthesis of folic acid. MtrF was annotated as a putative drug efflux pump. Structure AbgT is 510 amino acyl residues long and has 12-13 putative transmembrane α-helical spanners (TMSs). MtrF is 522 aas long and has 11 or 12 putative TMSs. The 3-d structures of MtrF and a YdaH homologue have been solved, and functional studies show that it is a drug exporter. The 3-d structure shows that it has 9 TMSs with hairpin entry loops. Genetics The abgT gene is preceded by two genes, abgA and abgB, which code for homologous amino acyl amino hydrolases and hydrolyze p-aminobenzoyl glutamate to p-aminobenzoate and glutamate. Because of the structural similarity of p-aminobenzoyl-glutamate to peptides, and the enzymatic activities of the abgA and abgB gene products, it has been suggested that AbgT is also a peptide transporter. Demonstration of an energy requirement suggested an H+-dependent mechanism. Expression of these genes is regulated by AbgR and an unknown effector. Function As noted above, the AbgT family of transporters has been thought to contribute to bacterial folate biosynthesis by importing the catabolite p-aminobenzoyl-glutamate for producing folate. Approximately 13,000 putative family members were identified in 2015. The X-ra
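The reported Km of 123 nM can be placed in the standard Michaelis–Menten rate law to see what it implies for uptake. A minimal sketch, where the Vmax scale is an arbitrary assumption (the article reports only Km):

```python
def uptake_rate(s, km=123e-9, vmax=1.0):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S]).
    Km is the reported AbgT value; Vmax is an assumed unit scale."""
    return vmax * s / (km + s)

# At a substrate concentration equal to Km, the carrier runs at
# exactly half its maximal rate; far above Km it saturates near Vmax.
print(uptake_rate(123e-9))   # 0.5
print(uptake_rate(1e-3))     # close to 1.0
```

So a Km in the low nanomolar range means the transporter is already half-saturated at very dilute substrate, consistent with a scavenging role.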
https://en.wikipedia.org/wiki/Floppy%20disk%20hardware%20emulator
A floppy disk hardware emulator or semi-virtual diskette (SVD) is a device that emulates a floppy disk drive with a solid state or network storage device that is plug compatible with the drive it replaces, similar to how solid-state drives replace mechanical hard disk drives. History Older models of computers, electronic musical instruments and industrial automation often used floppy disk drives for data transfer. Older equipment may be difficult to replace or upgrade because of cost, requirement for continuous availability or unavailable upgrades. Proper operation may require operating system, software and data to be read and written from and to floppies, forcing users to maintain floppy drives on supporting systems. Floppy disks and floppy drives are gradually going out of production, and replacement of malfunctioning drives, and the systems hosting them, is becoming increasingly difficult. Floppy disks themselves are fragile, or may need to be replaced often. An alternative is to use a floppy disk hardware emulator, a device which appears to be a standard floppy drive to the old equipment by interfacing directly to the floppy disk controller, while storing data in another medium such as a USB thumb drive, Secure Digital card, or a shared drive on a computer network. Emulators can also be used as a higher-performance replacement for mechanical floppy disk drives. Emulation process A typical floppy disk controller sends an MFM / FM / GCR encoded signal to the drive to write data, and expects a similar signal returned when reading the drive. On a write, a hardware PLL or a software-based filter component undoes the encoding, and stores the sector data as logically written by the host. An inverse mechanism translates the stored data back into an encoded signal when the data is read. Noisy raw data signals are filtered and cleaned up before conversion. Most FDC interfaces do not directly address tracks; instead they provide "step-in" and "step-out" pulses. Tho
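The MFM encoding step that an emulator must perform or undo can be sketched in a few lines (a simplified model: real controllers also handle sync marks, CRCs and clock recovery). In MFM, each data bit is preceded by a clock bit that is 1 only when both the previous and the current data bit are 0, which keeps flux transitions evenly spaced for the drive's read channel.

```python
def mfm_encode(bits):
    # Each data bit d becomes the pair (clock, d); the clock bit is 1
    # only between two consecutive 0 data bits.
    out, prev = [], 0
    for d in bits:
        clock = 1 if (prev == 0 and d == 0) else 0
        out += [clock, d]
        prev = d
    return out

def mfm_decode(channel_bits):
    # Data bits occupy the odd positions; clock bits are discarded.
    return channel_bits[1::2]

data = [1, 0, 1, 1, 0, 0, 0, 1]
enc = mfm_encode(data)
assert mfm_decode(enc) == data   # lossless round trip
print(enc)
```

A software emulator effectively runs `mfm_decode` (plus bit-clock recovery) on writes and `mfm_encode` on reads, storing only the decoded sector data on the USB or SD medium.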
https://en.wikipedia.org/wiki/Widom%20insertion%20method
The Widom insertion method is a statistical thermodynamic approach to the calculation of material and mixture properties. It is named for Benjamin Widom, who derived it in 1963. In general, there are two theoretical approaches to determining the statistical mechanical properties of materials. The first is the direct calculation of the overall partition function of the system, which directly yields the system free energy. The second approach, known as the Widom insertion method, instead derives from calculations centering on one molecule. The Widom insertion method directly yields the chemical potential of one component rather than the system free energy. This approach is most widely applied in molecular computer simulations but has also been applied in the development of analytical statistical mechanical models. The Widom insertion method can be understood as an application of the Jarzynski equality since it measures the excess free energy difference via the average work needed to change the system from a state with N molecules to a state with N+1 molecules. Therefore it measures the excess chemical potential, since μ_ex = −k_B T ln⟨exp(−ΔU/(k_B T))⟩_N, where ΔU is the interaction energy of the inserted molecule with the N molecules already present. Overview As originally formulated by Benjamin Widom in 1963, the approach can be summarized by the equation: B_i = ρ_i / a_i = ⟨exp(−ψ_i/(k_B T))⟩, where B_i is called the insertion parameter, ρ_i is the number density of species i, a_i is the activity of species i, k_B is the Boltzmann constant, T is temperature, and ψ_i is the interaction energy of an inserted particle with all other particles in the system. The average is over all possible insertions. This can be understood conceptually as fixing the location of all molecules in the system and then inserting a particle of species i at all locations through the system, averaging over a Boltzmann factor in its interaction energy over all of those locations. Note that in other ensembles, for example in the semi-grand canonical ensemble, the Widom insertion method works with modified formulas. Relation to other thermo
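The insertion average is easy to demonstrate for hard disks, where the Boltzmann factor of a trial insertion is either 1 (no overlap) or 0 (overlap), so B is simply the fraction of accepted ghost insertions. A minimal sketch under simplifying assumptions (one fixed dilute configuration instead of an ensemble average, hard walls instead of periodic boundaries):

```python
import math, random

def widom_hard_disks(positions, box, sigma, n_trials, rng):
    """Estimate the insertion parameter B = <exp(-psi/kT)> for hard
    disks: a trial particle contributes 1 if it fits without overlap
    and 0 otherwise, so B is the acceptance fraction."""
    hits = 0
    for _ in range(n_trials):
        x, y = rng.random() * box, rng.random() * box
        if all((x - px) ** 2 + (y - py) ** 2 >= sigma ** 2
               for px, py in positions):
            hits += 1
    return hits / n_trials

rng = random.Random(1)
box, sigma = 20.0, 1.0
# Build a dilute non-overlapping configuration by rejection sampling.
positions = []
while len(positions) < 30:
    x, y = rng.random() * box, rng.random() * box
    if all((x - px) ** 2 + (y - py) ** 2 >= sigma ** 2 for px, py in positions):
        positions.append((x, y))

B = widom_hard_disks(positions, box, sigma, 100_000, rng)
mu_excess = -math.log(B)   # excess chemical potential in units of kT
print(B, mu_excess)
```

For this dilute system B stays close to the unblocked area fraction (roughly 1 − 30πσ²/box² ≈ 0.76), giving a small positive excess chemical potential, as expected for a repulsive fluid.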
https://en.wikipedia.org/wiki/James%20Olds
James Olds (May 30, 1922 – August 21, 1976) was an American psychologist who co-discovered the pleasure center of the brain with Peter Milner while he was a postdoctoral fellow at McGill University in 1954. He is considered to be one of the founders of modern neuroscience and received numerous distinctions ranging from election to the United States National Academy of Sciences to the Newcomb Cleveland Prize of the American Association for the Advancement of Science. Biography Early life and education Olds was born in Chicago, Illinois, and grew up in Nyack, New York. His father, Leland Olds, later became chairman of the Federal Power Commission under President Franklin D. Roosevelt. His grandfather George D. Olds was the ninth president of Amherst College. Olds attended college at a number of schools including St. John's College, Annapolis, and the University of Wisconsin but received his undergraduate B.A. from Amherst College in 1947. His undergraduate years were interrupted by military service in the U.S. Army during the Second World War as part of the Persian Gulf Command. Following the war, Olds went on to get his Ph.D. at Harvard University in the Department of Social Relations under Professor Richard L. Solomon. His thesis was focused on motivation and led to his subsequent interest in the biological basis of motivation. Career Following his Ph.D., Olds went on to do postdoctoral work at McGill University under Donald Olding Hebb, where he made his most important discovery with Peter Milner. Subsequently, Olds moved to UCLA, where he took his first academic appointment at the Brain Research Institute. In 1957 Olds was appointed associate professor in the Department of Psychology at the University of Michigan. He left Michigan in 1969 to become the Bing Professor of Behavioral Biology at the California Institute of Technology where he continued his research and led a large lab until his death in a swimming accident in August 1976. His last work was aimed a
https://en.wikipedia.org/wiki/Powder-actuated%20tool
A powder-actuated tool (PAT, often generically called a Hilti gun or a Ramset gun after their manufacturing companies) is a type of nail gun used in construction and manufacturing to join materials to hard substrates such as steel and concrete. Known as direct fastening or explosive fastening, this technology is powered by a controlled explosion of a small chemical propellant charge, similar to the process that discharges a firearm. Features Powder-actuated tools are often used because of their speed of operation, compared to other processes such as drilling and then installing a threaded fastener. They can more easily be used in narrow or awkward locations, such as installing steel suspension clips into an overhead concrete ceiling. Powder-actuated tools are powered by small explosive cartridges, which are triggered when a firing pin strikes a primer, a sensitive explosive charge in the base of the cartridge. The primer ignites the main charge of powder, which burns rapidly. The hot gases released by the burning of the propellant rapidly build pressure within the cartridge, which pushes either directly on the head of the fastener, or on a piston, accelerating the fastener out of the muzzle. Powder-actuated tools come in high-velocity and low-velocity types. In high-velocity tools, the propellant charge acts directly on the fastener in a process similar to a firearm. Low-velocity tools introduce a piston into the chamber. The propellant acts on the piston, which then drives the fastener into the substrate. The piston is analogous to the bolt of a captive bolt pistol. A tool is considered low velocity if the average test velocity of the fastener does not exceed a specified limit, with no single test exceeding a higher limit; a high-velocity tool propels or discharges a stud, pin, or fastener in excess of that limit. High-velocity tools made or sold in the United States must comply with applicable safety standards under certain circumstances, with many being used in the shipbuilding and steel industries. Powder
https://en.wikipedia.org/wiki/Anonymous%20recursion
In computer science, anonymous recursion is recursion which does not explicitly call a function by name. This can be done either explicitly, by using a higher-order function – passing in a function as an argument and calling it – or implicitly, via reflection features which allow one to access certain functions depending on the current context, especially "the current function" or sometimes "the calling function of the current function". In programming practice, anonymous recursion is notably used in JavaScript, which provides reflection facilities to support it. In general programming practice, however, this is considered poor style, and recursion with named functions is suggested instead. Anonymous recursion via explicitly passing functions as arguments is possible in any language that supports functions as arguments, though this is rarely used in practice, as it is longer and less clear than explicitly recursing by name. In theoretical computer science, anonymous recursion is important, as it shows that one can implement recursion without requiring named functions. This is particularly important for the lambda calculus, which has anonymous unary functions, but is able to compute any recursive function. This anonymous recursion can be produced generically via fixed-point combinators. Use Anonymous recursion is primarily of use in allowing recursion for anonymous functions, particularly when they form closures or are used as callbacks, to avoid having to bind the name of the function. Anonymous recursion primarily consists of calling "the current function", which results in direct recursion. Anonymous indirect recursion is possible, such as by calling "the caller (the previous function)", or, more rarely, by going further up the call stack, and this can be chained to produce mutual recursion. The self-reference of "the current function" is a functional equivalent of the "this" keyword in object-oriented programming, allowing one to refer to the current context.
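Both routes to anonymous recursion described above can be shown in a few lines. This is a sketch in Python rather than the JavaScript the article mentions; the parameter names (`self`, `rec`) are illustrative, and `fix` is a Z-combinator variant suited to an eagerly evaluated language.

```python
# Route 1: anonymous recursion by explicitly passing the function to
# itself as an argument -- no name is ever bound to the factorial.
fact = (lambda f: lambda n: f(f, n))(
    lambda self, n: 1 if n == 0 else n * self(self, n - 1)
)
print(fact(5))  # 120

# Route 2: a generic fixed-point combinator; the eta-expansion
# (lambda v: ...) delays evaluation so Python does not loop forever.
def fix(g):
    return (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

fib = fix(lambda rec: lambda n: n if n < 2 else rec(n - 1) + rec(n - 2))
print(fib(10))  # 55
```

Route 1 is the "longer and less clear" explicit-argument style the article mentions; route 2 packages the same trick once, so each anonymous function only receives a handle to "itself".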
https://en.wikipedia.org/wiki/Domain%20%28software%20engineering%29
A domain is the targeted subject area of a computer program. It is a term used in software engineering. Formally it represents the target subject of a specific programming project, whether narrowly or broadly defined. For example, for a particular programming project that has as a goal the creation of a program for a particular hospital, that hospital would be the domain. Or, the project can be expanded in scope to include all hospitals as its domain. In software design, a domain is defined by delineating a set of common requirements, terminology, and functionality for any software program constructed to solve a problem in that area; this activity is known as domain engineering. The word domain is also taken as a synonym of application domain. Domain in the realm of software engineering commonly refers to the subject area on which the application is intended to apply. In other words, during application development, the domain is the "sphere of knowledge and activity around which the application logic revolves." —Andrew Powell-Morse Domain: A sphere of knowledge, influence, or activity. The subject area to which the user applies a program is the domain of the software. —Eric Evans See also Domain-driven design Domain-specific programming language Domain model Programming domain
https://en.wikipedia.org/wiki/IBM%20System/390
The IBM System/390 is a discontinued mainframe product family implementing ESA/390, the fifth generation of the System/360 instruction set architecture. The first computers to use the ESA/390 were the Enterprise System/9000 (ES/9000) family, which were introduced in 1990. These were followed by the 9672, Multiprise, and Integrated Server families of System/390 in 1994–1999, using CMOS microprocessors. The ESA/390 succeeded ESA/370, used in the Enhanced 3090 and 4381 "E" models, and the System/370 architecture last used in the IBM 9370 low-end mainframe. ESA/390 was succeeded by the 64-bit z/Architecture in 2000. History On September 5, 1990, IBM published a group of hardware and software announcements, two of which included overviews of three announcements: System/390 (S/390), as in 360 for 1960s, 370 for 1970s. Enterprise System/9000 (ES/9000), as in 360 for 1960s, 370 for 1970s. Enterprise Systems Architecture/390 (ESA/390) was IBM's last 31-bit-address/32-bit-data mainframe computing design, copied by Amdahl, Hitachi, and Fujitsu among other competitors. It was the successor of ESA/370 and, in turn, was succeeded by the 64-bit z/Architecture in 2000. Among other things, ESA/390 added fiber optics channels, known as Enterprise Systems Connection (ESCON) channels, to the parallel (Bus and Tag) channels of ESA/370. Despite the fact that IBM mentioned the 9000 family first in some of the day's announcements, it was clear "by the end of the day" that it was "for System/390," although it was a shortened name, S/390, that was placed on some of the actual "boxes" later shipped. The ES/9000 include rack-mounted models, free standing air cooled models and water cooled models. The low end models were substantially less expensive than the 3090 or 4381 previously needed to run MVS/ESA, and could also run VM/ESA and VSE/ESA, which IBM announced at the same time. IBM periodically added named features to ESA/390 in conjunction with new processors; the ESA/390 Principles o
https://en.wikipedia.org/wiki/Jarisch%E2%80%93Herxheimer%20reaction
A Jarisch–Herxheimer reaction is a sudden and typically transient reaction that may occur within 24 hours of being administered antibiotics for an infection by a spirochete, including syphilis, leptospirosis, Lyme disease, and relapsing fever. Signs and symptoms include fever, chills, shivers, feeling sick, headache, fast heart beat, low blood pressure, breathing fast, flushing of skin, muscle aches, and worsening of skin lesions. It may sometimes be mistaken as an allergy to the antibiotic. Jarisch–Herxheimer reactions can be life-threatening because they can cause a significant drop in blood pressure and cause acute end-organ injury, eventually leading to multi-organ failure. Signs and symptoms It comprises part of what is known as sepsis and occurs after initiation of antibacterials when treating Gram-negative infections such as Escherichia coli and louse- and tick-borne infections. It usually manifests in 1–3 hours after the first dose of antibiotics as fever, chills, rigor, hypotension, headache, tachycardia, hyperventilation, vasodilation with flushing, myalgia (muscle pain), exacerbation of skin lesions and anxiety. The intensity of the reaction indicates the severity of inflammation. Reaction commonly occurs within two hours of drug administration, but is usually self-limiting. Causes The Jarisch–Herxheimer reaction is traditionally associated with antimicrobial treatment of syphilis. The reaction is also seen in the other diseases caused by spirochetes: Lyme disease, relapsing fever, and leptospirosis. There have been case reports of the Jarisch–Herxheimer reaction accompanying treatment of other infections, including Q fever, bartonellosis, brucellosis, trichinellosis, and African trypanosomiasis. Pathophysiology Lipoproteins released from treatment of Treponema pallidum infections are believed to induce the Jarisch–Herxheimer reaction. The Herxheimer reaction has shown an increase in inflammatory cytokines during the period of exacerbation, includin
https://en.wikipedia.org/wiki/Five-bar%20linkage
In kinematics, a five-bar linkage is a mechanism with two degrees of freedom that is constructed from five links that are connected together in a closed chain. All links are connected to each other by five joints in series forming a loop. One of the links is the ground or base. This configuration is also called a pantograph, however, it is not to be confused with the parallelogram-copying linkage pantograph. The linkage can be a one-degree-of-freedom mechanism if two gears are attached to two links and are meshed together, forming a geared five-bar mechanism. Robotic configuration When controlled motors actuate the linkage, the whole system (a mechanism and its actuators) becomes a robot. This is usually done by placing two servomotors (to control the two degrees of freedom) at the joints A and B, controlling the angle of the links L2 and L5. L1 is the grounded link. In this configuration, the controlled endpoint or end-effector is the point D, where the objective is to control its x and y coordinates in the plane in which the linkage resides. The angles theta 1 and theta 2 can be calculated as a function of the x,y coordinates of point D using trigonometric functions. This robotic configuration is a parallel manipulator. It is a parallel configuration robot as it is composed of two controlled serial manipulators connected to the endpoint. Unlike a serial manipulator, this configuration has the advantage of having both motors grounded at the base link. As the motor can be quite massive, this significantly decreases the total moment of inertia of the linkage and improves backdrivability for haptic feedback applications. On the other hand, workspace reached by the endpoint is usually significantly smaller than that of a serial manipulator. Kinematics and dynamics Both the forward and inverse kinematics of this robotic configuration can be found in closed-form equations through geometric relationships. Different methods of finding both have been done by Campion and
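The closed-form inverse kinematics mentioned above amounts to solving two independent two-link triangles with the law of cosines, one per serial chain. A minimal sketch, with assumed link naming (L1 the ground A-B, L2/L3 the chain from motor A to endpoint D, L5/L4 the chain from motor B) and assumed elbow branches:

```python
import math

def two_link_angle(base, target, l_prox, l_dist, elbow=1):
    """Shoulder angle of a planar two-link chain from `base` to
    `target` via the law of cosines; elbow (+1/-1) picks the
    elbow-up or elbow-down solution branch."""
    dx, dy = target[0] - base[0], target[1] - base[1]
    d = math.hypot(dx, dy)
    cos_a = (l_prox ** 2 + d ** 2 - l_dist ** 2) / (2 * l_prox * d)
    a = math.acos(max(-1.0, min(1.0, cos_a)))  # clamped for safety
    return math.atan2(dy, dx) + elbow * a

def five_bar_ik(d_point, l1, l2, l3, l4, l5):
    """Motor angles theta1 (at A = origin) and theta2 (at B = (l1, 0))
    that place the end-effector D at d_point."""
    theta1 = two_link_angle((0.0, 0.0), d_point, l2, l3, elbow=1)
    theta2 = two_link_angle((l1, 0.0), d_point, l5, l4, elbow=-1)
    return theta1, theta2

t1, t2 = five_bar_ik((1.0, 1.2), l1=2.0, l2=1.0, l3=1.0, l4=1.0, l5=1.0)
print(math.degrees(t1), math.degrees(t2))
```

Because the two chains decouple, each motor angle has its own two-branch solution, giving up to four inverse-kinematic postures for one endpoint position.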
https://en.wikipedia.org/wiki/Myelomere
A myelomere is the segment of spinal cord to which a given pair of dorsal and ventral roots is attached. Because the adult spinal cord does not extend down as far as the vertebral column does, the lower myelomeres are not opposite their correspondingly numbered vertebrae. Thus myelomere S1 is opposite the T12 vertebra. External links Dartmouth School of Medicine Spinal cord
https://en.wikipedia.org/wiki/Test%20Anything%20Protocol
The Test Anything Protocol (TAP) is a protocol to allow communication between unit tests and a test harness. It allows individual tests (TAP producers) to communicate test results to the testing harness in a language-agnostic way. Originally developed for unit testing of the Perl interpreter in 1987, producers and parsers are now available for many development platforms. History TAP was created for the first version of the Perl programming language (released in 1987), as part of Perl's core test harness (t/TEST). The Test::Harness module was written by Tim Bunce and Andreas König to allow Perl module authors to take advantage of TAP. It became the de facto standard for Perl testing. Development of TAP, including standardization of the protocol, writing of test producers and consumers, and evangelizing the protocol is coordinated at the TestAnything website. As a protocol which is agnostic of programming language, TAP unit testing libraries expanded beyond their Perl roots and have been developed for various languages and systems such as PostgreSQL, MySQL, JavaScript and other implementations listed on the project site. A TAP C library is included as part of the FreeBSD Unix distribution and is used in the system's regression test suite. Specification A formal specification for this protocol exists in the TAP::Spec::Parser and TAP::Parser::Grammar modules. The behavior of the Test::Harness module is the de facto TAP standard implementation, along with a writeup of the specification on https://testanything.org. A project to produce an IETF standard for TAP was initiated in August 2008, at YAPC::Europe 2008. Usage examples Here's an example of TAP's general format:

 1..48
 ok 1 Description # Directive
 # Diagnostic
 ....
 ok 47 Description
 ok 48 Description

For example, a test file's output might look like:

 1..4
 ok 1 - Input file opened
 not ok 2 - First line of the input valid.
 More output from test 2. There can be arbitrary number of lines for any outp
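Because the format is just a plan line followed by numbered "ok"/"not ok" lines, a TAP producer is trivial to write in any language. A minimal sketch (the `tap_report` helper is hypothetical, not part of any TAP library):

```python
def tap_report(results):
    """Emit a minimal TAP stream for (description, passed) pairs:
    a '1..N' plan line, then one 'ok'/'not ok' line per test."""
    lines = ["1..%d" % len(results)]
    for i, (description, passed) in enumerate(results, start=1):
        status = "ok" if passed else "not ok"
        lines.append("%s %d - %s" % (status, i, description))
    return "\n".join(lines)

report = tap_report([
    ("Input file opened", True),
    ("First line of the input valid", False),
])
print(report)
```

Any TAP consumer (such as Perl's `prove`) could then count the `not ok` lines against the plan to decide whether the suite passed.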
https://en.wikipedia.org/wiki/Simon%20de%20la%20Loub%C3%A8re
Simon de la Loubère (21 April 1642 – 26 March 1729) was a French diplomat to Siam (Thailand), writer, mathematician and poet. He is credited with bringing back a document which introduced Europe to Indian astronomy, the "Siamese method" of making magic squares, as well as one of the earliest descriptions of parachutes. Mission to Siam Simon de la Loubère led an embassy to Siam (modern Thailand) in 1687 (the "La Loubère-Céberet mission"). The embassy, composed of five warships, arrived in Bangkok in October 1687 and was received by Ok-khun Chamnan. La Loubère returned to France on board the Gaillard on 3 January 1688, accompanied by the Jesuit Guy Tachard, and a Siamese embassy led by Ok-khun Chamnan. Upon his return, La Loubère wrote a description of his travels, as had been requested by Louis XIV, published under the title Du Royaume de Siam: "It was by the orders, which I had the honours to receive from the King upon leaving for my voyage to Siam, that I observed in that country, as exactly as possible, all that appeared to be the most singular." Loubère also brought back with him an obscure manuscript relating to the astronomical traditions of Siam, which he passed on to the famous French-Italian astronomer Jean Dominique Cassini. The Siamese Manuscript, as it is now called, intrigued Cassini enough so that he spent a couple of years deciphering its cryptic contents, determining on the way that the document originated in India. His explication of the manuscript appeared in La Loubère's book on the Kingdom of Siam in 1691, which laid the first foundation of European scholarship on Indian astronomy. French career La Loubère was elected member of the Académie française (1693–1729), where he received Seat 16, following the 1691 publication of his book Du Royaume de Siam. La Loubère was a friend of the German scientist Gottfried Leibniz, and once wrote that he had "no greater joy than (to discuss) philosophy and mathematics" with him (22 January 1681 correspondence
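The "Siamese method" La Loubère brought back constructs magic squares of any odd order: start with 1 in the middle of the top row, repeatedly move one cell up and to the right (wrapping around the edges), and drop down one cell instead whenever the target cell is already filled. A short sketch of that rule:

```python
def siamese_magic_square(n):
    """Odd-order magic square via the Siamese (La Loubère) method."""
    if n % 2 == 0:
        raise ValueError("the Siamese method needs an odd order")
    square = [[0] * n for _ in range(n)]
    row, col = 0, n // 2          # start in the middle of the top row
    for k in range(1, n * n + 1):
        square[row][col] = k
        r, c = (row - 1) % n, (col + 1) % n   # up and to the right
        if square[r][c]:                       # occupied: step down
            r, c = (row + 1) % n, col
        row, col = r, c
    return square

for line in siamese_magic_square(3):
    print(line)
# [8, 1, 6]
# [3, 5, 7]
# [4, 9, 2]
```

Every row, column and main diagonal of the order-n square sums to the magic constant n(n² + 1)/2, i.e. 15 for n = 3 and 65 for n = 5.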
https://en.wikipedia.org/wiki/Organic%20computing
Organic computing is computing that behaves and interacts with humans in an organic manner. The term "organic" is used to describe the system's behavior, and does not imply that they are constructed from organic materials. It is based on the insight that we will soon be surrounded by large collections of autonomous systems, which are equipped with sensors and actuators, aware of their environment, communicate freely, and organize themselves in order to perform the actions and services that seem to be required. The goal is to construct such systems as robust, safe, flexible, and trustworthy as possible. In particular, a strong orientation towards human needs as opposed to a pure implementation of the technologically possible seems absolutely central. In order to achieve these goals, our technical systems will have to act more independently, flexibly, and autonomously, i.e. they will have to exhibit lifelike properties. We call such systems "organic". Hence, an "Organic Computing System" is a technical system which adapts dynamically to exogenous and endogenous change. It is characterized by the properties of self-organization, self-configuration, self-optimization, self-healing, self-protection, self-explaining, and context awareness. It can be seen as an extension of the Autonomic computing vision of IBM. In a variety of research projects the priority research program SPP 1183 of the German Research Foundation (DFG) addresses fundamental challenges in the design of Organic Computing systems; its objective is a deeper understanding of emergent global behavior in self-organizing systems and the design of specific concepts and tools to support the construction of Organic Computing systems for technical applications. See also Biologically inspired computing Autonomic computing
https://en.wikipedia.org/wiki/Multicanonical%20ensemble
In statistics and physics, multicanonical ensemble (also called multicanonical sampling or flat histogram) is a Markov chain Monte Carlo sampling technique that uses the Metropolis–Hastings algorithm to compute integrals where the integrand has a rough landscape with multiple local minima. It samples states according to the inverse of the density of states, which has to be known a priori or be computed using other techniques like the Wang and Landau algorithm. Multicanonical sampling is an important technique for spin systems like the Ising model or spin glasses. Motivation In systems with a large number of degrees of freedom, like spin systems, Monte Carlo integration is required. In this integration, importance sampling, and in particular the Metropolis algorithm, is a very important technique. However, the Metropolis algorithm samples states according to the Boltzmann weight exp(−βE), where β is the inverse of the temperature. This means that an energy barrier of height ΔE on the energy spectrum is exponentially difficult, by a factor exp(βΔE), to overcome. Systems with multiple local energy minima like the Potts model become hard to sample as the algorithm gets stuck in the system's local minima. This motivates other approaches, namely, other sampling distributions. Overview Multicanonical ensemble uses the Metropolis–Hastings algorithm with a sampling distribution given by the inverse of the density of states of the system, contrary to the sampling distribution of the Metropolis algorithm. With this choice, on average, the number of states sampled at each energy is constant, i.e. it is a simulation with a "flat histogram" on energy. This leads to an algorithm for which the energy barriers are no longer difficult to overcome. Another advantage over the Metropolis algorithm is that the sampling is independent of the temperature of the system, which means that one simulation allows the estimation of thermodynamical variables for all temperatures (thus the name "multicanonical": several temperatures). This is a great i
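The flat-histogram property is easy to verify on a toy system where the density of states is known exactly, as the method requires. A sketch (the system and parameters are chosen for illustration): N non-interacting spins whose energy is the number of up spins, so g(E) = C(N, E), sampled with multicanonical weights w(E) = 1/g(E).

```python
import random
from math import comb

N = 6
g = [comb(N, e) for e in range(N + 1)]   # exact density of states

def energy(state):
    return bin(state).count("1")          # popcount of bit-encoded spins

rng = random.Random(0)
state, hist = 0, [0] * (N + 1)
for _ in range(200_000):
    new = state ^ (1 << rng.randrange(N))         # flip one random spin
    # Multicanonical weight w(E) = 1/g(E): Metropolis acceptance
    # min(1, w(E_new)/w(E_old)) = min(1, g(E_old)/g(E_new)).
    if rng.random() < g[energy(state)] / g[energy(new)]:
        state = new
    hist[energy(state)] += 1

print(hist)   # roughly equal counts over all 7 energies: a flat histogram
```

The stationary probability of a state x is proportional to 1/g(E(x)), so the probability of each energy level is g(E)/g(E) = constant, which is exactly the flat histogram the name describes; a plain Metropolis run at fixed β would instead pile up in the most probable energies.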
https://en.wikipedia.org/wiki/Frond%20dimorphism
Frond dimorphism refers to a difference in ferns between the fertile and sterile fronds. Since ferns, unlike flowering plants, bear spores on the leaf blade itself, this may affect the form of the frond itself. In some species of ferns, there is virtually no difference between the fertile and sterile fronds, such as in the genus Dryopteris, other than the mere presence of the sori, or fruit-dots, on the back of the fronds. Some other species, such as Polystichum acrostichoides (Christmas fern), or some ferns of the genus Osmunda, feature dimorphism on a portion of the frond only. Others, such as some species of Blechnum and Woodwardia, have fertile fronds that are markedly taller than the sterile. Still others, such as Osmunda cinnamomea (Cinnamon fern), or plants of the family Onocleaceae, have fertile fronds that are completely different from the sterile. Only members of the Onocleaceae and Blechnaceae exhibit a propensity towards dimorphy, while no member of the Athyriaceae is strongly dimorphic, and only some representatives of the Thelypteridaceae have evolved the condition, suggesting a possible close relationship between Onocleaceae and Blechnaceae. Its importance has been disputed: Copeland, for example, considered it taxonomically important, whereas Tryon and Tryon, and Kramer, all stated that the importance can only be judged in relation to other characteristics.
https://en.wikipedia.org/wiki/WinPT
WinPT or Windows Privacy Tray is a frontend to the GNU Privacy Guard (GnuPG) for the Windows platform. Released under the GPL, it is compatible with OpenPGP-compliant software. WinPT represents a collection of user interface tools designed to ease the use of asymmetric encryption software. Based on GnuPG, and OpenPGP-compatible, WinPT is intended for Windows users to use for everyday message signing, verification, encryption and general key management. If installation defaults are used, WinPT will then reside in the task bar tray, and on the right-click menu within Windows Explorer. A Start menu item includes launchers for a GPG commandline (console), WinPT tray, and documentation. The latest version (1.5.3 Beta) is only compatible with GnuPG 1.4.x and not with the most recent version 2.0.x. WinPT is included in the GnuPT installer (that includes the latest version of GnuPG 1.4.x, WinPT 1.4.3 stable and WinPT latest beta.) History On April 4, 2007, the project's author, Timo Schulz, announced that development on WinPT had been suspended for an indefinite period. However, on October 27, 2008, Schulz announced a new version 1.30, described as a bug fix release. On December 14, 2009, Timo Schulz announced that WinPT was discontinued due to lack of resources. On January 19, 2012, Timo Schulz announced work on a new release and asked the community to contact him in regards to further development past future revision 1.5 if they are interested. On October 21, 2012, Timo Schulz announced that the project had a new dedicated website. See also GNU Privacy Guard Gpg4win PGP Public-key cryptography Cryptography
https://en.wikipedia.org/wiki/Iproute2
iproute2 is a collection of userspace utilities for controlling and monitoring various aspects of networking in the Linux kernel, including routing, network interfaces, tunnels, traffic control, and network-related device drivers. iproute2 is an open-source project released under the terms of version 2 of the GNU General Public License. Its development is closely tied to the development of networking components of the Linux kernel. iproute2 is maintained by Stephen Hemminger and David Ahern. The original author, Alexey Kuznetsov, was responsible for the quality of service (QoS) implementation in the Linux kernel. The iproute2 collection contains the following command-line utilities: arpd, bridge, ctstat, dcb, devlink, ip, lnstat, nstat, rdma, routef, routel, rtacct, rtmon, rtstat, ss, tc, tipc and vdpa. tc is used for traffic control. iproute2 utilities communicate with the Linux kernel using the netlink protocol. Some of the iproute2 utilities are often recommended over now-obsolete net-tools utilities that provide the same functionality; commonly cited replacements include ifconfig → ip addr and ip link, route → ip route, arp → ip neigh, and netstat → ss. See also BusyBox ethtool TIPC
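The net-tools to iproute2 correspondence described above can be sketched as a small lookup table. The mapping below lists commonly cited equivalents and is illustrative only (the helper name is an assumption, and exact option syntax differs between the old and new tools):

```python
# Commonly cited net-tools commands and their iproute2 equivalents.
# Illustrative mapping only, not an exhaustive or authoritative table.
NET_TOOLS_TO_IPROUTE2 = {
    "ifconfig": "ip addr / ip link",
    "route": "ip route",
    "arp": "ip neigh",
    "netstat": "ss",
    "iptunnel": "ip tunnel",
}

def modern_equivalent(cmd: str) -> str:
    """Return the iproute2 replacement for an obsolete net-tools command."""
    return NET_TOOLS_TO_IPROUTE2.get(cmd, "no direct iproute2 equivalent")

print(modern_equivalent("ifconfig"))  # → ip addr / ip link
```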
https://en.wikipedia.org/wiki/Public%20Population%20Project%20in%20Genomics
P3G (Public Population Project in Genomics and Society) is a not-for-profit international consortium dedicated to facilitating collaboration between researchers and biobanks working in the area of human population genomics. P3G is member-based and composed of experts from the different disciplines in and related to genomics, including epidemiology, law, ethics, technology, biomolecular science, and others. P3G and its members are committed to a philosophy of information sharing with the goal of supporting researchers working in areas that will improve the health of people around the world. The Organization P3G is a not-for-profit organization with members from over 40 countries. Membership falls under two categories: institutional and individual. Institutional members have the right to elect and vote for the board of directors of P3G, and all members are eligible for office. P3G is headquartered in the McGill University and Genome Quebec Innovation Centre, Montreal (Canada). Scientific Activities Online P3G works with its members and other experts to develop tools, methods and resources designed to optimize and harmonize the infrastructures of biobanks and research projects in the areas of population genomics, epidemiology and the environment. The P3G site is completely free and accessible, and all documents, web sites and tools included on the site are non-commercial and open source. Such tools include: Toolkit - provides the epidemiological, ethical, statistical and IT instruments necessary for the access and use of data. The aim of this platform is to create a one-stop location and open-access environment where key documents are accessible to the research community. The Toolkit currently contains over 80 tools across five categories (Epidemiology and Biostatistics, Sample collection and processing, Data collection and processing, GE3LS, and other tools). Lifespan - an open-access web platform offering users a step-by-step approach for the development
https://en.wikipedia.org/wiki/Life%20on%20Titan
Whether there is life on Titan, the largest moon of Saturn, is currently an open question and a topic of scientific assessment and research. Titan is far colder than Earth, but of all the places in the Solar System, Titan is the only place besides Earth known to have liquids in the form of rivers, lakes, and seas on its surface. Its thick atmosphere is chemically active and rich in carbon compounds. On the surface there are small and large bodies of both liquid methane and ethane, and it is likely that there is a layer of liquid water under its ice shell. Some scientists speculate that these liquid mixes may provide prebiotic chemistry for living cells different from those on Earth. In June 2010, scientists analyzing data from the Cassini–Huygens mission reported anomalies in the atmosphere near the surface which could be consistent with the presence of methane-producing organisms, but may alternatively be due to non-living chemical or meteorological processes. The Cassini–Huygens mission was not equipped to look directly for micro-organisms or to provide a thorough inventory of complex organic compounds. Chemistry Titan's consideration as an environment for the study of prebiotic chemistry or potentially exotic life stems in large part from the diversity of the organic chemistry that occurs in its atmosphere, driven by photochemical reactions in its outer layers. The following chemicals have been detected in Titan's upper atmosphere by Cassini's mass spectrometer: As mass spectrometry identifies the atomic mass of a compound but not its structure, additional research is required to identify the exact compound that has been detected. Where the compounds have been identified in the literature, their chemical formula has been replaced by their name above. The figures in Magee (2009) involve corrections for high-pressure background. Other compounds believed to be indicated by the data and associated models include ammonia, polyynes, amines, ethylenimine, deuter
https://en.wikipedia.org/wiki/Slotket
In computer hardware terminology, slotkets, also known as slockets (both short for slot-to-socket adapter), are adapters that allow socket-based microprocessors to be used on slot-based motherboards. Slotkets were first created to allow the use of Socket 8 Pentium Pro processors on Slot 1 motherboards. Later, they became more popular for inserting Socket 370 Intel Celerons into Slot 1 based motherboards. This lowered costs for computer builders, especially with dual-processor machines. High-end motherboards accepting two Slot 1 processors (usually Pentium II) were widely available, but double-socketed motherboards for the less expensive Socket 370 Celerons were not. Slotkets remained popular in the transition period from Slot- to Socket-based Pentium III processors by allowing CPU upgrades in existing Slot 1 motherboards. Slotkets were never introduced to take advantage of the AMD Athlon processors' transition from the Slot A form factor to the Socket A form factor. Adapters that go the other way around (from socket-based motherboards to slot-based CPUs) have never been introduced, because Socket 8 based motherboards do not support the higher clock frequencies of Slot 1 based processors. Today, slotkets have largely disappeared, as Intel and AMD have not manufactured CPUs in slot form factors since 1999. See also CPU socket External links How to Install a Slocket CPU sockets
https://en.wikipedia.org/wiki/GPRS%20Tunnelling%20Protocol
GPRS Tunnelling Protocol (GTP) is a group of IP-based communications protocols used to carry general packet radio service (GPRS) within GSM, UMTS, LTE and 5G NR radio networks. In 3GPP architectures, GTP and Proxy Mobile IPv6 based interfaces are specified on various interface points. GTP can be decomposed into separate protocols: GTP-C, GTP-U and GTP'. GTP-C is used within the GPRS core network for signaling between gateway GPRS support nodes (GGSN) and serving GPRS support nodes (SGSN). This allows the SGSN to activate a session on a user's behalf (PDP context activation), to deactivate the same session, to adjust quality-of-service parameters, or to update a session for a subscriber who has just arrived from another SGSN. GTP-U is used for carrying user data within the GPRS core network and between the radio access network and the core network. The user data transported can be packets in any of IPv4, IPv6, or PPP formats. GTP' (GTP prime) uses the same message structure as GTP-C and GTP-U, but has an independent function. It can be used for carrying charging data from the charging data function (CDF) of the GSM or UMTS network to the charging gateway function (CGF). In most cases, this means carrying charging data from many individual network elements such as the GGSNs to a centralized computer that delivers the charging data more conveniently to the network operator's billing center. Different GTP variants are implemented by RNCs, SGSNs, GGSNs and CGFs within 3GPP networks. GPRS mobile stations (MSs) are connected to an SGSN without being aware of GTP. GTP can be used with UDP or TCP. UDP is either recommended or mandatory, except for tunnelling X.25 in version 0. GTP version 1 is used only on UDP. General features All variants of GTP have certain features in common. The structure of the messages is the same, with a GTP header following the UDP/TCP header. Header GTP version 1 GTPv1 headers contain the following fields: Version: a 3-bit field. For GTPv1, this
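As a sketch of the common message structure described above, the snippet below decodes the mandatory 8-byte part of a GTPv1 header: version (3 bits), protocol type, the E/S/PN flag bits, message type, length, and TEID. The function name and the sample packet are illustrative assumptions, not part of the specification text:

```python
import struct

def parse_gtpv1_header(data: bytes) -> dict:
    # First byte packs: version (3 bits), protocol type (1 bit),
    # a reserved bit, then the E, S and PN flags (1 bit each).
    flags, msg_type, length = struct.unpack_from("!BBH", data, 0)
    teid, = struct.unpack_from("!I", data, 4)  # tunnel endpoint identifier
    return {
        "version": flags >> 5,
        "protocol_type": (flags >> 4) & 1,
        "ext_header": (flags >> 2) & 1,
        "seq_number_flag": (flags >> 1) & 1,
        "npdu_number_flag": flags & 1,
        "message_type": msg_type,
        "length": length,
        "teid": teid,
    }

# A hypothetical GTP-U G-PDU header (message type 0xFF), TEID 0x11223344:
hdr = bytes([0x30, 0xFF, 0x00, 0x00, 0x11, 0x22, 0x33, 0x44])
print(parse_gtpv1_header(hdr))
```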
https://en.wikipedia.org/wiki/Maximum%20entropy%20probability%20distribution
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time. Definition of entropy and differential entropy If X is a discrete random variable with distribution given by P(X = x_k) = p_k for k = 1, 2, …, then the entropy of X is defined as H(X) = −Σ_k p_k log p_k. If X is a continuous random variable with probability density p(x), then the differential entropy of X is defined as h(X) = −∫ p(x) log p(x) dx. The quantity p(x) log p(x) is understood to be zero whenever p(x) = 0. This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H(X) will also maximize the more general forms. The base of the logarithm is not important as long as the same one is used consistently: a change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy. The choice of the measure is however crucial in determining the entropy and the resulting maximum entropy distribution, even though the usual recourse to the Lebesgue measure is often defended as "natural". Distributions with measured constants Many statistical distributions of applicable interest are those for which the
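As a quick numeric illustration of the discrete entropy definition H(X) = −Σ p_k log p_k, the sketch below compares a uniform and a skewed distribution over four outcomes; the uniform one attains the maximum value log 4 nats, consistent with the principle of maximum entropy:

```python
import math

def entropy(p):
    # Shannon entropy in nats; the term 0 * log 0 is taken as 0.
    return -sum(x * math.log(x) for x in p if x > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))  # log 4 ≈ 1.386, the maximum over 4 outcomes
print(entropy(skewed))   # strictly smaller
```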
https://en.wikipedia.org/wiki/Q-slope
The Q-slope method for rock slope engineering and rock mass classification was developed by Barton and Bar. It expresses the quality of the rock mass for slope stability using the Q-slope value, from which long-term stable, reinforcement-free slope angles can be derived. The Q-slope value can be determined with: Q-slope = (RQD/Jn) × (Jr/Ja)O × (Jwice/SRFslope). Q-slope utilizes similar parameters to the Q-system, which has been used for over 40 years in the design of ground support for tunnels and underground excavations. The first four parameters, RQD (rock quality designation), Jn (joint set number), Jr (joint roughness number) and Ja (joint alteration number), are the same as in the Q-system. However, the frictional resistance pair Jr and Ja can apply, when needed, to the individual sides of potentially unstable wedges. Simply applied orientation factors (O), like (Jr/Ja)1 × 0.7 for set J1 and (Jr/Ja)2 × 0.9 for set J2, provide estimates of overall whole-wedge frictional resistance reduction, if appropriate. The Q-system term Jw is replaced with Jwice, which takes into account a wider range of environmental conditions appropriate to rock slopes, which are exposed to the environment indefinitely. The conditions include the extremes of erosive intense rainfall and ice wedging, as may seasonally occur at opposite ends of the rock-type and regional spectrum. There are also slope-relevant SRF (strength reduction factor) categories. Multiplication of these terms results in the Q-slope value, which can range from 0.001 (exceptionally poor) to 1000 (exceptionally good) for different rock masses. A simple formula for the steepest slope angle (β), in degrees, not requiring reinforcement or support is given by: β = 20 log10(Q-slope) + 65. Q-slope is intended for use in reinforcement-free site access road cuts, roads or railway cuttings, or individual benches in open cast mines. It is based on over 500 case studies in slopes ranging from 35 to 90 degrees in fresh hard rock slopes as well as weak, weathered and saprolitic rock slopes. Q-slope has also been a
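The two relations above (the Q-slope product and the steepest-angle formula β = 20 log10(Q-slope) + 65°) can be sketched numerically. The parameter values below are invented for illustration and do not come from any case study:

```python
import math

def q_slope(rqd, jn, jr, ja, o_factor, jwice, srf):
    # Q-slope = (RQD / Jn) * (Jr / Ja, with orientation O-factor) * (Jwice / SRFslope)
    return (rqd / jn) * (jr / ja) * o_factor * (jwice / srf)

def steepest_angle_deg(q):
    # Steepest reinforcement-free slope angle, in degrees.
    return 20 * math.log10(q) + 65

# Hypothetical input values for a good-quality rock mass:
q = q_slope(rqd=90, jn=9, jr=3, ja=2, o_factor=1.0, jwice=1.0, srf=1.0)
print(round(q, 2), round(steepest_angle_deg(q), 1))  # Q-slope = 15.0 → β ≈ 88.5
```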
https://en.wikipedia.org/wiki/Rigidity%20theory%20%28physics%29
Rigidity theory, or topological constraint theory, is a tool for predicting properties of complex networks (such as glasses) based on their composition. It was introduced by James Charles Phillips in 1979 and 1981, and refined by Michael Thorpe in 1983. Inspired by the study of the stability of mechanical trusses as pioneered by James Clerk Maxwell, and by the seminal work on glass structure done by William Houlder Zachariasen, this theory reduces complex molecular networks to nodes (atoms, molecules, proteins, etc.) constrained by rods (chemical constraints), thus filtering out microscopic details that ultimately don't affect macroscopic properties. An equivalent theory was developed by P.K. Gupta and A.R. Cooper in 1990, where rather than nodes representing atoms, they represented unit polytopes. An example of this would be the SiO4 tetrahedra in pure glassy silica. This style of analysis has applications in biology and chemistry, such as understanding adaptability in protein-protein interaction networks. Rigidity theory applied to the molecular networks arising from phenotypical expression of certain diseases may provide insights regarding their structure and function. In molecular networks, atoms can be constrained by radial 2-body bond-stretching constraints, which keep interatomic distances fixed, and angular 3-body bond-bending constraints, which keep angles fixed around their average values. As stated by Maxwell's criterion, a mechanical truss is isostatic when the number of constraints equals the number of degrees of freedom of the nodes. In this case, the truss is optimally constrained, being rigid but free of stress. This criterion has been applied by Phillips to molecular networks, which are called flexible, stressed-rigid or isostatic when the number of constraints per atom is respectively lower, higher or equal to 3, the number of degrees of freedom per atom in a three-dimensional system. The same condition applies to random packing of spheres, which a
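Maxwell's counting for the constraints described above can be made concrete: an atom of coordination number r contributes r/2 bond-stretching constraints (each bond is shared by two atoms) and 2r − 3 bond-bending constraints, and setting the total equal to the 3 degrees of freedom per atom reproduces the well-known isostatic mean coordination ⟨r⟩ = 2.4. A small sketch under those standard counting rules:

```python
def constraints_per_atom(r: float) -> float:
    # r/2 bond-stretching constraints plus (2r - 3) bond-bending constraints.
    return r / 2 + (2 * r - 3)

def phase(r: float) -> str:
    # Compare the constraint count with 3 degrees of freedom per atom.
    n_c = constraints_per_atom(r)
    if abs(n_c - 3) < 1e-9:
        return "isostatic"
    return "flexible" if n_c < 3 else "stressed-rigid"

for r in (2.0, 2.4, 3.0):
    print(r, round(constraints_per_atom(r), 3), phase(r))
```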
https://en.wikipedia.org/wiki/Mode%20setting
Mode setting is a software operation that activates a display mode (screen resolution, color depth, and refresh rate) for a computer's display controller by using VESA BIOS Extensions or UEFI graphics extensions (on more modern computers). In kernel mode-setting (KMS), the display mode is set by the kernel; in user-space mode-setting (UMS), the display mode is set by a user-space process. Kernel mode-setting is more flexible and allows displaying of an error in the case of a fatal system error in the kernel, even when using a user-space display server. User-space mode setting would require superuser privileges for direct hardware access, so kernel-based mode setting removes that requirement from the user-space graphics server. Implementation Microsoft Windows Microsoft Windows versions that are NT-based use kernel mode setting. The kernel error display made possible by kernel mode setting is officially called "bug check", but more commonly known as the Blue Screen of Death. Linux The Linux kernel gained the prerequisite for kernel-based mode setting by accepting Intel GEM in version 2.6.28, released in December 2008. This was expected to be replaced by Tungsten Graphics' TTM (Translation Table Maps) memory manager, which supports the GEM API. TTM was developed for the free and open-source drivers for Radeon and S3 Graphics graphics chipsets (see Free and open-source graphics device driver). Support for Intel GMA graphics chipsets was accepted in version 2.6.29, released on March 23, 2009. Support for pre-R600 ATI Radeon graphics cards was accepted in version 2.6.31, released on September 9, 2009. Support for R600 and R700 was in development within DRM and was merged in version 2.6.32. Support for Evergreen (R800) was merged in version 2.6.34. As Nvidia did not release all the needed documentation for its graphics chips, development proceeded under the nouveau project, which uses reverse engineering to build a working open-source driver for Nvidia cards. Nouveau was accepted in version 2.6.33 of the kerne
https://en.wikipedia.org/wiki/Elanor%20Huntington
Elanor H. Huntington is Executive Director of Digital, National Facilities & Collections at the Commonwealth Scientific and Industrial Research Organisation and a Professor of Quantum Cybernetics at the Australian National University. She led a research program in the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology. Early life and education Huntington studied physics at the Australian National University and graduated in 1996 with a University Medal. She decided that she enjoyed using science to help others, and switched to engineering. She earned her PhD in 1999 working in experimental quantum optics. Huntington joined the Australian Defence Science and Technology Organisation after graduating, where she worked for 18 months before joining the University of New South Wales Canberra at the Australian Defence Force Academy. Research Huntington specialises in high speed measurements and the generation of non-classical states. She works on quantum computation, creating optical microchips that can detect, generate and manipulate states of light. She is interested in the intersection of quantum theory and applications. She joined the University of New South Wales in 2000. She has worked in the School of Engineering and Information Technology at the Australian Defence Force Academy at University of New South Wales, where she was made Head of the School of Engineering and IT in 2010. She leads a research program in the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology. In 2011, Huntington and collaborators made a major breakthrough in quantum computation, by demonstrating that it was possible to teleport quantum non-Gaussian beams of light on a quantum superposition. These days, she makes use of waveguide technology, coupled with systems engineering, to design and build quantum technologies. She was appointed Dean of the Australian National University College of
https://en.wikipedia.org/wiki/Australo-Melanesian
Australo-Melanesians (also known as Australasians or the Australomelanesoid, Australoid or Australioid race) is an outdated historical grouping of various people indigenous to Melanesia and Australia. Controversially, some groups found in parts of Southeast Asia and South Asia were also sometimes included. While most authors included Papuans, Aboriginal Australians and Melanesians (mainly from Fiji, New Caledonia, Solomon Islands and Vanuatu), there was controversy about the inclusion of the various Southeast Asian populations grouped as "Negrito", or a number of dark-skinned tribal populations of the Indian subcontinent. The concept of dividing humankind into three, four or five races (often called Caucasoid, Mongoloid, Negroid, and Australoid) was introduced in the 18th century and further developed by Western scholars in the context of "racist ideologies" during the age of colonialism. With the rise of modern genetics, the concept of distinct human races in a biological sense has become obsolete. In 2019, the American Association of Biological Anthropologists stated: "The belief in “races” as natural aspects of human biology, and the structures of inequality (racism) that emerge from such beliefs, are among the most damaging elements in the human experience both today and in the past." Terminological history The term "Australoid" was coined in ethnology in the mid 19th century, describing tribes or populations "of the type of native Australians". The term "Australioid race" was introduced by Thomas Huxley in 1870 to refer to certain peoples indigenous to South and Southeast Asia and Oceania. In physical anthropology, Australoid is used for morphological features characteristic of Aboriginal Australians by Daniel John Cunningham in his Text-book of Anatomy (1902). An Australioid (sic, with an additional -i-) racial group was first proposed by Thomas Huxley in an essay On the Geographical Distribution of the Chief Modifications of Mankind (1870), in which he
https://en.wikipedia.org/wiki/Garmin%20Forerunner
The Garmin Forerunner series is a selection of sports watches produced by Garmin. Most models use the Global Positioning System (GPS), and are targeted at road runners and triathletes. Forerunner series watches are designed to measure distance, speed, heart rate (optional), time, altitude, and pace. Models The Forerunner series consists of the 101, 201, 301, 205, 305, 50, 405, 60, 405CX, 310XT, 110, 210, 410, 610, 910XT, 70, 10, 220, 620, 15, 920XT, 225, 25, 230, 235, 630, 735XT, 35, 935, 30, 645, 645 Music, 45, 45S, 245, 245 Music, 945, 745, 55, 945 LTE, 255, 255 Music, 955, 955 Solar, 265, 965 (listed in chronological order by release date). All models except the 101 include a way to upload training data to a personal computer and training software. Garmin registered the name "Forerunner" with the United States Patent and Trademark Office in August 2001 but released the first watches—the 101, 201, and 301—in 2003. In 2006, the improved 205 and 305 appeared. These models are smaller than the first generation and feature a more sensitive SiRFstarIII GPS receiver chip. In late 2007, the Forerunner 50 was introduced. As opposed to GPS, this model paired with a foot pod to measure displacement. The Forerunner 50 also came packaged with a USB stick that allowed training data to be transferred wirelessly to one's PC. This feature has since become a staple of Garmin's more full-featured sport watches. The Forerunner 405 was introduced in 2008 and is significantly smaller than its predecessors, only slightly outsizing a typical wristwatch. The 405 also featured improved satellite discovery and connection. In 2009, Garmin produced three new models: the Forerunner 60 (an evolution of the Forerunner 50), the Forerunner 405CX (405 chassis), and the Forerunner 310XT (an evolution of the 305 chassis). New features included additional battery life and vibration alerts on the 310XT and advanced calorie consumption modelling on all watches. The new calorie consumption modell
https://en.wikipedia.org/wiki/Labile%20cell
In cellular biology, labile cells are cells that continuously multiply and divide throughout life. Labile cells replace the cells that are lost from the body. When injured, labile cells are repaired rapidly due to an aggressive TR response. This continual division of labile cells allows them to reproduce new stem cells and replace functional cells that are lost in the body. Functional cells may be lost through necrosis, which is the premature death of cells caused by environmental disturbances, such as diseases or injuries. Functional cells may also need to be replaced after undergoing apoptosis, which is the programmed death of cells that occurs normally as part of an organism's development. Labile cells continually regenerate by undergoing mitosis and are one of three types of cells that are involved in cell division, classified by their regenerative capacity. The other two cell types include stable cells and permanent cells. Each of these three cell types respond to injuries to their corresponding tissues differently. Stable cells, unlike labile cells, are typically not dividing and only do so when an injury occurs. Permanent cells are not capable of division after maturing. Some examples of labile cells, which act as stem cells, include skin cells, such as the epidermis, the epithelia of ducts, hematopoietic stem cells, cells within the gastrointestinal tract, and some cells found within bone marrow. Labile cells exhibit a very short G1 phase and never enter G0 phase (the resting phase), as they are continually proliferating throughout their life. Hazards Cells that are constantly dividing have a higher risk of dividing uncontrollably and becoming malignant, or cancerous. Muscle tissue does not consist of constantly dividing cells, which is likely why cancer of the muscle is not nearly as common as, for example, cancer of the skin. In addition, cytotoxic drugs used in chemotherapy target dividing cells and inhibit their proliferation. The cytotoxic dru
https://en.wikipedia.org/wiki/Digital%20phenotyping
Digital phenotyping is a multidisciplinary field of science, first defined in a May 2016 paper in JMIR Mental Health authored by John Torous, Mathew V Kiang, Jeanette Lorme, and Jukka-Pekka Onnela as the "moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices." The data can be divided into two subgroups, called active data and passive data, where the former refers to data that requires active input from the users to be generated, whereas passive data, such as sensor data and phone usage patterns, are collected without requiring any active participation from the user. Smartphones are well suited for digital phenotyping given their widespread adoption and ownership, the extent to which users engage with the devices, and richness of data that may be collected from them. Smartphone data can be used to study behavioral patterns, social interactions, physical mobility, gross motor activity, and speech production, among others. Smartphone ownership has been in steady rise globally over the past few years. For example, in the U.S., smartphone ownership among adults increased from 35% in 2011 to 64% in 2015, and in 2017 an estimated 95% of Americans own a cellphone of some kind and 77% own a smartphone. The use of passive data collection from smartphone devices can provide granular information relevant to psychiatric, aging, frailty, and other illness phenotypes. Types of relevant passive data include GPS data to monitor spatial location, accelerometer data to record movement and gross motor activity, and call and messaging logs to document social engagement with others. Passively collected data may also support clinical differentiation between diagnostic groups and monitoring mental health symptoms. The related term 'digital phenotype,' was introduced in Nature Biotechnology by Sachin H. Jain and John Brownstein in 2015. Research Platforms and Commercialization One of the first implementations of digit
https://en.wikipedia.org/wiki/Zinc%20finger%20protein%20674
Zinc finger protein 674 is a protein that in humans is encoded by the ZNF674 gene. Function This gene encodes a zinc finger protein with an N-terminal Kruppel-associated box-containing (KRAB) domain and 11 Kruppel-type C2H2 zinc finger domains. Like other zinc finger proteins, this gene may function as a transcription factor. This gene resides on an area of chromosome X that has been implicated in nonsyndromic X-linked mental retardation. Alternative splicing results in multiple transcript variants encoding different isoforms.
https://en.wikipedia.org/wiki/Construction%20of%20an%20irreducible%20Markov%20chain%20in%20the%20Ising%20model
Construction of an irreducible Markov chain in the Ising model is a mathematical method to prove results. In applied mathematics, the construction of an irreducible Markov chain in the Ising model is the first step in overcoming a computational obstruction encountered when a Markov chain Monte Carlo method is used to get an exact goodness-of-fit test for the finite Ising model. The Ising model is used to study magnetic phase transitions and is one of the models of interacting systems. Markov bases Every integer vector z can be uniquely written as z = z⁺ − z⁻, where z⁺ and z⁻ are nonnegative vectors. A Markov basis for the Ising model is a set of integer vectors z_1, ..., z_k such that: (i) every basis element, applied as a move, must preserve the sufficient statistics of the model; (ii) for any two configurations y and y′ with the same sufficient statistics, there always exist moves z_{i_1}, ..., z_{i_k} and signs ε_1, ..., ε_k ∈ {−1, +1} that satisfy y′ = y + Σ ε_j z_{i_j}, with every intermediate configuration y + Σ_{j≤l} ε_j z_{i_j} remaining nonnegative for l = 1,...,k. At each step of the chain, one element of the basis is applied as a move. Then, by using the Metropolis–Hastings algorithm, we can get an aperiodic, reversible, and irreducible Markov chain. The paper 'Algebraic algorithms for sampling from conditional distributions,' published by Persi Diaconis and Bernd Sturmfels in 1998, shows that a Markov basis can be defined algebraically for the Ising model. By the paper, any generating set for the associated ideal is a Markov basis for the Ising model. Construction of an irreducible Markov chain Without modifying the algorithm mentioned in the paper, it is impossible to get uniform samples from the set of valid configurations, which would lead to inaccurate p-values. A simple swap is a vector of the form z = e_i − e_j, where e_i is the i-th canonical basis vector. Simple swaps change the states of two lattice points in y. Z denotes the set of simple swaps. Two configurations y and y′ are connected by Z if there is a path between y and y′ consisting of simple swaps, which means there exist z_{i_1}, ..., z_{i_k} ∈ Z such that y′ = y + Σ z_{i_l}, with each partial sum remaining a valid configuration for l = 1,...,k. The algorithm can now be described as: (i) Start with the Markov chain in a configuration y. (ii) Select z ∈ Z uniformly at random and let y′ = y + z. (iii) Accept y′ if it is a valid configuration; otherwise remain in y. Although the resulting Markov chain pos
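The simple-swap chain described above can be illustrated with a toy sketch: configurations are 0/1 vectors, a proposed move exchanges the states of two lattice points, and every accepted move preserves the number of occupied sites (standing in here for the sufficient statistic). This is a simplified illustration under those assumptions, not the full conditional Ising sampler:

```python
import random

def swap_chain_step(y):
    # Propose a simple swap: pick two sites uniformly at random and
    # exchange their states. Because a swap only permutes entries, the
    # total count of 1-sites (our stand-in sufficient statistic) is
    # preserved, so every proposal stays inside the same fiber.
    i, j = random.sample(range(len(y)), 2)
    y[i], y[j] = y[j], y[i]
    return y

random.seed(0)
y = [1, 1, 0, 0, 0]
for _ in range(1000):
    y = swap_chain_step(y)
print(sum(y))  # invariant: still 2 occupied sites
```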
https://en.wikipedia.org/wiki/Mastitis
Mastitis is inflammation of the breast or udder, usually associated with breastfeeding. Symptoms typically include local pain and redness. There is often an associated fever and general soreness. Onset is typically fairly rapid and usually occurs within the first few months of delivery. Complications can include abscess formation. Risk factors include poor latch, cracked nipples, use of a breast pump, and weaning. The bacteria most commonly involved are Staphylococcus and Streptococcus species. Diagnosis is typically based on symptoms. Ultrasound may be useful for detecting a potential abscess. Prevention is by proper breastfeeding techniques. When infection is present, antibiotics such as cephalexin may be recommended. Breastfeeding should typically be continued, as emptying the breast is important for healing. Tentative evidence supports benefits from probiotics. About 10% of breastfeeding women are affected. Types When it occurs in breastfeeding mothers, it is known as puerperal mastitis, lactation mastitis, or lactational mastitis. When it occurs in non-breastfeeding women it is known as non-puerperal or nonlactational mastitis. Mastitis can, in rare cases, occur in men. Inflammatory breast cancer has symptoms very similar to mastitis and must be ruled out. The symptoms are similar for puerperal and nonpuerperal mastitis but predisposing factors and treatment can be very different. Pregnancy related Puerperal mastitis is the inflammation of the breast in connection with pregnancy, breastfeeding or weaning. Since one of the most prominent symptoms is tension and engorgement of the breast, it is thought to be caused by blocked milk ducts or milk excess. It is relatively common; estimates range depending on methodology between 5–33%. However, only about 0.4–0.5% of breastfeeding mothers develop an abscess. Some predisposing factors are known but their predictive value is minimal. It appears that proper breastfeeding technique, frequent breastfeeding and avoidance of stress ar
https://en.wikipedia.org/wiki/Vietnamese%20Quoted-Readable
Vietnamese Quoted-Readable (usually abbreviated VIQR), also known as Vietnet, is a convention for writing Vietnamese using ASCII characters encoded in only 7 bits, making it possible for Vietnamese to be supported in computing and communication systems at the time. Because the Vietnamese alphabet contains a complex system of diacritical marks, VIQR requires the user to type in a base letter, followed by one or two characters that represent the diacritical marks. Syntax VIQR uses the following convention: the breve is written ( , the circumflex ^ , and the horn + ; the tone marks are written ' (sắc, acute), ` (huyền, grave), ? (hỏi, hook above), ~ (ngã, tilde) and . (nặng, dot below). VIQR uses DD or Dd for the Vietnamese letter Đ, and dd for the Vietnamese letter đ. To type certain punctuation marks (namely, the period, question mark, apostrophe, forward slash, opening parenthesis, or tilde) directly after most Vietnamese words, a backslash (\) must be typed directly before the punctuation mark, functioning as an escape character, so that it will not be interpreted as a diacritical mark. For example: What is your name [Sir]? My name is Trần Văn Hiếu. Software support VIQR is primarily used as a Vietnamese input method in software that supports Unicode. Similar input methods include Telex and VNI. Input method editors such as VPSKeys convert VIQR sequences to Unicode precomposed characters as one types, typically allowing modifier keys to be input after all the base letters of each word. However, in the absence of input method software or Unicode support, VIQR can still be input using a standard keyboard and read as plain ASCII text without suffering from mojibake. Unlike the VISCII and VPS code pages, VIQR is rarely used as a character encoding. While VIQR is registered with the Internet Assigned Numbers Authority as a MIME charset, MIME-compliant software is not required to support it. Nevertheless, the Mozilla Vietnamese Enabling Project once produced builds of the open source version of Netscape Communicator, as well as its successor, the Mozilla Application Suite, that were capable of decoding VIQR-encoded webpages, e-mails, and ne
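The mnemonic convention above can be sketched in code: the converter below maps each VIQR diacritic character to its Unicode combining mark and then composes precomposed letters with NFC normalization. It is deliberately simplified (no handling of dd → đ, and no backslash escape), and the function name is an assumption:

```python
import unicodedata

# VIQR diacritic characters mapped to Unicode combining marks.
MARKS = {
    "(": "\u0306",  # breve
    "^": "\u0302",  # circumflex
    "+": "\u031b",  # horn
    "'": "\u0301",  # acute (sắc)
    "`": "\u0300",  # grave (huyền)
    "?": "\u0309",  # hook above (hỏi)
    "~": "\u0303",  # tilde (ngã)
    ".": "\u0323",  # dot below (nặng)
}

def viqr_to_unicode(text: str) -> str:
    out = []
    for ch in text:
        prev = out[-1] if out else ""
        # Treat a mark character as a diacritic only if it follows a
        # letter or another combining mark; otherwise keep it literal.
        if ch in MARKS and (prev.isalpha() or prev in MARKS.values()):
            out.append(MARKS[ch])
        else:
            out.append(ch)
    # Compose base letter + combining marks into precomposed characters.
    return unicodedata.normalize("NFC", "".join(out))

print(viqr_to_unicode("Tra^`n Va(n Hie^'u"))  # → Trần Văn Hiếu
```

A full implementation would also apply the backslash escape described above, so that, for example, a sentence-final period is not read as a nặng tone mark.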
https://en.wikipedia.org/wiki/List%20of%20restriction%20enzyme%20cutting%20sites%3A%20A
This article contains a list of restriction enzymes whose names start with A and have a clearly defined cutting site. The following information is given for each enzyme: Name of Restriction Enzyme: Accepted name of the molecule, according to the internationally adopted nomenclature, and bibliographical references. Note: When alphabetizing, enzymes are first ordered alphabetically by their acronyms (everything before the roman numeral); then enzymes sharing an acronym are ordered by the roman numeral, treating the numeral as a number rather than a string of letters. This keeps the entries ordered hierarchically while remaining alphabetical. (Further reading: see the section "Nomenclature" in the article "Restriction enzyme".) PDB code: Code used to identify the structure of a protein in the PDB database of protein structures. The 3D atomic structure of a protein provides highly valuable information for understanding the intimate details of its mechanism of action. REBASE Number: Number used to identify restriction enzymes in the REBASE restriction enzyme database. This database includes important information about the enzyme, such as its recognition sequence, source, and isoschizomers, as well as other data, such as the commercial suppliers of the enzyme. Source: Organism that naturally produces the enzyme. Recognition sequence: Sequence of DNA recognized by the enzyme and to which it specifically binds. Cut: Displays the cut site, pattern, and products of the cut. The recognition sequence and the cut site usually match, but sometimes the cut site can be dozens of nucleotides away from the recognition site. Isoschizomers and neoschizomers: An isoschizomer is a restriction enzyme that recognizes the same sequence as another. A neoschizomer is a special type of isoschizomer that recognizes the same sequence as another but cuts in a different manner. At most the 8–10 most common isoschizomers are indicated for each enzyme, but there may be many more. Ne
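The ordering convention in the note — acronym first, then the trailing roman numeral compared as a number — can be sketched as a sort key. The helper functions are hypothetical, not part of REBASE or any nomenclature tool:

```python
import re

def roman_to_int(r: str) -> int:
    # Standard subtractive roman numeral parsing; I, V, X, L, C suffice for enzyme names.
    vals = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100}
    total = 0
    for i, ch in enumerate(r):
        v = vals[ch]
        if i + 1 < len(r) and vals[r[i + 1]] > v:
            total -= v      # e.g. the I in IV
        else:
            total += v
    return total

def enzyme_sort_key(name: str):
    # Acronym = everything before the trailing roman numeral.
    m = re.match(r"^(.*?)([IVXLC]+)$", name)
    if not m:
        return (name, 0)
    return (m.group(1), roman_to_int(m.group(2)))

names = ["AccIII", "AatII", "Acc65I", "AccI", "AatI", "AccII"]
print(sorted(names, key=enzyme_sort_key))
# -> ['AatI', 'AatII', 'AccI', 'AccII', 'AccIII', 'Acc65I']
```

A plain string sort would wrongly place AccIII between AccII and AccI's neighbors by treating "III" as letters; the numeric key gives AccI < AccII < AccIII as the convention requires.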
https://en.wikipedia.org/wiki/Ginga%20%28middleware%29
Ginga is the middleware specification for the Nipo-Brazilian Digital Television System (SBTVD, from the Portuguese Sistema Brasileiro de Televisão Digital). Ginga is also an ITU-T Recommendation for IPTV services. It is also considered in ITU-T recommendations for cable broadcast services (the ITU-T J.200 Recommendation series: Rec. ITU-T J.200, Rec. ITU-T J.201 and Rec. ITU-T J.202) and in ITU-R recommendations for terrestrial broadcast services (ITU-R BT.1889, ITU-R BT.1699 and ITU-R BT.1722). Ginga was developed from a set of standardized technologies but mainly from innovations developed by Brazilian researchers. Its current reference implementation was released under the GPL license. Ginga is divided into two main integrated subsystems, which allow the development of applications following two different programming paradigms: Ginga-NCL (for declarative NCL applications) and Ginga-J (for imperative Java applications). In the Brazilian Terrestrial Digital TV System, and in any other digital TV system following the definitions of the ABNT 15606 standards for the Ginga middleware, Ginga-J support is required in fixed receivers and optional in portable receivers. For IPTV services following the ITU-T H.761 Recommendation, only the Ginga-NCL subsystem is required, for any terminal type. Development Ginga was developed by the TeleMídia Lab at the Pontifical Catholic University of Rio de Janeiro (PUC-Rio) and by LAViD at the Federal University of Paraíba (UFPB). See also Nested Context Language Broadcast Markup Language
https://en.wikipedia.org/wiki/Active%20redundancy
Active redundancy is a design concept that increases operational availability and reduces operating cost by automating the most critical maintenance actions. This concept is related to condition-based maintenance and fault reporting. History The initial requirement arose with military combat systems during World War I. The approach used for survivability was to install thick armor plate to resist gunfire and to install multiple guns. This became unaffordable and impractical during the Cold War, when aircraft and missile systems became common. The new approach was to build distributed systems that continue to work when components are damaged. This depends upon very crude forms of artificial intelligence that perform reconfiguration by obeying specific rules. An example of this approach is the AN/UYK-43 computer. Formal design philosophies involving active redundancy are required for critical systems where corrective labor to fix a failure during normal operation is undesirable or impractical. Commercial aircraft are required to have multiple redundant computing systems, hydraulic systems, and propulsion systems so that a single in-flight equipment failure will not cause loss of life. A more recent outcome of this work is the Internet, which relies on a backbone of routers that automatically re-route communication, without human intervention, when failures occur. Satellites placed into orbit around the Earth must include massive active redundancy to ensure operation continues for a decade or longer despite normal failures, radiation-induced failures, and thermal shock. This strategy now dominates space systems, aircraft, and missile systems. Principle Maintenance requires three actions, which usually involve down time and high-priority labor costs: fault detection, fault isolation, and reconfiguration. Active redundancy eliminates down time and reduces manpower requirements by automatin
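The three automated actions can be illustrated with a toy pool of redundant units; the class, unit names, and health-check interface are hypothetical, a minimal sketch rather than any real system's API:

```python
class RedundantPool:
    """Toy model of active redundancy: detect, isolate, reconfigure."""

    def __init__(self, units):
        self.healthy = list(units)   # units currently in service
        self.isolated = []           # units removed after a detected fault
        self.active = self.healthy[0]

    def heartbeat(self, unit, ok: bool):
        # Fault detection: a failed health check flags the unit.
        if not ok and unit in self.healthy:
            self._isolate(unit)

    def _isolate(self, unit):
        # Fault isolation: take the faulty unit out of service.
        self.healthy.remove(unit)
        self.isolated.append(unit)
        if unit == self.active:
            self._reconfigure()

    def _reconfigure(self):
        # Reconfiguration: promote a standby unit with no human in the loop.
        if not self.healthy:
            raise RuntimeError("no redundant units left")
        self.active = self.healthy[0]

pool = RedundantPool(["router-a", "router-b", "router-c"])
pool.heartbeat("router-a", ok=False)   # fault detected on the active unit
print(pool.active)                     # -> router-b
```

The point of the sketch is that all three maintenance actions happen inside the software path, which is why active redundancy removes the down time and the high-priority labor that a manual repair would require.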
https://en.wikipedia.org/wiki/Pearl%20barley
Pearl barley, or pearled barley, is barley that has been processed to remove its fibrous outer hull and polished to remove some or all of the bran layer. It is the most common form of barley for human consumption because it cooks faster and is less chewy than other, less-processed forms of the grain such as "hulled barley" (or "barley groats", also known as "pot barley" and "Scotch barley"). Fine barley flour is prepared from milled pearl barley. Pearl barley is similar to wheat in its caloric, protein, vitamin and mineral content, though some varieties are higher in lysine. It is used mainly in soups, stews, and potages. It is the primary ingredient of the Italian dish orzotto and one of the main ingredients of the Jewish dish cholent and the Polish soup krupnik.
https://en.wikipedia.org/wiki/159th%20meridian%20west
The meridian 159° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole. The 159th meridian west forms a great circle with the 21st meridian east. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 159th meridian west passes through: the Arctic Ocean; the Chukchi Sea; Alaska (Point Franklin); the Chukchi Sea (Peard Bay); Alaska; the Bering Sea (Bristol Bay); Alaska (Alaska Peninsula); the Pacific Ocean, passing just east of Chiachi Island, Alaska, just west of Mitrofania Island, Alaska, just east of Simeonof Island, Alaska, just east of Kauai island, Hawaii, and just west of Manuae atoll; the Southern Ocean; and Antarctica (Ross Dependency, claimed by New Zealand). See also 158th meridian west 160th meridian west
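The pairing of opposite meridians into a great circle is simple arithmetic, sketched here with the usual signed-longitude convention (east positive, west negative):

```python
def antipodal_meridian(lon_deg: float) -> float:
    # The opposite meridian lies 180° away, wrapped back into (-180, 180].
    return lon_deg + 180 if lon_deg <= 0 else lon_deg - 180

print(antipodal_meridian(-159))  # -> 21  (159° W and 21° E form one great circle)
```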
https://en.wikipedia.org/wiki/National%20Biodiversity%20Authority
The National Biodiversity Authority (NBA) is a statutory autonomous body under the Ministry of Environment, Forest and Climate Change, Government of India, established in 2003 to implement the provisions of the Biological Diversity Act, 2002, after India signed the Convention on Biological Diversity (CBD) in 1992. Overview Headquartered in Chennai, India, it acts as a facilitating, regulating and advisory body to the Government of India "on issues of conservation, sustainable use of biological resources and fair and equitable sharing of benefits arising out of the use of biological resources." Additionally, it advises State Governments in identifying areas of biodiversity importance (biodiversity hotspots) as heritage sites. In 2012, the NBA organized the first ever National Biodiversity Congress (NBC), held at Thiruvananthapuram, Kerala. On this occasion, a National Biodiversity Students' Congress was also held. Since its establishment, the NBA has supported the creation of State Biodiversity Boards (SBBs) in 29 states and facilitated the establishment of around 1,39,831 Biodiversity Management Committees (BMCs). The National Biodiversity Authority is mandated to regulate access to biological resources and/or associated knowledge for research, bio-survey and bio-utilization, commercial utilization, obtaining Intellectual Property Rights, transfer of results of research, and transfer of accessed biological resources. The details of application forms for Access and Benefit Sharing (ABS) for specific activities are given on the website of the National Biodiversity Authority. The Moot Court Association of Symbiosis Law School, Nagpur, in collaboration with the Maharashtra State Biodiversity Board, organized the National Moot Court Competition from December 3–5, 2021. J. Justin Mohan is the Secretary of the National Biodiversity Authority. Judgement The National Green Tribunal (NGT), in its judgement in the Chandrabhal Singh case, directed all the states to comply with the provisions of the Act and to constitute biodiversity management committees and prepare people's biodiversity registers in a comprehensive manner. See
https://en.wikipedia.org/wiki/RF%20antenna%20ion%20source
An RF antenna ion source (or radio frequency antenna ion source) is an internal multi-cusp design that can produce a particle beam of about 30 to 40 mA current. It is used in high energy particle physics and in accelerator laboratories. In earlier designs, high RF power would penetrate the porcelain enamel coating on the antenna section. This problem was corrected during development with a ten-layer titanium dioxide coating, approximately 1 mm thick. With the development of the RF antenna ion source, or "non-thermionic ion source," the design gained an advantage over conventional cold cathode and hot-filament ion sources: a filament continuously burns out over time, giving a shorter lifespan and requiring venting of the ion source to atmosphere and rebuilding of the ion source. See also Ion source Particle accelerator External links Lawrence Berkeley National Laboratory Improvement of the lifetime of radio frequency antenna Particle physics Accelerator physics