https://en.wikipedia.org/wiki/Skyline%20matrix
In scientific computing, skyline matrix storage, also called SKS, variable band matrix storage, or the envelope storage scheme, is a form of sparse matrix storage format that reduces the storage requirement of a matrix more than banded storage. In banded storage, all entries within a fixed distance from the diagonal (called half-bandwidth) are stored. In column-oriented skyline storage, only the entries from the first nonzero entry to the last nonzero entry in each column are stored. There is also row-oriented skyline storage, and, for symmetric matrices, only one triangle is usually stored. Skyline storage has become very popular in finite element codes for structural mechanics, because the skyline is preserved by Cholesky decomposition (a method of solving systems of linear equations with a symmetric, positive-definite matrix; all fill-in falls within the skyline), and systems of equations from finite elements have a relatively small skyline. In addition, the effort of coding skyline Cholesky is about the same as for Cholesky for banded matrices (banded Cholesky is available, e.g., in LAPACK). Before storing a matrix in skyline format, the rows and columns are typically renumbered to reduce the size of the skyline (the number of nonzero entries stored) and to decrease the number of operations in the skyline Cholesky algorithm. The same heuristic renumbering algorithms that reduce the bandwidth are also used to reduce the skyline. One of the basic and earliest algorithms to do that is the reverse Cuthill–McKee algorithm. However, skyline storage is not as popular for very large systems (many millions of equations) because skyline Cholesky is not so easily adapted for massively parallel computing, and general sparse methods, which store only the nonzero entries of the matrix, become more efficient for very large problems due to much less fill-in. See also Sparse matrix Band matrix Frontal solver
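The following is a minimal, illustrative sketch (not taken from any particular library) of column-oriented skyline storage for a symmetric matrix: each column keeps only the entries from its first nonzero row down to the diagonal, together with an array of column start offsets. The function names and the example matrix are illustrative choices, not a standard API.

# Minimal sketch of column-oriented skyline storage for a symmetric matrix:
# for each column j, only the entries from the first nonzero row down to
# the diagonal are kept. Layout and names are illustrative.
import numpy as np

def to_skyline(A):
    """Pack the upper triangle of symmetric A into skyline form.

    Returns (values, col_ptr) where column j occupies
    values[col_ptr[j]:col_ptr[j+1]], ending at the diagonal entry A[j, j].
    """
    n = A.shape[0]
    values, col_ptr = [], [0]
    for j in range(n):
        rows = np.nonzero(A[:j + 1, j])[0]
        first = rows[0] if rows.size else j      # top of the "skyline" in column j
        values.extend(A[first:j + 1, j])
        col_ptr.append(len(values))
    return np.array(values), np.array(col_ptr)

def skyline_get(values, col_ptr, i, j):
    """Read entry (i, j); entries above the skyline are zero."""
    if i > j:
        i, j = j, i                              # symmetry
    height = col_ptr[j + 1] - col_ptr[j]         # number of stored entries in column j
    first = j + 1 - height
    return values[col_ptr[j] + (i - first)] if i >= first else 0.0

A = np.array([[4., 1., 0., 0.],
              [1., 5., 2., 0.],
              [0., 2., 6., 3.],
              [0., 0., 3., 7.]])
vals, ptr = to_skyline(A)
assert all(skyline_get(vals, ptr, i, j) == A[i, j] for i in range(4) for j in range(4))

The same layout can be reused by a skyline Cholesky factorization, since all fill-in produced by the factorization stays under the column skylines.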
https://en.wikipedia.org/wiki/Marks%27%20Standard%20Handbook%20for%20Mechanical%20Engineers
Marks' Standard Handbook for Mechanical Engineers is a comprehensive handbook for the field of mechanical engineering. Originally based on the even older German , it was first published in 1916 by Lionel Simeon Marks. In 2017, its 12th edition, published by McGraw-Hill, marked the 100th anniversary of the work. The handbook was translated into several languages. Lionel S. Marks was a professor of mechanical engineering at Harvard University and Massachusetts Institute of Technology in the early 1900s. Topics The 11th edition consists of 20 sections: Mathematical Tables and Measuring Units Mathematics Mechanics of Solids and Fluids Heat Strength of Materials Materials of Engineering Fuels and Furnaces Machine Elements Power Generation Materials Handling Transportation Building Construction and Equipment Manufacturing Processes Fans, Pumps, and Compressors Electrical and Electronics Engineering Instruments and Controls Industrial Engineering The Regulatory Environment Refrigeration, Cryogenics, and Optics Emerging Technologies Editions English editions: 1st edition, 1916, edited by Lionel Simeon Marks, based on the German 2nd edition, 1924, edited by Lionel Simeon Marks 3rd edition, 1930, Editor-in-Chief Lionel S. Marks, total issue 103,500, McGraw-Hill Book Co. Inc. 1941, edited by Lionel Peabody Marks 1951, edited by Lionel Peabody Marks and Alison Peabody Marks 1967, edited by Theodore Baumeister III 6th edition, 1958, edited by Eugene A. Avallone, Theodore Baumeister III 7th edition, golden (50th) anniversary, 1976?, edited by Theodore Baumeister III 8th edition, edited by Theodore Baumeister III, Eugene A. Avallone 9th edition 10th edition, 80th anniversary, 1997, edited by Eugene A. Avallone, Theodore Baumeister III, 11th edition, 90th anniversary, 2007, edited by Eugene A. Avallone, Theodore Baumeister III, Ali M. Sadegh 12th edition, 100th anniversary, 2017, edited by Ali M. Sadegh, William M. Worek, Eugene A. Avall
https://en.wikipedia.org/wiki/Homing%20endonuclease
The homing endonucleases are a collection of endonucleases encoded either as freestanding genes within introns, as fusions with host proteins, or as self-splicing inteins. They catalyze the hydrolysis of genomic DNA within the cells that synthesize them, but do so at very few, or even singular, locations. Repair of the hydrolyzed DNA by the host cell frequently results in the gene encoding the homing endonuclease having been copied into the cleavage site, hence the term 'homing' to describe the movement of these genes. Homing endonucleases can thereby transmit their genes horizontally within a host population, increasing their allele frequency at greater than Mendelian rates. Origin and mechanism Although the origin and function of homing endonucleases is still being researched, the most established hypothesis considers them as selfish genetic elements, similar to transposons, because they facilitate the perpetuation of the genetic elements that encode them independent of providing a functional attribute to the host organism. Homing endonuclease recognition sequences are long enough to occur randomly only with a very low probability (approximately once every ), and are normally found in one or very few instances per genome. Generally, owing to the homing mechanism, the gene encoding the endonuclease (the HEG, "homing endonuclease gene") is located within the recognition sequence which the enzyme cuts, thus interrupting the homing endonuclease recognition sequence and limiting DNA cutting only to sites that do not (yet) carry the HEG. Prior to transmission, one allele carries the gene (HEG+) while the other does not (HEG−), and is therefore susceptible to being cut by the enzyme. Once the enzyme is synthesized, it breaks the chromosome in the HEG− allele, initiating a response from the cellular DNA repair system. The damage is repaired using recombination, taking the pattern of the opposite, undamaged DNA allele, HEG+, that contains the gene for the endonuclease.
https://en.wikipedia.org/wiki/Medea%20gene
Medea is a gene from the fruit fly Drosophila melanogaster that was one of the first two Smad genes discovered. For both genes, the maternal effect lethality was the basis for selection of their names. Medea was named for the mythological Greek Medea, who killed her progeny fathered by Jason. Both Medea and Mothers against dpp were identified in a genetic screen for maternal effect mutations that caused lethality of heterozygous decapentaplegic progeny. Because decapentaplegic is a bone morphogenetic protein in the transforming growth factor beta superfamily, identification of the fly Smad genes provided a much needed clue to understand the signal transduction pathway for this diverse family of extracellular proteins. Humans, mice, and other vertebrates have a gene with the same function as Medea, called SMAD4. An overview of the biology of Medea is found at The Interactive Fly, and the details of Medea genetics and molecular biology are curated on FlyBase. Another laboratory used Medea as an acronym to describe a synthetic gene causing Maternal effect dominant embryonic arrest. The formal genetic designation for Maternal effect dominant embryonic arrest is P{Medea.myd88}, more details are in FlyBase.
https://en.wikipedia.org/wiki/Saccharification
In biochemistry, saccharification denotes any chemical change in which a monosaccharide molecule is left intact after becoming unbound from another saccharide, for example when a carbohydrate is broken into its component sugar molecules by hydrolysis (e.g., sucrose being broken down into glucose and fructose). Enzymes such as amylases (e.g. in saliva) and glycoside hydrolases (e.g. within the brush border of the small intestine) are able to perform precise saccharification through enzymatic hydrolysis. Through thermolysis, saccharification can also occur as a transient result, among many other possible effects, during caramelization. See also Glycosidic bond Glycoside hydrolase Gelation
https://en.wikipedia.org/wiki/Hydroxyethyl%20starch
Hydroxyethyl starch (HES/HAES), sold under the brand name Voluven among others, is a nonionic starch derivative, used as a volume expander in intravenous therapy. The use of HES on critically ill patients is associated with an increased risk of death and kidney problems. HES is a general term and can be sub-classified according to average molecular weight, molar substitution, concentration, C2/C6 ratio and Maximum Daily Dose. The European Medicines Agency commenced in June 2013 the process of agreeing to reduced indications which was completed in October 2013. The process of full withdrawal in the EU was expected to complete in 2018. Medical uses An intravenous solution of hydroxyethyl starch is used to prevent shock following severe blood loss caused by trauma, surgery, or other problem. It however appears to have greater risk of a poor outcome compared to other intravenous solutions and may increase the risk of death. Adverse effects HES can cause anaphylactoid reactions: hypersensitivity, mild influenza-like symptoms, slow heart rate, fast heart rate, spasms of the airways, and non-cardiogenic pulmonary edema. It is also linked to a decrease in hematocrit and disturbances in blood clotting. One liter of 6% solution (Hespan) reduces factor VIII level by 50% and will prolong the aPTT and will also decrease vWF. A coagulation effect of hetastarch administration is direct movement into fibrin clots and a dilutional effect on serum. Hetastarch may lead to platelet dysfunction by causing a reduction in the availability of glycoprotein IIb-IIIa on platelets. HES derivatives have been demonstrated to have increased rates of acute kidney failure and need for renal replacement therapy and to decrease long-term survival when used alone in cases of severe sepsis compared with Ringer lactate solution. The effects were tested on HES 130kDa/0.42 in people with severe sepsis; analysis showed increased rates of kidney failure and increased mortality when compared to LR. I
https://en.wikipedia.org/wiki/The%20Quest%20for%20Power
The Quest for Power is a book on the history of engineering written by Hugh Pembroke Vowles and Margaret Winifred Vowles. It was published in 1931 by Chapman and Hall of London, England. Content The book contains over 150 illustrations and has 370 pages. It begins with a picture of Michael Faraday. The book is divided into three parts. The first is "The apprenticeship of toil", which deals with stone tools, early use of metal, control of water, early structural achievements, transport and measurement. The second is "The age of Power". This deals with steam power, internal combustion engines and electrical power. Finally, the third part is entitled "The materials of power". This looks at coal, oil, alcohol, metals and other products. The last section deals with the future. Dedication The Quest for Power is dedicated to Hubert Cecil Booth, inventor of the vacuum cleaner. The dedication reads: "In friendship's name to Hubert Cecil Booth, F.C.G.I., M. Inst. C.E. who by the invention and subsequent development of the vacuum cleaner has created a new industry, lightened the burden of human toil, and increased the health and happiness of innumerable homes". Reception The flier advertising the book contained the following quotation: "Eighty British, American and European journals - representative of all that is best in the periodical literature of Science, Engineering, Arts and Industry, besides literary "weeklies" and the daily press - have devoted over five hundred single column inches of their space to reviews of The Quest for Power. A few typical comments are quoted overleaf" Press opinions from scientific, engineering and industrial journals "No other book in the English language gives so satisfactory an account of engineering progress through the ages". Engineer Index of the American Society of Mechanical Engineers. "The book will clearly be of great permanent value as a work of reference, all the more because of its full citation of sources... a notabl
https://en.wikipedia.org/wiki/Fate%20mapping
Fate mapping is a method used in developmental biology to study the embryonic origin of various adult tissues and structures. The "fate" of each cell or group of cells is mapped onto the embryo, showing which parts of the embryo will develop into which tissue. When carried out at single-cell resolution, this process is called cell lineage tracing. It is also used to trace the development of tumors. History The earliest fate maps were based on direct observation of the embryos of ascidians or other marine invertebrates. Modern fate mapping began in 1929 when Walter Vogt marked the groups of cells using a dyed agar chip and tracked them through gastrulation. In 1978, horseradish peroxidase (HRP) was introduced as a marker. HRP was more effective than previous markers, but required embryos to be fixed before viewing. Genetic fate mapping is a technique developed in 1981 which uses a site-specific recombinase to track cell lineage genetically. Today, fate mapping is an important tool in many fields of biology research, such as developmental biology, stem cell research, and kidney research. Cell lineage Fate mapping and cell lineage are similar but distinct topics, although there is often overlap. For example, the development of the complete cell lineage of C. elegans can be described as the fate maps of each cell division stacked hierarchically.  The distinction between the topics is in the type of information included. Fate mapping shows which tissues come from which part of the embryo at a certain stage in development, whereas cell lineage shows the relationships between cells at each division. A cell lineage can be used to generate a fate map, and in cases like C. elegans, successive fate mapping is used to develop a cell lineage. Method Fate mapping is accomplished by inserting a heritable genetic mark into a cell. Typically, this is a fluorescent protein. Therefore, any progeny of the cell will have this genetic mark. It can also be done through the use of mol
https://en.wikipedia.org/wiki/Alan%20D.%20Taylor
Alan Dana Taylor (born October 27, 1947) is an American mathematician who, with Steven Brams, solved the problem of envy-free cake-cutting for an arbitrary number of people with the Brams–Taylor procedure. Taylor received his Ph.D. in 1975 from Dartmouth College. He was the Marie Louise Bailey professor of mathematics at Union College, in Schenectady, New York. He retired from the college in 2022. Selected publications Alan D. Taylor (1995) Mathematics and Politics: Strategy, Voting, Power, and Proof Springer-Verlag. and 0-387-94500-8; with Allison Pacelli: Steven J. Brams and Alan D. Taylor (1995). An Envy-Free Cake Division Protocol American Mathematical Monthly, 102, pp. 9–18. (JSTOR) Steven J. Brams and Alan D. Taylor (1996). Fair Division - From cake-cutting to dispute resolution Cambridge University Press. and Notes External links Alan Taylor - Union College Living people 20th-century American mathematicians 21st-century American mathematicians Game theorists Dartmouth College alumni Union College (New York) faculty American political scientists Fair division researchers 1947 births
https://en.wikipedia.org/wiki/Shuttle%20vector
A shuttle vector is a vector (usually a plasmid) constructed so that it can propagate in two different host species. Therefore, DNA inserted into a shuttle vector can be tested or manipulated in two different cell types. The main advantage of these vectors is they can be manipulated in E. coli, then used in a system which is more difficult or slower to use (e.g. yeast). Shuttle vectors include plasmids that can propagate in eukaryotes and prokaryotes (e.g. both Saccharomyces cerevisiae and Escherichia coli) or in different species of bacteria (e.g. both E. coli and Rhodococcus erythropolis). There are also adenovirus shuttle vectors, which can propagate in E. coli and mammals. Shuttle vectors are frequently used to quickly make multiple copies of the gene in E. coli (amplification). They can also be used for in vitro experiments and modifications (e.g. mutagenesis, PCR). One of the most common types of shuttle vectors is the yeast shuttle vector. Almost all commonly used S. cerevisiae vectors are shuttle vectors. Yeast shuttle vectors have components that allow for replication and selection in both E. coli cells and yeast cells. The E. coli component of a yeast shuttle vector includes an origin of replication and a selectable marker, e.g. antibiotic resistance, beta lactamase, beta galactosidase. The yeast component of a yeast shuttle vector includes an autonomously replicating sequence (ARS), a yeast centromere (CEN), and a yeast selectable marker (e.g. URA3, a gene that encodes an enzyme for uracil synthesis, Lodish et al. 2007).
https://en.wikipedia.org/wiki/Nystr%C3%B6m%20method
In mathematics and numerical analysis, the Nyström method or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into discrete intervals; quadrature or numerical integration determines the weights and locations of representative points for the integral. The problem becomes a system of linear equations with $n$ equations and $n$ unknowns, and the underlying function is implicitly represented by an interpolation using the chosen quadrature rule. This discrete problem may be ill-conditioned, depending on the original problem and the chosen quadrature rule. Since the linear equations require $O(n^3)$ operations to solve, high-order quadrature rules perform better because low-order quadrature rules require large $n$ for a given accuracy. Gaussian quadrature is normally a good choice for smooth, non-singular problems. Discretization of the integral Standard quadrature methods seek to represent an integral as a weighted sum in the following manner: $\int_a^b h(x)\,dx \approx \sum_{k=1}^n w_k\, h(x_k)$, where the $w_k$ are the weights of the quadrature rule, and the points $x_k$ are the abscissas. Example Applying this to the inhomogeneous Fredholm equation of the second kind $f(x) = \lambda \int_a^b K(x, x')\, f(x')\, dx' + g(x)$, results in $f(x) \approx \lambda \sum_{k=1}^n w_k\, K(x, x_k)\, f(x_k) + g(x)$, which, evaluated at the abscissas $x = x_i$, becomes an $n \times n$ linear system for the unknown values $f(x_i)$. See also Boundary element method
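A minimal sketch of the method (illustrative, not from the article) is shown below: a Fredholm equation of the second kind on [0, 1] is discretized on Gauss–Legendre nodes and solved as a small linear system. The kernel, right-hand side, and the value of lambda are made-up choices picked so that the exact solution is known.

# Nyström-method sketch: solve f(x) = lam * \int_0^1 K(x, x') f(x') dx' + g(x)
# on n Gauss-Legendre nodes, then compare with a known exact solution.
import numpy as np

lam = 0.5
K = lambda x, xp: x * xp                  # separable toy kernel
f_exact = lambda x: x                     # choose the answer we want...
# ...and pick g so that f_exact solves the equation:
# g(x) = f(x) - lam * x * \int_0^1 x'^2 dx' = x - lam * x / 3
g = lambda x: x - lam * x / 3.0

n = 8
nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
x = 0.5 * (nodes + 1.0)                               # map to [0, 1]
w = 0.5 * weights

# Nyström discretization: (I - lam * K(x_i, x_j) * w_j) f_j = g(x_i)
A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w[None, :]
f = np.linalg.solve(A, g(x))

print(np.max(np.abs(f - f_exact(x))))   # near machine precision for this smooth problem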
https://en.wikipedia.org/wiki/Nahm%20equations
In differential geometry and gauge theory, the Nahm equations are a system of ordinary differential equations introduced by Werner Nahm in the context of the Nahm transform – an alternative to Ward's twistor construction of monopoles. The Nahm equations are formally analogous to the algebraic equations in the ADHM construction of instantons, where finite order matrices are replaced by differential operators. Deep study of the Nahm equations was carried out by Nigel Hitchin and Simon Donaldson. Conceptually, the equations arise in the process of infinite-dimensional hyperkähler reduction. They can also be viewed as a dimensional reduction of the anti-self-dual Yang-Mills equations. Among their many applications we can mention: Hitchin's construction of monopoles, where this approach is critical for establishing nonsingularity of monopole solutions; Donaldson's description of the moduli space of monopoles; and the existence of hyperkähler structure on coadjoint orbits of complex semisimple Lie groups. Equations Let $T_1(z)$, $T_2(z)$, $T_3(z)$ be three matrix-valued meromorphic functions of a complex variable $z$. The Nahm equations are a system of matrix differential equations together with certain analyticity properties, reality conditions, and boundary conditions. The three equations can be written concisely using the Levi-Civita symbol, in the form $\dfrac{dT_i}{dz} = \dfrac{1}{2}\sum_{j,k}\epsilon_{ijk}\,[T_j, T_k]$. More generally, instead of considering matrices, one can consider Nahm's equations with values in an arbitrary Lie algebra. Additional conditions The variable $z$ is restricted to an open interval, and the following conditions are imposed: each $T_i$ can be continued to a meromorphic function of $z$ in a neighborhood of the closed interval, analytic away from the endpoints, and with simple poles at the endpoints; and at the poles, the residues of $(T_1, T_2, T_3)$ form an irreducible representation of the group SU(2). Nahm–Hitchin description of monopoles There is a natural equivalence between the monopoles of charge for the group , modulo gauge transfor
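Written out without the Levi-Civita shorthand (a standard equivalent form, consistent with the equation above), the three equations are the cyclic system

\[
\frac{dT_1}{dz} = [T_2, T_3], \qquad
\frac{dT_2}{dz} = [T_3, T_1], \qquad
\frac{dT_3}{dz} = [T_1, T_2].
\]

Expanding the $\epsilon_{ijk}$ sum for $i = 1$ gives $\tfrac{1}{2}\left([T_2, T_3] - [T_3, T_2]\right) = [T_2, T_3]$, and similarly for the other two components.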
https://en.wikipedia.org/wiki/Inverse%20problem%20for%20Lagrangian%20mechanics
In mathematics, the inverse problem for Lagrangian mechanics is the problem of determining whether a given system of ordinary differential equations can arise as the Euler–Lagrange equations for some Lagrangian function. There has been a great deal of activity in the study of this problem since the early 20th century. A notable advance in this field was a 1941 paper by the American mathematician Jesse Douglas, in which he provided necessary and sufficient conditions for the problem to have a solution; these conditions are now known as the Helmholtz conditions, after the German physicist Hermann von Helmholtz. Background and statement of the problem The usual set-up of Lagrangian mechanics on n-dimensional Euclidean space Rn is as follows. Consider a differentiable path u : [0, T] → Rn. The action of the path u, denoted S(u), is given by $S(u) = \int_0^T L(t, u(t), \dot{u}(t))\, dt$, where L is a function of time, position and velocity known as the Lagrangian. The principle of least action states that, given an initial state x0 and a final state x1 in Rn, the trajectory that the system determined by L will actually follow must be a minimizer of the action functional S satisfying the boundary conditions u(0) = x0, u(T) = x1. Furthermore, the critical points (and hence minimizers) of S must satisfy the Euler–Lagrange equations for S: $\dfrac{d}{dt}\dfrac{\partial L}{\partial \dot{u}^i} - \dfrac{\partial L}{\partial u^i} = 0$ for $1 \le i \le n$, where the upper indices i denote the components of u = (u1, ..., un). In the classical case, where the Lagrangian is a kinetic-energy term minus a potential, the Euler–Lagrange equations are the second-order ordinary differential equations better known as Newton's laws of motion: $m_i\, \ddot{u}^i = -\dfrac{\partial V}{\partial u^i}(u)$, $1 \le i \le n$. The inverse problem of Lagrangian mechanics is as follows: given a system of second-order ordinary differential equations $\ddot{u}^i = f^i(t, u, \dot{u})$, $1 \le i \le n$ (E), that holds for times 0 ≤ t ≤ T, does there exist a Lagrangian L : [0, T] × Rn × Rn → R for which these ordinary differential equations (E) are the Euler–Lagrange equations? In general, this problem is posed not on Euclidean space Rn, but on an n-dimensional manifold M, and the Lagrangian is a function L : [0, T] × TM → R, where TM denotes the tan
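As a small worked illustration (an example added here, not part of the original statement), the one-dimensional harmonic oscillator equation is recovered from a quadratic Lagrangian:

\[
\ddot{u} = -\omega^2 u
\quad\text{arises from}\quad
L(t, u, \dot{u}) = \tfrac{1}{2}\dot{u}^2 - \tfrac{1}{2}\omega^2 u^2,
\qquad\text{since}\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot{u}} - \frac{\partial L}{\partial u}
= \ddot{u} + \omega^2 u = 0.
\]

The Lagrangian is not unique: any nonzero constant multiple of $L$, or $L$ plus a total time derivative, yields the same equation, which is part of what makes the general inverse problem subtle.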
https://en.wikipedia.org/wiki/Mendelian%20randomization
In epidemiology, Mendelian randomization (commonly abbreviated to MR) is a method using measured variation in genes to interrogate the causal effect of an exposure on an outcome. Under key assumptions (see below), the design reduces both reverse causation and confounding, which often substantially impede or mislead the interpretation of results from epidemiological studies. The study design was first proposed in 1986 and subsequently described by Gray and Wheatley as a method for obtaining unbiased estimates of the effects of a putative causal variable without conducting a traditional randomized controlled trial (i.e. the "gold standard" in epidemiology for establishing causality). These authors also coined the term Mendelian randomization. Motivation One of the predominant aims of epidemiology is to identify modifiable causes of health outcomes and disease especially those of public health concern. In order to ascertain whether modifying a particular trait (e.g. via an intervention, treatment or policy change) will convey a beneficial effect within a population, firm evidence that this trait causes the outcome of interest is required. However, many observational epidemiological study designs are limited in the ability to discern correlation from causation - specifically whether a particular trait causes an outcome of interest, is simply related to that outcome (but does not cause it) or is a consequence of the outcome itself. Only the former will be beneficial within a public health setting where the aim is to modify that trait to reduce the burden of disease. There are many epidemiological study designs that aim to understand relationships between traits within a population sample, each with shared and unique advantages and limitations in terms of providing causal evidence, with the "gold standard" being randomized controlled trials. Well-known successful demonstrations of causal evidence consistent across multiple studies with different designs include the id
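The core estimator behind the design can be illustrated with a toy simulation (an illustrative sketch, not from the article): a genetic variant that shifts the exposure but is independent of confounders acts like a randomly assigned instrument, so the ratio of the gene-outcome association to the gene-exposure association (the Wald ratio) recovers the causal effect even when the direct exposure-outcome regression is confounded. All coefficients below are made up.

# Toy simulation of the Mendelian-randomization idea (illustrative only):
# variant G shifts exposure X; a hidden confounder U distorts the naive
# X -> Y association, but the Wald ratio beta(G,Y)/beta(G,X) still
# recovers the true causal effect of X on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 0.5

G = rng.binomial(2, 0.3, n)                          # genotype: 0/1/2 copies of an allele
U = rng.normal(size=n)                               # unmeasured confounder
X = 0.4 * G + 1.0 * U + rng.normal(size=n)           # exposure
Y = true_effect * X + 1.0 * U + rng.normal(size=n)   # outcome

slope = lambda a, b: np.cov(a, b)[0, 1] / np.var(a)

naive = slope(X, Y)                    # biased upward by the confounder U
wald = slope(G, Y) / slope(G, X)       # Mendelian-randomization (Wald ratio) estimate

print(f"naive slope of Y on X: {naive:.3f}")   # close to 0.98, confounded
print(f"Wald ratio estimate:   {wald:.3f}")    # close to 0.50, the true effect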
https://en.wikipedia.org/wiki/Ministry%20of%20Food%20Processing%20Industries
The Ministry of Food Processing Industries (MOFPI) is a ministry of the Government of India responsible for formulation and administration of the rules and regulations and laws relating to food processing in India. The ministry was set up in the year 1988, with a view to develop a strong and vibrant food processing industry, to create increased employment in rural sector and enable farmers to reap the benefits of modern technology and to create a surplus for exports and stimulating demand for processed food. The ministry is currently headed by Pashupati Kumar Paras, a Cabinet Minister. List of ministers List of ministers of state Functions of the ministry Policy support and developmental Promotional and technical Advisory and regulatory Goals of MOFPI Better utilization and value addition of agricultural produce for enhancement of income of farmers. Minimizing wastage at all stages in the food processing chain by the development of infrastructure for storage, transportation and processing of agro-food produce. Induction of modern technology into the food processing industries from both domestic and external sources. Maximum utilization of agricultural residues and by-products of the primary agricultural produce as also of the processed industry. To encourage R&D in food processing for product and process development and improved packaging. To provide policy support, promotional initiatives and physical facilities to promote value added exports Roles of MOFPI The strategic role and functions of the Ministry fall under three categories: Policy support developmental & promotional Technical & advisory Regulatory. It is concerned with the formulation & implementation of policies and plans for all the industries under its domain within the overall national priorities and objectives. Its main focus areas include—development of infrastructure, technological up gradation, development of backward linkages, enforcement of quality standards and expanding
https://en.wikipedia.org/wiki/Ministry%20of%20Health%20and%20Family%20Welfare
The Ministry of Health and Family Welfare, also known by its abbreviation MoHFW, is an Indian government ministry charged with health policy in India. It is also responsible for all government programs relating to family planning in India. The Minister of Health and Family Welfare holds cabinet rank as a member of the Council of Ministers. The current minister is Mansukh L. Mandaviya, while the current Minister of State for health (MOS: assistant to Minister i.e. currently assistant to Mansukh L. Mandaviya) are Dr Bharati Pawar and S. P. Singh Baghel . Since 1955 the Ministry regularly publishes the Indian Pharmacopoeia through the Indian Pharmacopoeia Commission (IPC), an autonomous body for setting standards for drugs, pharmaceuticals and healthcare devices and technologies in India. Organisation The ministry is composed of two departments: Department of Health and Family Welfare and the Department of Health Research. Department of Health The Department of Health deals with health care, including awareness campaigns, immunisation campaigns, preventive medicine, and public health. Bodies under the administrative control of this department are: National AIDS Control Organisation (NACO) (see HIV/AIDS in India) 14 National Health Programmes National AIDS Control Programme (AIDS) Department Of Aids Control (National AIDS Control Organisation) (Details About Aids) National Cancer Control Programme (cancer) (since 1985) National Filaria Control Programme (filariasis) National Iodine Deficiency Disorders Control Programme (iodine deficiency) National Leprosy Eradication Programme (leprosy) National Mental Health Programme (mental health) National Programme for Control of Blindness (blindness) National Programme for Prevention and Control of Deafness (deafness) National Tobacco Control Programme (tobacco control) National Vector Borne Disease Control Programme (NVBDCP) (vector-borne disease) Pilot Programme on Prevention and Control of Diabetes, CVD and S
https://en.wikipedia.org/wiki/Dunkerley%27s%20method
Dunkerley's method is used in mechanical engineering to determine the critical speed of a shaft-rotor system. Other methods include the Rayleigh–Ritz method. Whirling of a shaft No shaft can ever be perfectly straight or perfectly balanced. When an element of mass is offset from the axis of rotation, centrifugal force will tend to pull the mass outward. The elastic properties of the shaft will act to restore the “straightness”. If the frequency of rotation is equal to one of the resonant frequencies of the shaft, whirling will occur. In order to save the machine from failure, operation at such whirling speeds must be avoided. Whirling is a complex phenomenon that can include harmonics, but only synchronous whirl, where the frequency of whirling is the same as the rotational speed, is considered here. Dunkerley’s formula (approximation) The first whirling frequency of a shaft of symmetric cross section and given length, simply supported between two points, is given by $\omega_s = \pi^2 \sqrt{\dfrac{EI}{mL^3}}$, where: E = Young's modulus, I = second moment of area, m = mass of the shaft, L = length of the shaft between points. A shaft with weights added will have a critical angular velocity N (RPM) given approximately by Dunkerley's formula $\dfrac{1}{N^2} = \dfrac{1}{N_s^2} + \dfrac{1}{N_1^2} + \dfrac{1}{N_2^2} + \cdots$, where $N_s$ is the critical speed of the bare shaft and $N_1, N_2, \ldots$ are the critical speeds that each added weight would have alone on an otherwise massless shaft. See also Vibration Mechanical resonance Notes and references Mechanical engineering
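A minimal numeric sketch of the two formulas above is shown below (my own example; the shaft dimensions and the two disc critical speeds are assumed illustrative values, not data from the article).

# Dunkerley's formula sketch: 1/N^2 = 1/Ns^2 + sum_i 1/Ni^2, combining the
# bare-shaft critical speed with the critical speed each mounted mass would
# have on its own. All numerical values are illustrative.
import math

def shaft_critical_speed_rpm(E, I, m, L):
    """First critical speed of a uniform, simply supported shaft (RPM)."""
    omega = math.pi ** 2 * math.sqrt(E * I / (m * L ** 3))  # rad/s
    return omega * 60.0 / (2.0 * math.pi)

def dunkerley_rpm(N_shaft, N_masses):
    """Combined critical speed from Dunkerley's approximation."""
    inv_sq = 1.0 / N_shaft ** 2 + sum(1.0 / N ** 2 for N in N_masses)
    return 1.0 / math.sqrt(inv_sq)

# Assumed steel shaft: E = 200 GPa, d = 25 mm, L = 1 m, m = 3.85 kg
E, d, L, m = 200e9, 0.025, 1.0, 3.85
I = math.pi * d ** 4 / 64.0
Ns = shaft_critical_speed_rpm(E, I, m, L)

# Assumed critical speeds (RPM) that two mounted discs would have on a
# massless shaft, e.g. obtained from their static deflections.
N1, N2 = 3200.0, 4100.0
print(round(Ns), round(dunkerley_rpm(Ns, [N1, N2])))

The combined critical speed is always lower than any of the individual ones, which is why Dunkerley's formula is used as a quick, conservative estimate.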
https://en.wikipedia.org/wiki/Euclid%E2%80%93Mullin%20sequence
The Euclid–Mullin sequence is an infinite sequence of distinct prime numbers, in which each element is the least prime factor of one plus the product of all earlier elements. They are named after the ancient Greek mathematician Euclid, because their definition relies on an idea in Euclid's proof that there are infinitely many primes, and after Albert A. Mullin, who asked about the sequence in 1963. The first 51 elements of the sequence are 2, 3, 7, 43, 13, 53, 5, 6221671, 38709183810571, 139, 2801, 11, 17, 5471, 52662739, 23003, 30693651606209, 37, 1741, 1313797957, 887, 71, 7127, 109, 23, 97, 159227, 643679794963466223081509857, 103, 1079990819, 9539, 3143065813, 29, 3847, 89, 19, 577, 223, 139703, 457, 9649, 61, 4357, 87991098722552272708281251793312351581099392851768893748012603709343, 107, 127, 3313, 227432689108589532754984915075774848386671439568260420754414940780761245893, 59, 31, 211... These are the only known elements. Finding the next one requires finding the least prime factor of a 335-digit number (which is known to be composite). Definition The $n$th element of the sequence, $a_n$, is the least prime factor of $\Big(\prod_{i < n} a_i\Big) + 1$. The first element is therefore the least prime factor of the empty product plus one, which is 2. The third element is the least prime factor of (2 × 3) + 1 = 7, which is 7 itself. A better illustration is the fifth element in the sequence, 13. This is calculated by (2 × 3 × 7 × 43) + 1 = 1806 + 1 = 1807, the product of two primes, 13 × 139. Of these two primes, 13 is the smallest and so included in the sequence. Similarly, the seventh element, 5, is the result of (2 × 3 × 7 × 43 × 13 × 53) + 1 = 1244335, the prime factors of which are 5 and 248867. These examples illustrate why the sequence can leap from very large to very small numbers. Properties The sequence is infinitely long and does not contain repeated elements. This can be proved using the method of Euclid's proof that there are infinitely many primes. That proof is constructive, and the sequence is the result of performing a
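The recurrence can be reproduced directly with a short brute-force script (added here for illustration); it is only practical for the small early terms, since later steps require factoring enormous numbers.

# Brute-force sketch of the Euclid-Mullin recurrence (only feasible for the
# first handful of terms; later terms need serious integer factorization).
def least_prime_factor(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n  # n itself is prime

def euclid_mullin(k):
    seq, product = [], 1
    for _ in range(k):
        a = least_prime_factor(product + 1)
        seq.append(a)
        product *= a
    return seq

print(euclid_mullin(7))   # [2, 3, 7, 43, 13, 53, 5]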
https://en.wikipedia.org/wiki/Bit-stream%20access
Bit-stream access refers to the situation where a wireline incumbent installs a high-speed access link to the customer's premises (e.g., by installing ADSL equipment in the local access network) and then makes this access link available to third parties, to enable them to provide high-speed services to customers. This type of access does not entail any third-party access to the copper pair in the local loop. The incumbent may also provide transmission services to its competitors, using its Asynchronous Transfer Mode (ATM) or IP network, to carry competitors' traffic from the digital subscriber line access multiplexer (DSLAM) to a higher level in the network hierarchy where new entrants may already have a point of presence (e.g. a transit switch location). Bit-stream handover points thus can be at various levels: Handover at DSLAM Handover at ATM-PoP Handover at IP level Bit-stream access is nowadays considered a key tool for opening competition in the broadband market. It enables competitors to offer their own products to consumers even if they do not operate the local loop (the last mile). Bit-stream access allows the new entrant to use the high-speed modems and other equipment provided by the incumbent and thus avoid maintenance and investments into the local loop. This affects the economics of the service and places restrictions on the type of modems that the customer of the new entrant can buy or rent. The main elements defining bit-stream access are the following: High-speed access link to the customer premises (end user part) provided by the incumbent and transmission capacity for broadband data in both directions, enabling new entrants to offer their own, value-added services to end users; New entrants have the possibility to differentiate their services by altering technical characteristics and/or the use of their own network. Thus, bit-stream access is a wholesale product consisting of the access (typically ADSL) and “backhaul” services of the (d
https://en.wikipedia.org/wiki/Visual%20calculus
Visual calculus, invented by Mamikon Mnatsakanian (known as Mamikon), is an approach to solving a variety of integral calculus problems. Many problems that would otherwise seem quite difficult yield to the method with hardly a line of calculation, often reminiscent of what Martin Gardner called "aha! solutions" or Roger Nelsen a proof without words. Description Mamikon devised his method in 1959 while an undergraduate, first applying it to a well-known geometry problem: find the area of a ring (annulus), given the length of a chord tangent to the inner circumference. Perhaps surprisingly, no additional information is needed; the solution does not depend on the ring's inner and outer dimensions. The traditional approach involves algebra and application of the Pythagorean theorem. Mamikon's method, however, envisions an alternate construction of the ring: first the inner circle alone is drawn, then a constant-length tangent is made to travel along its circumference, "sweeping out" the ring as it goes. Now if all the (constant-length) tangents used in constructing the ring are translated so that their points of tangency coincide, the result is a circular disk of known radius (and easily computed area). Indeed, since the inner circle's radius is irrelevant, one could just as well have started with a circle of radius zero (a point)—and sweeping out a ring around a circle of zero radius is indistinguishable from simply rotating a line segment about one of its endpoints and sweeping out a disk. Mamikon's insight was to recognize the equivalence of the two constructions; and because they are equivalent, they yield equal areas. Moreover, so long as it is given that the tangent length is constant, the two starting curves need not be circular—a finding not easily proven by more traditional geometric methods. This yields Mamikon's theorem: The area of a tangent sweep is equal to the area of its tangent cluster, regardless of the shape of the original curve. Applications
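The annulus example can be checked against the traditional computation (a short verification added here): if the chord tangent to the inner circle has length $a$, the Pythagorean theorem applied at the point of tangency gives

\[
R^2 = r^2 + \left(\tfrac{a}{2}\right)^2,
\qquad\text{so}\qquad
\pi R^2 - \pi r^2 = \pi \left(\tfrac{a}{2}\right)^2,
\]

which depends only on the chord length, exactly as Mamikon's sweeping argument predicts: the translated tangent segments fill a disk of radius $a/2$.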
https://en.wikipedia.org/wiki/Cerf%20theory
In mathematics, at the junction of singularity theory and differential topology, Cerf theory is the study of families of smooth real-valued functions on a smooth manifold $M$, their generic singularities and the topology of the subspaces these singularities define, as subspaces of the function space. The theory is named after Jean Cerf, who initiated it in the late 1960s. An example Marston Morse proved that, provided $M$ is compact, any smooth function $f\colon M \to \mathbb{R}$ can be approximated by a Morse function. Thus, for many purposes, one can replace arbitrary functions on $M$ by Morse functions. As a next step, one could ask, 'if you have a one-parameter family of functions which start and end at Morse functions, can you assume the whole family is Morse?' In general, the answer is no. Consider, for example, the one-parameter family of functions on $\mathbb{R}$ given by $f_t(x) = \tfrac{1}{3}x^3 - tx$. At time $t = -1$, it has no critical points, but at time $t = 1$, it is a Morse function with two critical points at $x = \pm 1$. Cerf showed that a one-parameter family of functions between two Morse functions can be approximated by one that is Morse at all but finitely many degenerate times. The degeneracies involve a birth/death transition of critical points, as in the above example when, at $t = 0$, an index 0 and index 1 critical point are created as $t$ increases. A stratification of an infinite-dimensional space Returning to the general case where $M$ is a compact manifold, consider the space of Morse functions on $M$ inside the space of all real-valued smooth functions on $M$. Morse proved that the Morse functions form an open and dense subset of the space of smooth functions in the $C^\infty$ topology. For the purposes of intuition, here is an analogy. Think of the Morse functions as the top-dimensional open stratum in a stratification of the space of smooth functions (we make no claim that such a stratification exists, but suppose one does). Notice that in stratified spaces, the co-dimension 0 open stratum is open and dense. For notational purposes, reverse the conventions for indexing the stratifications in a stratified space, and index the open str
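The critical-point count in this example can be verified directly (a short check added here, using the cubic family written above):

\[
f_t(x) = \tfrac{1}{3}x^3 - tx, \qquad f_t'(x) = x^2 - t,
\]

so for $t < 0$ there are no real critical points, while for $t > 0$ there are two, at $x = \pm\sqrt{t}$: a local maximum (index 1) at $x = -\sqrt{t}$ and a local minimum (index 0) at $x = +\sqrt{t}$, born together at the degenerate time $t = 0$.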
https://en.wikipedia.org/wiki/Clustered%20file%20system
A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system (only direct attached storage for each node). Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance. Shared-disk file system A shared-disk file system uses a storage area network (SAN) to allow multiple computers to gain direct disk access at the block level. Access control and translation from file-level operations that applications use to block-level operations used by the SAN must take place on the client node. The most common type of clustered file system, the shared-disk file system —by adding mechanisms for concurrency control—provides a consistent and serializable view of the file system, avoiding corruption and unintended data loss even when multiple clients try to access the same files at the same time. Shared-disk file-systems commonly employ some sort of fencing mechanism to prevent data corruption in case of node failures, because an unfenced device can cause data corruption if it loses communication with its sister nodes and tries to access the same information other nodes are accessing. The underlying storage area network may use any of a number of block-level protocols, including SCSI, iSCSI, HyperSCSI, ATA over Ethernet (AoE), Fibre Channel, network block device, and InfiniBand. There are different architectural approaches to a shared-disk filesystem. Some distribute file information across all the servers in a cluster (fully distributed). Examples Blue Whale Clustered file system (BWFS) Silicon Graphics (SGI) clustered file system (CXFS) Veritas Cluster File Sys
https://en.wikipedia.org/wiki/Viviani%27s%20theorem
Viviani's theorem, named after Vincenzo Viviani, states that the sum of the distances from any interior point to the sides of an equilateral triangle equals the length of the triangle's altitude. It is a theorem commonly employed in various math competitions, secondary school mathematics examinations, and has wide applicability to many problems in the real world. Proof This proof depends on the readily-proved proposition that the area of a triangle is half its base times its height—that is, half the product of one side with the altitude from that side. Let ABC be an equilateral triangle whose height is h and whose side is a. Let P be any point inside the triangle, and u, s, t the distances of P from the sides. Draw a line from P to each of A, B, and C, forming three triangles PAB, PBC, and PCA. Now, the areas of these triangles are $\tfrac{u a}{2}$, $\tfrac{s a}{2}$, and $\tfrac{t a}{2}$. They exactly fill the enclosing triangle, so the sum of these areas is equal to the area of the enclosing triangle. So we can write: $\tfrac{u a}{2} + \tfrac{s a}{2} + \tfrac{t a}{2} = \tfrac{a h}{2}$ and thus $u + s + t = h$. Q.E.D. Converse The converse also holds: If the sum of the distances from an interior point of a triangle to the sides is independent of the location of the point, the triangle is equilateral. Applications Viviani's theorem means that lines parallel to the sides of an equilateral triangle give coordinates for making ternary plots, such as flammability diagrams. More generally, they allow one to give coordinates on a regular simplex in the same way. Extensions Parallelogram The sum of the distances from any interior point of a parallelogram to the sides is independent of the location of the point. The converse also holds: If the sum of the distances from a point in the interior of a quadrilateral to the sides is independent of the location of the point, then the quadrilateral is a parallelogram. The result generalizes to any 2n-gon with opposite sides parallel. Since the sum of distances between any pair of opposite parallel sides is constant, it follows that the sum of a
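A quick numerical check of the statement (an illustrative script added here, not part of the article) for an equilateral triangle of side 2:

# Numeric check of Viviani's theorem: the distances from a random interior
# point to the three sides sum to the altitude.
import math, random

a = 2.0
A, B, C = (0.0, 0.0), (a, 0.0), (a / 2.0, a * math.sqrt(3) / 2.0)
altitude = a * math.sqrt(3) / 2.0

def dist_to_line(p, q, r):
    """Distance from point p to the line through q and r."""
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return abs((rx - qx) * (qy - py) - (qx - px) * (ry - qy)) / math.hypot(rx - qx, ry - qy)

# Random interior point via barycentric coordinates
w = sorted([0.0, random.random(), random.random(), 1.0])
l1, l2, l3 = w[1] - w[0], w[2] - w[1], w[3] - w[2]
P = (l1 * A[0] + l2 * B[0] + l3 * C[0], l1 * A[1] + l2 * B[1] + l3 * C[1])

total = dist_to_line(P, A, B) + dist_to_line(P, B, C) + dist_to_line(P, C, A)
print(abs(total - altitude) < 1e-12)   # True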
https://en.wikipedia.org/wiki/Keystone%20effect
The keystone effect is the apparent distortion of an image caused by projecting it onto an angled surface. It is the distortion of the image dimensions, such as making a square look like a trapezoid, the shape of an architectural keystone, hence the name of the feature. In the typical case of a projector sitting on a table, and looking upwards to the screen, the image is larger at the top than on the bottom. Some areas of the screen may not be focused correctly as the projector lens is focused at the average distance only. In photography, the term is used to describe the apparent leaning of buildings towards the vertical centerline of the photo when shooting upwards, a common effect in architectural photography. Likewise, when taking photos looking down, e.g., from a skyscraper, buildings appear to get broader towards the top. The effect is usually corrected by either using special lenses in tilt–shift photography or in post-processing using modern image editing software. Theory The distortion suffered by the image depends on the angle of the projector to the screen, and the beam angle. The distortion (on a two-dimensional model, and for small focus angles) is best approximated by: where is the angle between the screen axis and the central ray from the projector, and is the width of the focus. From the formula, it is clear that there will be no distortion when is zero, or perpendicular to the screen. In stereo imaging In stereoscopy, two lenses are used to view the same subject image, each from a slightly different perspective, allowing a three-dimensional view of the subject. If the two images are not exactly parallel, this causes a keystone effect. This is particularly noticeable when the lenses are close to the subject, as with a stereo microscope, but is also a common problem with many 3D stereo camera lenses. Solving the problem The problem arises for screen projectors that don't have the depth of focus necessary to keep all lines (from top to
https://en.wikipedia.org/wiki/Clam%20dip
Clam dip is a dipping sauce and condiment prepared with clams, sour cream or cream cheese, and seasonings as primary ingredients. Various additional ingredients can be used. It is usually served chilled, although it is sometimes served hot or at room temperature. It is used as a dip for potato chips, crackers, bread, and crudités. Commercial varieties of clam dip are mass-produced by some companies and marketed to consumers in grocery stores and supermarkets. History In the early 1950s in the United States, the first televised recipe for clam dip appeared on the Kraft Music Hall show, a radio and television variety program broadcast on NBC from 1933 to 1971. After the recipe segment aired, canned clams in New York City reportedly sold out within 24 hours. The ingredients used in this recipe were minced canned clams, cream cheese, lemon juice, Worcestershire sauce, garlic, salt and pepper. Clam dip remained popular throughout the 1960s and 1970s in the U.S., at which time prepared manufactured clam dips were available in U.S. supermarkets. As various tomato-based salsas gained more popularity with American consumers beginning in the late 1980s and 1990s, the popularity of clam dip and similar dips made with sour cream and cream cheese declined. Preparation Clam dip is typically prepared using chopped or minced clams, sour cream or cream cheese, and various seasonings, and usually served chilled. It is used as a dip for potato chips, bread, crackers, and crudités. It has a creamy texture and mouthfeel. Canned, cooked, and frozen or fresh clams may be used, the latter of which can be cooked by steaming or pan cooking. Canned clams can be drained, or the liquid can be retained and used as an ingredient. After refrigeration, the dip may thicken, and the liquid from canned clams can be used to thin the dip. Milk or cream is also sometimes used to thin clam dip. When refrigerated overnight, the flavors of the ingredients intermingle more greatly, resulting in a more fl
https://en.wikipedia.org/wiki/Fluhrer%2C%20Mantin%20and%20Shamir%20attack
In cryptography, the Fluhrer, Mantin and Shamir attack is a stream cipher attack on the widely used RC4 stream cipher. The attack allows an attacker to recover the key in an RC4 encrypted stream from a large number of messages in that stream. The Fluhrer, Mantin and Shamir attack applies to specific key derivation methods, but does not apply in general to RC4-based SSL (TLS), since SSL generates the encryption keys it uses for RC4 by hashing, meaning that different SSL sessions have unrelated keys. However, the closely related bar mitzvah attack, based on the same research and revealed in 2015, does exploit those cases where weak keys are generated by the SSL keying process. Background The Fluhrer, Mantin and Shamir (FMS) attack, published in their 2001 paper "Weaknesses in the Key Scheduling Algorithm of RC4", takes advantage of a weakness in the RC4 key scheduling algorithm to reconstruct the key from encrypted messages. The FMS attack gained popularity in network attack tools including AirSnort, weplab, and aircrack, which use it to recover the key used by WEP protected wireless networks. This discussion will use the below RC4 key scheduling algorithm (KSA). begin ksa(with int keylength, with byte key[keylength]) for i from 0 to 255 S[i] := i endfor j := 0 for i from 0 to 255 j := (j + S[i] + key[i mod keylength]) mod 256 swap(S[i],S[j]) endfor end The following pseudo-random generation algorithm (PRGA) will also be used. begin prga(with byte S[256]) i := 0 j := 0 while GeneratingOutput: i := (i + 1) mod 256 j := (j + S[i]) mod 256 swap(S[i],S[j]) output S[(S[i] + S[j]) mod 256] endwhile end The attack The basis of the FMS attack lies in the use of weak initialization vectors (IVs) used with RC4. RC4 encrypts one byte at a time with a keystream output from prga(); RC4 uses the key to initialize a state machine via ksa(), and then continuously
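For experimentation, the two routines above translate directly into Python (a transcription added here; RC4 is cryptographically broken and should not be used in new designs). The final lines sketch how WEP feeds a per-packet IV plus the long-term key into ksa(), which is the structure the FMS attack exploits; the IV and key values are arbitrary examples.

# Python transcription of the KSA/PRGA pseudocode above (for experimentation
# only; RC4 must not be used in new designs).
def ksa(key):
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def prga(S, length):
    S = S[:]                      # don't mutate the caller's state
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# In WEP, the key passed to ksa() is the 3-byte per-packet IV followed by the
# long-term key; the FMS attack exploits "weak" IVs of the form (A+3, 255, x).
iv, secret = bytes([3, 255, 7]), b"secretkey"
keystream = prga(ksa(iv + secret), 16)
print(keystream.hex())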
https://en.wikipedia.org/wiki/Sonchus%20oleraceus
Sonchus oleraceus is a species of flowering plant in the tribe Cichorieae of the family Asteraceae, native to Europe and Western Asia. It has many common names including common sowthistle, sow thistle, smooth sow thistle, annual sow thistle, hare's colwort, hare's thistle, milky tassel, milk thistle, and soft thistle. Its specific epithet means "vegetable/herbal". The common name 'sow thistle' refers to its attractiveness to pigs, and the similarity of the leaf to younger thistle plants. The common name 'hare's thistle' refers to its purported beneficial effects on hares and rabbits. Botanical characteristics This annual plant has a hollow, upright stem up to high. It prefers full sun, and can tolerate most soil conditions. The flowers are hermaphroditic, and common pollinators include bees and flies. It spreads by seeds being carried by wind or water. This plant is considered an invasive species in many parts of the world, where it is found mostly in disturbed areas. In Australia it is a common and widespread invasive species, with large infestations a serious problem in crops. Cuisine Leaves are eaten as salad greens or cooked like spinach. This is one of the species used in Chinese cuisine as kŭcài (苦菜; lit. bitter vegetable). The younger leaves are less bitter and better to eat raw. Steaming can remove the bitterness of older leaves. The younger roots are also edible and can suffice as a coffee substitute. Nutritive qualities Nutritional analysis reveals 30 – 40 mg of vitamin C per 100g of plant, 1.2% protein, 0.3% fat, 2.4% carbohydrate. Leaf dry matter analysis per 100 g (likely to vary with growing conditions) shows: 45 g carbohydrate, 28 g protein, 22 g ash, 5.9 g fibre, 4.5 g fat; in all, providing 265 calories. Minerals Calcium: 1500 mg Phosphorus: 500 mg Iron: 45.6 mg Magnesium: 0 mg Sodium: 0 mg Potassium: 0 mg Zinc: 0 mg Vitamins A: 35 mg Thiamine (B1): 1.5 mg Riboflavin (B2): 5 mg Niacin: 5 mg B6: 0 mg C: 60 mg Herbalism Sonchus oleraceu
https://en.wikipedia.org/wiki/Freund%E2%80%93Rubin%20compactification
Freund–Rubin compactification is a form of dimensional reduction in which a field theory in d-dimensional spacetime, containing gravity and some field whose field strength is a rank s antisymmetric tensor, 'prefers' to be reduced down to a spacetime with a dimension of either s or d-s. Derivation Consider general relativity in d spacetime dimensions. In the presence of an antisymmetric tensor field (without external sources), the Einstein field equations, and the equations of motion for the antisymmetric tensor are Where the stress–energy tensor takes the form Being a rank s antisymmetric tensor, the field strength has a natural ansatz for its solution, proportional to the Levi-Civita tensor on some s-dimensional manifold. Here, the indices run over s of the dimensions of the ambient d-dimensional spacetime, is the determinant of the metric of this s-dimensional subspace, and is some constant with dimensions of mass-squared (in natural units). Since the field strength is nonzero only on an s-dimensional submanifold, the metric is naturally separated into two parts, of block-diagonal form with , , and extending over the same s dimensions as the field strength , and , , and covering the remaining d-s dimensions. Having separated our d dimensional space into the product of two subspaces, Einstein's field equations allow us to solve for the curvature of these two sub-manifolds, and we find We find that the Ricci curvatures of the s- and (d-s)-dimensional sub-manifolds are necessarily opposite in sign. One must have positive curvature, and the other must have negative curvature, and so one of these manifolds must be compact. Consequently, at scales significantly larger than that of the compact manifold, the universe appears to have either s or (d-s) dimensions, as opposed to the underlying d. As an important example of this, 11D-Supergravity contains a 3-form antisymmetric tensor with a 4-form field strength, and consequently prefers to compactify 7 o
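For the 11-dimensional supergravity example just mentioned, the standard Freund–Rubin ansatz can be written schematically as follows (a conventional presentation added here; the constant $f$ and the normalizations depend on conventions):

\[
F_{\mu\nu\rho\sigma} = f\,\epsilon_{\mu\nu\rho\sigma}
\ \ \text{on the four-dimensional factor},
\qquad
F = 0 \ \ \text{otherwise},
\]

so the four-dimensional factor acquires negative (anti-de Sitter) curvature while the remaining seven dimensions form a positively curved compact space, as in the $AdS_4 \times S^7$ solution.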
https://en.wikipedia.org/wiki/IPC%20%28electronics%29
IPC is a trade association whose aim is to standardize the assembly and production requirements of electronic equipment and assemblies. It was founded in 1957 as the Institute of Printed Circuits. Its name was later changed to the Institute for Interconnecting and Packaging Electronic Circuits to highlight the expansion from bare boards to packaging and electronic assemblies. In 1999, the organization formally changed its name to IPC with the accompanying tagline, Association Connecting Electronics Industries. IPC is accredited by the American National Standards Institute (ANSI) as a standards developing organization and is known globally for its standards. It publishes the most widely used acceptability standards in the electronics industry. IPC is headquartered in Bannockburn, Illinois, United States with additional offices in Washington, D.C. and Atlanta, Ga. in the United States, and overseas offices in China, Thailand, Vietnam, India and Belgium. Standards IPC standards are used by the electronics manufacturing industry. IPC-A-610, Acceptability of Electronic Assemblies, is used worldwide by original equipment manufacturers and EMS companies. There are more than 3600 trainers worldwide who are certified to train and test on the standard. Standards are created by committees of industry volunteers. Task groups have been formed in China, the United States, and Denmark. Standards published by IPC include: General documents IPC-T-50 Terms and Definitions IPC-2615 Printed Board Dimensions and Tolerances IPC-D-325 Documentation Requirements for Printed Boards IPC-A-31 Flexible Raw Material Test Pattern IPC-ET-652 Guidelines and Requirements for Electrical Testing of Unpopulated Printed Boards Design specifications IPC-2612 Sectional Requirements for Electronic Diagramming Documentation (Schematic and Logic Descriptions) IPC-2141A Design Guide for High-Speed Controlled Impedance Circuit Boards IPC-2221 Generic Standard on Printed Board Design IPC-2223 S
https://en.wikipedia.org/wiki/Eigenvalue%20perturbation
In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system that is perturbed from one with known eigenvectors and eigenvalues . This is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system. This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article is focused on the case of the perturbation of a simple eigenvalue (see in multiplicity of eigenvalues). Why generalized eigenvalues? In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are a key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh-Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The Finite element method is a widespread particular case. In classical mechanics, we may find generalized eigenvalues when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix , the potential strain energy provides the rigidity matrix . To get details, for example see the first section of this article of Weinstein (1941, in French) With both methods, we obtain a system of differential equations or Matrix differential equation with the mass matrix , the damping matrix and the rigidity matrix . If we neglect the damping effect, we use , we can look for a solution of the following form ; we obtain that and are solution of the generalized eigenvalue proble
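The sensitivity statement in the opening paragraph can be illustrated numerically (a sketch added here, using the ordinary rather than the generalized problem; matrix sizes and values are arbitrary): for a symmetric matrix with a simple eigenvalue and unit eigenvector $x$, a small symmetric perturbation $\delta A$ shifts that eigenvalue by approximately $x^\top \delta A\, x$.

# First-order perturbation of a simple eigenvalue (illustrative example):
# for symmetric A with unit eigenvector x and eigenvalue lam, a small
# symmetric change dA shifts the eigenvalue by approximately x^T dA x.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)); A = (A + A.T) / 2               # symmetric base matrix
dA = 1e-4 * rng.normal(size=(n, n)); dA = (dA + dA.T) / 2    # small symmetric perturbation

lam, vecs = np.linalg.eigh(A)
k = 2                                                        # pick one simple eigenvalue
x = vecs[:, k]

predicted = lam[k] + x @ dA @ x                              # first-order estimate
actual = np.linalg.eigh(A + dA)[0][k]

print(f"first-order: {predicted:.10f}")
print(f"exact:       {actual:.10f}")                         # agree to roughly 1e-8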
https://en.wikipedia.org/wiki/L%C2%B2%20cohomology
In mathematics, L2 cohomology is a cohomology theory for smooth non-compact manifolds M with Riemannian metric. It is defined in the same way as de Rham cohomology except that one uses square-integrable differential forms. The notion of square-integrability makes sense because the metric on M gives rise to a norm on differential forms and a volume form. L2 cohomology, which grew in part out of L2 d-bar estimates from the 1960s, was studied cohomologically, independently by Steven Zucker (1978) and Jeff Cheeger (1979). It is closely related to intersection cohomology; indeed, the results in the preceding cited works can be expressed in terms of intersection cohomology. Another such result is the Zucker conjecture, which states that for a Hermitian locally symmetric variety the L2 cohomology is isomorphic to the intersection cohomology (with the middle perversity) of its Baily–Borel compactification (Zucker 1982). This was proved in different ways by Eduard Looijenga (1988) and by Leslie Saper and Mark Stern (1990). See also Dirichlet form Dirichlet principle Riemannian manifold
https://en.wikipedia.org/wiki/GEC%204000%20series
The GEC 4000 was a series of 16/32-bit minicomputers produced by GEC Computers Ltd in the United Kingdom during the 1970s, 1980s and early 1990s. History GEC Computers was formed in 1968 as a business unit of the GEC conglomerate. It inherited from Elliott Automation the ageing Elliott 900 series, and needed to develop a new range of systems. Three ranges were identified, known internally as Alpha, Beta, and Gamma. Alpha appeared first and became the GEC 2050 8-bit minicomputer. Beta followed and became the GEC 4080. Gamma was never developed, so a few of its enhanced features were consequently pulled back into the 4080. The principal designer of the GEC 4080 was Dr. Michael Melliar-Smith and the principal designer of the 4060 and 4090 was Peter Mackley. The 4000 series systems were developed and manufactured in the UK at GEC Computers' Borehamwood offices in Elstree Way. Development and manufacture transferred to the company's new factories in Woodside Estate, Dunstable in the late 1970s. In 1979, GEC Computers was awarded the Queen's Award for Technical Achievement for the development of the 4000 series, particularly Nucleus. By 1991, the number of systems manufactured was falling off, so manufacture was transferred to GPT's Beeston, Nottinghamshire factory and development returned to Borehamwood. The last systems were manufactured around 1995. There were still a few GEC 4220 systems operating in 2018 with maintenance provided by Telent, and some GEC 4310 were operating until 2013. London Underground continues to use GEC 4190 systems in 2022. Nucleus The GEC 4000 series hardware and firmware included a pioneering facility known as Nucleus. Nucleus implements a number of features which are more usually implemented within an operating system kernel, and consequently operating systems running on GEC 4000 series systems do not need to directly provide these features themselves. Nucleus firmware cannot be reprogrammed by any code running on the system, and this m
https://en.wikipedia.org/wiki/Lafayette%20Mendel
Lafayette Benedict Mendel (February 5, 1872 – December 9, 1935) was an American biochemist known for his work in nutrition, with longtime collaborator Thomas B. Osborne, including the study of Vitamin A, Vitamin B, lysine and tryptophan. Life Mendel was born in Delhi, New York, son of Benedict Mendel, a merchant born in Aufhausen, Germany in 1833, and Pauline Ullman, born in Eschenau, Germany. His father immigrated to the United States from Germany in 1851, his mother in 1870. At 15, he won a New York State scholarship. Mendel studied classics, economics and the humanities, as well as biology and chemistry at Yale University and graduated with honors in 1891. He then began graduate work at the Sheffield Scientific School on a fellowship and studied physiological chemistry under Russell Henry Chittenden. He finished his Ph.D. 1893 after only two years; his thesis topic was the study of the seed storage protein edestin extracted from hemp seed. Upon graduation, he began as an assistant at the Sheffield School in Physiological chemistry. He also studied in Germany and was made an assistant professor on his return in 1896. He became a full professor in 1903 with appointments in the Yale School of Medicine and the Yale Graduate School as well as Sheffield. With Chittenden, Mendel became one of the founders of the science of nutrition. Together with longtime collaborator Thomas B. Osborne he established essential amino acids. As early as 1910 he found an important growth factor...later to be known as vitamin B. In 1903, at age 31, he was appointed full professor of physiological chemistry. In promoting Mendel, Yale made him one of the first high-ranking Jewish professors in the United States. Capping his illustrious career Mendel was appointed Sterling Professor of Physiological Chemistry in 1921. Of the twenty professors to be designated Sterling professors in the decade following their inception in 1920, only two were selected before Mendel. Of the twenty, M
https://en.wikipedia.org/wiki/Elliptic%20boundary%20value%20problem
In mathematics, an elliptic boundary value problem is a special kind of boundary value problem which can be thought of as the stable state of an evolution problem. For example, the Dirichlet problem for the Laplacian gives the eventual distribution of heat in a room several hours after the heating is turned on. Differential equations describe a large class of natural phenomena, from the heat equation describing the evolution of heat in (for instance) a metal plate, to the Navier-Stokes equation describing the movement of fluids, including Einstein's equations describing the physical universe in a relativistic way. Although all these equations are boundary value problems, they are further subdivided into categories. This is necessary because each category must be analyzed using different techniques. The present article deals with the category of boundary value problems known as linear elliptic problems. Boundary value problems and partial differential equations specify relations between two or more quantities. For instance, in the heat equation, the rate of change of temperature at a point is related to the difference of temperature between that point and the nearby points so that, over time, the heat flows from hotter points to cooler points. Boundary value problems can involve space, time and other quantities such as temperature, velocity, pressure, magnetic field, etc. Some problems do not involve time. For instance, if one hangs a clothesline between the house and a tree, then in the absence of wind, the clothesline will not move and will adopt a gentle hanging curved shape known as the catenary. This curved shape can be computed as the solution of a differential equation relating position, tension, angle and gravity, but since the shape does not change over time, there is no time variable. Elliptic boundary value problems are a class of problems which do not involve the time variable, and instead only depend on space variables. The main example In two dim
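For concreteness, a hedged rendering of the standard two-dimensional model problem, the Dirichlet problem for the Laplacian on a bounded domain (the symbols here are ours, chosen to match common usage rather than quoted from the article):

```latex
% Find u on a bounded domain \Omega \subset \mathbb{R}^2 with boundary \partial\Omega.
\[
-\Delta u \;=\;
  -\left(\frac{\partial^{2} u}{\partial x^{2}} + \frac{\partial^{2} u}{\partial y^{2}}\right)
  \;=\; f \quad \text{in } \Omega,
\qquad
u \;=\; g \quad \text{on } \partial\Omega .
\]
```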
https://en.wikipedia.org/wiki/Society%20for%20Hematology%20and%20Stem%20Cells
The Society for Hematology and Stem Cells (formerly the International Society for Experimental Hematology) is a learned society which deals with hematology, the study of the blood system and its diseases, including those caused by exposure to nuclear radiation. It was founded in 1950, and held its first official meeting in Milwaukee in 1972. Its mission statement is: "To promote the scientific knowledge and clinical application of basic hematology, immunology, stem cell research, cell and gene therapy and related aspects of research through publications, discussions, scientific meetings and the support of young investigators." Dr. Margaret Goodell of the Baylor College of Medicine's Center for Cell and Gene Therapy is the current president. At the opening ceremony of the 30th annual meeting of ISEH, Emperor Akihito of Japan praised the "remarkable results obtained by the ISEH today in the treatment of radiation-related disorders", by contrast to the lack of any effective treatment for such disorders in 1945 when atomic bombs were dropped on Hiroshima and Nagasaki. The society has an official journal, Experimental Hematology, which has an impact factor of 2.907.
https://en.wikipedia.org/wiki/Immunoglobulin%20class%20switching
Immunoglobulin class switching, also known as isotype switching, isotypic commutation or class-switch recombination (CSR), is a biological mechanism that changes a B cell's production of immunoglobulin from one type to another, such as from the isotype IgM to the isotype IgG. During this process, the constant-region portion of the antibody heavy chain is changed, but the variable region of the heavy chain stays the same (the terms variable and constant refer to changes or lack thereof between antibodies that target different epitopes). Since the variable region does not change, class switching does not affect antigen specificity. Instead, the antibody retains affinity for the same antigens, but can interact with different effector molecules. Mechanism Class switching occurs after activation of a mature B cell via its membrane-bound antibody molecule (or B cell receptor) to generate the different classes of antibody, all with the same variable domains as the original antibody generated in the immature B cell during the process of V(D)J recombination, but possessing distinct constant domains in their heavy chains. Naïve mature B cells produce both IgM and IgD, which are the first two heavy chain segments in the immunoglobulin locus. After activation by antigen, these B cells proliferate. If these activated B cells encounter specific signaling molecules via their CD40 and cytokine receptors (both modulated by T helper cells), they undergo antibody class switching to produce IgG, IgA or IgE antibodies. During class switching, the constant region of the immunoglobulin heavy chain changes but the variable regions do not, and therefore antigenic specificity remains the same. This allows different daughter cells from the same activated B cell to produce antibodies of different isotypes or subtypes (e.g. IgG1, IgG2 etc.). In humans, the order of the heavy chain exons is as follows: μ - IgM δ - IgD γ3 - IgG3 γ1 - IgG1 α1 - IgA1 γ2 - IgG2 γ4 - IgG4 ε - IgE α2
https://en.wikipedia.org/wiki/Optimal%20facility%20location
The study of facility location problems (FLP), also known as location analysis, is a branch of operations research and computational geometry concerned with the optimal placement of facilities to minimize transportation costs while considering factors like avoiding placing hazardous materials near housing, and competitors' facilities. The techniques also apply to cluster analysis. Minimum facility location A simple facility location problem is the Weber problem, in which a single facility is to be placed, with the only optimization criterion being the minimization of the weighted sum of distances from a given set of point sites. More complex problems considered in this discipline include the placement of multiple facilities, constraints on the locations of facilities, and more complex optimization criteria. In a basic formulation, the facility location problem consists of a set of potential facility sites L where a facility can be opened, and a set of demand points D that must be serviced. The goal is to pick a subset F of facilities to open, to minimize the sum of distances from each demand point to its nearest facility, plus the sum of opening costs of the facilities. The facility location problem on general graphs is NP-hard to solve optimally, by reduction from (for example) the set cover problem. A number of approximation algorithms have been developed for the facility location problem and many of its variants. Without assumptions on the set of distances between clients and sites (in particular, without assuming that the distances satisfy the triangle inequality), the problem is known as non-metric facility location and can be approximated to within a factor O(log n). This factor is tight, via an approximation-preserving reduction from the set cover problem. If we assume distances between clients and sites are undirected and satisfy the triangle inequality, we are talking about a metric facility location (MFL) problem. The MFL is still NP-hard and hard to
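A small illustrative sketch of the basic formulation just described, as a brute-force search over tiny instances (all data and function names below are hypothetical; practical solvers rely on the approximation algorithms discussed above):

```python
from itertools import combinations

def ufl_cost(open_sites, opening_cost, dist):
    """Uncapacitated facility location objective: opening costs plus each
    client's distance to its nearest open facility.
    dist[j][i] = distance from client j to candidate site i."""
    service = sum(min(row[i] for i in open_sites) for row in dist)
    return service + sum(opening_cost[i] for i in open_sites)

def ufl_brute_force(opening_cost, dist):
    """Try every non-empty subset of sites; only viable for tiny instances."""
    sites = range(len(opening_cost))
    best = None
    for k in range(1, len(opening_cost) + 1):
        for subset in combinations(sites, k):
            cost = ufl_cost(subset, opening_cost, dist)
            if best is None or cost < best[0]:
                best = (cost, subset)
    return best

# Hypothetical toy instance: 3 candidate sites, 4 clients.
opening_cost = [3.0, 2.0, 4.0]
dist = [[1.0, 4.0, 6.0],
        [2.0, 3.0, 5.0],
        [5.0, 1.0, 2.0],
        [6.0, 2.0, 1.0]]
print(ufl_brute_force(opening_cost, dist))   # (minimum cost, chosen sites)
```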
https://en.wikipedia.org/wiki/Schubert%20variety
In algebraic geometry, a Schubert variety is a certain subvariety of a Grassmannian Gr(k, V) of k-dimensional subspaces of a vector space V, usually with singular points. Like the Grassmannian, it is a kind of moduli space, whose elements satisfy conditions giving lower bounds to the dimensions of the intersections of its elements W with the elements of a specified complete flag. Here V may be a vector space over an arbitrary field, but most commonly this is taken to be either the real or the complex numbers. A typical example is the set X of 2-dimensional subspaces W of a 4-dimensional space V that intersect a fixed (reference) 2-dimensional subspace V2 nontrivially. Over the real number field, this can be pictured in usual xyz-space as follows. Replacing subspaces with their corresponding projective spaces, and intersecting with an affine coordinate patch of the projective space of V, we obtain an open subset X° ⊂ X. This is isomorphic to the set of all lines L (not necessarily through the origin) which meet the x-axis. Each such line L corresponds to a point of X°, and continuously moving L in space (while keeping contact with the x-axis) corresponds to a curve in X°. Since there are three degrees of freedom in moving L (moving the point on the x-axis, rotating, and tilting), X is a three-dimensional real algebraic variety. However, when L is equal to the x-axis, it can be rotated or tilted around any point on the axis, and this excess of possible motions makes L a singular point of X. More generally, a Schubert variety in Gr(k, V) is defined by specifying the minimal dimension of intersection of a k-dimensional subspace W with each of the spaces in a fixed reference complete flag V1 ⊂ V2 ⊂ ⋯ ⊂ Vn = V, where dim Vi = i. (In the example above, this would mean requiring certain intersections of the line L with the x-axis and the xy-plane.) In even greater generality, given a semisimple algebraic group G with a Borel subgroup B and a standard parabolic subgroup P, it is known that the homogeneous space G/P, which is an example of a flag variety
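In symbols, the typical example above can be written as follows (our notation: V is the 4-dimensional ambient space and V_2 is the fixed reference 2-plane):

```latex
% Illustrative notation, not quoted from the article.
\[
X \;=\; \bigl\{\, W \in \mathrm{Gr}(2, V) \;:\; \dim(W \cap V_{2}) \ge 1 \,\bigr\},
\qquad \dim V = 4 .
\]
```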
https://en.wikipedia.org/wiki/Standard%20conjectures%20on%20algebraic%20cycles
In mathematics, the standard conjectures about algebraic cycles are several conjectures describing the relationship of algebraic cycles and Weil cohomology theories. One of the original applications of these conjectures, envisaged by Alexander Grothendieck, was to prove that his construction of pure motives gave an abelian category that is semisimple. Moreover, as he pointed out, the standard conjectures also imply the hardest part of the Weil conjectures, namely the "Riemann hypothesis" conjecture that remained open at the end of the 1960s and was proved later by Pierre Deligne; for details on the link between Weil and standard conjectures, see . The standard conjectures remain open problems, so that their application gives only conditional proofs of results. In quite a few cases, including that of the Weil conjectures, other methods have been found to prove such results unconditionally. The classical formulations of the standard conjectures involve a fixed Weil cohomology theory . All of the conjectures deal with "algebraic" cohomology classes, which means a morphism on the cohomology of a smooth projective variety induced by an algebraic cycle with rational coefficients on the product via the cycle class map, which is part of the structure of a Weil cohomology theory. Conjecture A is equivalent to Conjecture B (see , p. 196), and so is not listed. Lefschetz type Standard Conjecture (Conjecture B) One of the axioms of a Weil theory is the so-called hard Lefschetz theorem (or axiom): Begin with a fixed smooth hyperplane section , where is a given smooth projective variety in the ambient projective space and is a hyperplane. Then for , the Lefschetz operator , which is defined by intersecting cohomology classes with , gives an isomorphism . Now, for define: The conjecture states that the Lefschetz operator () is induced by an algebraic cycle. Künneth type Standard Conjecture (Conjecture C) It is conjectured that the projectors are algebraic, i.e.
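A hedged summary of the hard Lefschetz axiom referred to above, for a smooth projective variety X of dimension n, with L denoting the Lefschetz operator (cup product with the class of the hyperplane section); the indexing convention here is ours and may differ slightly from the article's:

```latex
% Hard Lefschetz: iterating L gives isomorphisms between complementary degrees.
\[
L^{\,k} \colon H^{\,n-k}(X) \;\xrightarrow{\ \sim\ }\; H^{\,n+k}(X),
\qquad 0 \le k \le n .
\]
```

Conjecture B then asserts that the operator Λ inverting L in the appropriate degrees is induced by an algebraic cycle on X × X.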
https://en.wikipedia.org/wiki/Thermal%20contact
In heat transfer and thermodynamics, a thermodynamic system is said to be in thermal contact with another system if it can exchange energy through the process of heat. Perfect thermal isolation is an idealization as real systems are always in thermal contact with their environment to some extent. When two solid bodies are in contact, a resistance to heat transfer exists between the bodies. The study of heat conduction between such bodies is called thermal contact conductance (or thermal contact resistance).
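As a hedged illustration (a standard textbook relation under one common convention, not quoted from the article), thermal contact conductance h_c relates the heat flow Q across a contact interface of area A to the temperature drop between the two bodies, and its reciprocal gives the contact resistance R_c:

```latex
% One common convention; some references define the resistance per unit area instead.
\[
Q \;=\; h_{c}\, A\, (T_{1} - T_{2}),
\qquad
R_{c} \;=\; \frac{T_{1} - T_{2}}{Q} \;=\; \frac{1}{h_{c}\, A} .
\]
```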
https://en.wikipedia.org/wiki/Differentially%20closed%20field
In mathematics, a differential field K is differentially closed if every finite system of differential equations with a solution in some differential field extending K already has a solution in K. This concept was introduced by . Differentially closed fields are the analogues for differential equations of algebraically closed fields for polynomial equations. The theory of differentially closed fields We recall that a differential field is a field equipped with a derivation operator. Let K be a differential field with derivation operator ∂. A differential polynomial in x is a polynomial in the formal expressions x, ∂x, ∂2x, ... with coefficients in K. The order of a non-zero differential polynomial in x is the largest n such that ∂nx occurs in it, or −1 if the differential polynomial is a constant. The separant Sf of a differential polynomial of order n≥0 is the derivative of f with respect to ∂nx. The field of constants of K is the subfield of elements a with ∂a=0. In a differential field K of nonzero characteristic p, all pth powers are constants. It follows that neither K nor its field of constants is perfect, unless ∂ is trivial. A field K with derivation ∂ is called differentially perfect if it is either of characteristic 0, or of characteristic p and every constant is a pth power of an element of K. A differentially closed field is a differentially perfect differential field K such that if f and g are differential polynomials such that Sf≠ 0 and g≠0 and f has order greater than that of g, then there is some x in K with f(x)=0 and g(x)≠0. (Some authors add the condition that K has characteristic 0, in which case Sf is automatically non-zero, and K is automatically perfect.) DCFp is the theory of differentially closed fields of characteristic p (where p is 0 or a prime). Taking g=1 and f any ordinary separable polynomial shows that any differentially closed field is separably closed. In characteristic 0 this implies that it is algebraically closed, but i
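A short worked example of the definitions above, in notation of our own choosing: over a differential field (K, ∂), consider the differential polynomial

```latex
% Illustrative example: f has order 1, and its separant is the derivative
% of f with respect to the highest derivative of x appearing in it.
\[
f \;=\; (\partial x)^{2} - 4x,
\qquad
S_{f} \;=\; \frac{\partial f}{\partial(\partial x)} \;=\; 2\,\partial x .
\]
```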
https://en.wikipedia.org/wiki/Maynard%20Electronics
Maynard Electronics was an American company based in Lake Mary, Florida that produced magnetic tape data storage related products. The company was founded by Kim and Alison Knapp in 1982. It was acquired by Archive Corp. in 1989, but the brand was maintained. In order to make it easier to sell tape drives, the company created driver software that came to be called MaynStream. After Conner Peripherals acquired Archive, the software product was renamed Backup Exec. External links History of Backup Exec. 1982 establishments in Florida 1989 establishments in Florida American companies established in 1982 American companies disestablished in 1989 Companies based in Seminole County, Florida Computer companies established in 1982 Computer companies disestablished in 1989 Computer storage companies Defunct companies based in Florida Defunct computer companies of the United States Manufacturing companies based in Florida
https://en.wikipedia.org/wiki/Spraint
Spraint is the dung of the otter. Spraints are typically identified by smell and are known for their distinct odors, the smell of which has been described as ranging from freshly mown hay to putrefied fish. The Eurasian otter's spraints are black and slimy, long and deposited in groups of up to four in prominent locations near water. They contain scales, shells and bones of water creatures. Because of the decline of otters in Britain, several surveys have been made to record the distribution of the animal, usually by recording the presence of spraint.
https://en.wikipedia.org/wiki/Panda%20pornography
Panda pornography (or panda porn) refers generally to movies depicting mating pandas, intended to promote sexual arousal in captive giant pandas. Under zoo conditions, the animals have, in general, proven unenthusiastic about mating, placing their species in danger of extinction. However, in their natural habitat in the wild, pandas are much more successful at mating, particularly as individuals are able to select for behavioural compatibility, as opposed to researchers choosing couples for genetic diversity purposes and trying to predict the narrow window when the females are in the mood. History The method was popularized following reports of an experiment performed by zoologists in Thailand, in which they showed several captive giant pandas at Chiang Mai Zoo a number of videos showing other giant pandas mating. Though the researchers behind the project state that they believe there have been successful matings due to the use of mating videos for the animals, such success so far has not been achieved outside of China, where 31 cubs were born over a ten-month period following commencement of the experiment. Other methods, including the use of Viagra to sexually stimulate pandas, have thus far been unsuccessful. Still, panda pornography is thought to be rarely used by most zoo-keepers, as the animals have poor eyesight, making scent and sound more important. The director of giant panda breeding in Chengdu, Zhang Zhihe, reports that the mating time of pandas ranges from 30 seconds to several minutes. In terms of pornography, recorded footage of mating pandas with both sights and sounds is used to draw the interest of pandas. In addition, pandas are made to do specialized exercises that strengthen the males' hind legs and stamina.
https://en.wikipedia.org/wiki/Resistance%20mutation%20%28virology%29
A resistance mutation is a mutation in a virus gene that allows the virus to become resistant to treatment with a particular antiviral drug. The term was first used in the management of HIV, the first virus in which genome sequencing was routinely used to look for drug resistance. At the time of infection, a virus will infect and begin to replicate within a preliminary cell. As subsequent cells are infected, random mutations will occur in the viral genome. When these mutations begin to accumulate, antiviral methods will kill the wild type strain, but will not be able to kill one or many mutated forms of the original virus. At this point a resistance mutation has occurred because the new strain of virus is now resistant to the antiviral treatment that would have killed the original virus. Resistance mutations are evident and widely studied in HIV due to its high rate of mutation and prevalence in the general population. Resistance mutation is now studied in bacteriology and parasitology. Mechanisms Resistance mutations can occur through several mechanisms from single nucleotide substitutions to combinations of amino acid substitutions, deletions and insertions. Over time, these new genetic lines will persist if they become resistant to treatments being used against them. It has been shown that pathogens will favor and become more resistant to treatment in common host genotypes through frequency-dependent selection. Further, strict adherence to a retroviral regimen correlates to a strong decrease in retroviral resistance mutations. There are five classes of drug that are used to treat HIV infection, and resistance mutations can effect the efficacy of these treatments as well. Entry inhibitors block the ability of HIV to enter its target cells. HIV must bind to a CD4 receptor on a T cell or the CCR5/CXCR4 co-receptors to enter the cell. They can also block the fusion of the viral and cell membranes. Entry inhibitors modify protein residues on the membrane of the T ce
https://en.wikipedia.org/wiki/Harald%20Ganzinger
Harald Ganzinger (31 October 1950, Werneck – 3 June 2004, Saarbrücken) was a German computer scientist who together with Leo Bachmair developed the superposition calculus, which is (as of 2007) used in most of the state-of-the-art automated theorem provers for first-order logic. He received his Ph.D. from the Technical University of Munich in 1978. Before 1991 he was a Professor of Computer Science at University of Dortmund. Then he joined the Max Planck Institute for Computer Science in Saarbrücken shortly after it was founded in 1991. Until 2004 he was the Director of the Programming Logics department of the Max Planck Institute for Computer Science and honorary professor at Saarland University. His research group created the SPASS automated theorem prover. He received the Herbrand Award in 2004 (posthumous) for his important contributions to automated theorem proving.
https://en.wikipedia.org/wiki/Kvant-1
Kvant-1 (; English: Quantum-I/1) (37KE) was the first module to be attached in 1987 to the Mir Core Module, which formed the core of the Soviet space station Mir. It remained attached to Mir until the entire space station was deorbited in 2001. The Kvant-1 module contained scientific instruments for astrophysical observations and materials science experiments. It was used to conduct research into the physics of active galaxies, quasars and neutron stars and it was uniquely positioned for studies of the Supernova SN 1987A. Furthermore, it supported biotechnology experiments in anti-viral preparations and fractions. Some additions to Kvant-1 during its lifetime were solar arrays and the Sofora and Rapana girders. The Kvant-1 module was based on the TKS spacecraft and was the first, experimental version of a planned series of '37K' type modules. The 37K modules featured a jettisonable TKS-E type propulsion module, also called the Functional Service Module (FSM). The control system of Kvant-1 had been developed by NPO "Electropribor" (Kharkiv, Ukraine). After previous engineering tests with the Salyut 6 and Salyut 7 space stations (and temporarily attached TKS-derived space station modules like Kosmos 1267, Kosmos 1443 and Kosmos 1686) it became the first space station module to be attached semi-permanently to the first modular space station in the history of space flight. Kvant-1 was originally planned to be docked to the Salyut 7 space station, the plans however evolved to launch to Mir, initially considered on board the Soviet Buran space shuttle, which finally changed to a launch to Mir by the Proton-K rocket. Background The Kvant spacecraft represented the first use of a new kind of Soviet space station module, designated 37K. An order authorising the beginning of development was issued on 17 September 1979. The basic 37K design consisted of a 4.2 m diameter pressurised cylinder with a docking port at the forward end. It was not equipped with its own propuls
https://en.wikipedia.org/wiki/Concurrency%20and%20Coordination%20Runtime
Concurrency and Coordination Runtime (CCR) is an asynchronous programming library based on the .NET Framework from Microsoft, distributed with Microsoft Robotics Developer Studio (MRDS). Even though it comes with MRDS, it is not limited to modelling robotic behavior but can be used to express asynchronous behavior in any application. The CCR runtime includes a Dispatcher class that implements a thread pool with a fixed number of threads, all of which can execute simultaneously. Each dispatcher includes a queue (called a DispatcherQueue) of delegates, which represent the entry point to a procedure (called a work item) that can be executed asynchronously. The work items are then distributed across the threads for execution. A dispatcher object also contains a generic Port, which is a queue where the result of the asynchronous execution of a work item is put. Each work item can be associated with a ReceiverTask object which consumes the result for further processing. An Arbiter manages the ReceiverTask objects and invokes them when the results they are expecting are ready and put on the Port queue. In May 2010, the CCR was made available, along with the entire Robotics Developer Studio, in one free package, Microsoft Robotics Developer Studio 2008 R3. CCR was last updated in RDS R4 in 2012. It is no longer under development. Asynchronous programming is now supported in Visual Studio languages such as C# through built-in language features. See also Parallel Extensions Joins Microsoft Robotics Developer Studio
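A conceptual sketch in Python of the dispatcher/port/receiver arrangement described above. This is only an analogy built from the Python standard library, not the CCR API; all names below are hypothetical:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

port = queue.Queue()            # plays the role of a Port: results land here

def work_item(n):
    # A procedure executed asynchronously; it posts its result to the port.
    port.put(n * n)

def receiver():
    # Plays the role of a receiver task: consumes results as they arrive.
    for _ in range(4):
        print("received", port.get())

# The executor plays the role of the dispatcher's fixed-size thread pool.
with ThreadPoolExecutor(max_workers=2) as dispatcher:
    dispatcher.submit(receiver)
    for n in range(4):
        dispatcher.submit(work_item, n)
```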
https://en.wikipedia.org/wiki/H2AFX
H2A histone family member X (usually abbreviated as H2AX) is a type of histone protein from the H2A family encoded by the H2AFX gene. An important phosphorylated form is γH2AX (S139), which forms when double-strand breaks appear. In humans and other eukaryotes, the DNA is wrapped around histone octamers, consisting of core histones H2A, H2B, H3 and H4, to form chromatin. H2AX contributes to nucleosome-formation, chromatin-remodeling and DNA repair, and is also used in vitro as an assay for double-strand breaks in dsDNA. Formation of γH2AX H2AX becomes phosphorylated on serine 139, then called γH2AX, as a reaction on DNA double-strand breaks (DSB). The kinases of the PI3-family (Ataxia telangiectasia mutated, ATR and DNA-PKcs) are responsible for this phosphorylation, especially ATM. The modification can happen accidentally during replication fork collapse or in the response to ionizing radiation but also during controlled physiological processes such as V(D)J recombination. γH2AX is a sensitive target for looking at DSBs in cells. The presence of γH2AX by itself, however, is not the evidence of the DSBs. The role of the phosphorylated form of the histone in DNA repair is under discussion but it is known that because of the modification the DNA becomes less condensed, potentially allowing space for the recruitment of proteins necessary during repair of DSBs. Mutagenesis experiments have shown that the modification is necessary for the proper formation of ionizing radiation induced foci in response to double strand breaks, but is not required for the recruitment of proteins to the site of DSBs. Function DNA damage response The histone variant H2AX constitutes about 2-25% of the H2A histones in mammalian chromatin. When a double-strand break occurs in DNA, a sequence of events occurs in which H2AX is altered. Very early after a double-strand break, a specific protein that interacts with and affects the architecture of chromatin is phosphorylated and then relea
https://en.wikipedia.org/wiki/Index%20group
In operator theory, a branch of mathematics, every Banach algebra can be associated with a group called its abstract index group. Definition Let A be a Banach algebra and G the group of invertible elements in A. The set G is open and a topological group. Consider the identity component G0, or in other words the connected component containing the identity 1 of A; G0 is a normal subgroup of G. The quotient group ΛA = G/G0 is the abstract index group of A. Because G0, being the component of an open set, is both open and closed in G, the index group is a discrete group. Examples Let L(H) be the Banach algebra of bounded operators on a Hilbert space. The set of invertible elements in L(H) is path connected. Therefore, ΛL(H) is the trivial group. Let T denote the unit circle in the complex plane. The algebra C(T) of continuous functions from T to the complex numbers is a Banach algebra, with the topology of uniform convergence. A function in C(T) is invertible (meaning that it has a pointwise multiplicative inverse, not that it is an invertible function) if it does not map any element of T to zero. The group G0 consists of elements homotopic, in G, to the identity in G, the constant function 1. One can choose the functions fn(z) = z^n as representatives in G of distinct homotopy classes of maps T→T. Thus the index group ΛC(T) is the set of homotopy classes, indexed by the winding number of its members; hence ΛC(T) is isomorphic to the fundamental group of T. It is a countable discrete group. The Calkin algebra K is the quotient C*-algebra of L(H) with respect to the compact operators. Suppose π is the quotient map. By Atkinson's theorem, an invertible element in K is of the form π(T) where T is a Fredholm operator. The index group ΛK is again a countable discrete group. In fact, ΛK is isomorphic to the additive group of integers Z, via the Fredholm index. In other words, for Fredholm operators, the two notions of index coincide.
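In compact form (notation ours, summarizing the examples above):

```latex
% Abstract index group and the two examples discussed in the text.
\[
\Lambda_{A} \;=\; G / G_{0},
\qquad
\Lambda_{C(\mathbf{T})} \;\cong\; \pi_{1}(\mathbf{T}) \;\cong\; \mathbb{Z}
\ \ (\text{winding number}),
\qquad
\Lambda_{K} \;\cong\; \mathbb{Z}
\ \ (\text{Fredholm index}) .
\]
```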
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20of%20Immunobiology%20and%20Epigenetics
The Max Planck Institute of Immunobiology and Epigenetics (German: Max-Planck-Institut für Immunbiologie und Epigenetik) in Freiburg, Germany is an interdisciplinary research institute that conducts basic research in modern immunobiology, developmental biology and epigenetics. It was founded in 1961 as the Max Planck Institute of Immunobiology and is one of 86 institutions of the Max Planck Society. Originally named the Max Planck Institute of Immunobiology, it was renamed to its current name in 2010 as it widened its research thrusts to the study of epigenetics. The researchers of the institute study the development of the immune system and analyse the genes and molecules which are important for its function. They also seek to establish which factors control the maturation of immune cells and how chemical changes of the DNA influence the immune defense. The biologist Georges J. F. Köhler, a co-recipient of the 1984 Nobel Prize in Physiology or Medicine, was director of the institute from 1984 until his death in 1995. History The institute was founded in 1961 and grew out of the research activities of the pharmaceutical company Wander AG in Freiburg. By the 1970s, MPIIE was engaged in studies focusing on interactions between infectious agents, particularly endotoxin, and the human immune system. The research scope was then expanded into cellular and molecular mechanisms of B and T cells in the next decade. From the 1990s, the institute focused increasingly on genetic imprinting and epigenetics. The research fields were later expanded to include molecular mechanisms of lymphoid cell differentiation and the regulation of genes via extracellular signals. In 2007, the Max Planck Institute of Immunobiology included epigenetics as a new research department and thus the institute was formally renamed the Max Planck Institute of Immunobiology and Epigenetics in 2010. Organization The Max Planck Institute of Immunobiology and Epigenetics is organised into four department
https://en.wikipedia.org/wiki/Application%20delivery%20network
An application delivery network (ADN) is a suite of technologies that, when deployed together, provide availability, security, visibility, and acceleration for Internet applications such as websites. ADN components provide supporting functionality that enables website content to be delivered to visitors and other users of that website, in a fast, secure, and reliable way. Gartner defines application delivery networking as the combination of WAN optimization controllers (WOCs) and application delivery controllers (ADCs). At the data center end of an ADN is the ADC, an advanced traffic management device that is often also referred to as a web switch, content switch, or multilayer switch, the purpose of which is to distribute traffic among a number of servers or geographically dislocated sites based on application specific criteria. In the branch office portion of an ADN is the WAN optimization controller, which works to reduce the number of bits that flow over the network using caching and compression, and shapes TCP traffic using prioritization and other optimization techniques. Some WOC components are installed on PCs or mobile clients, and there is typically a portion of the WOC installed in the data center. Application delivery networks are also offered by some CDN vendors. The ADC, one component of an ADN, evolved from layer 4-7 switches in the late 1990s when it became apparent that traditional load balancing techniques were not robust enough to handle the increasingly complex mix of application traffic being delivered over a wider variety of network connectivity options. Application delivery techniques The Internet was designed according to the end-to-end principle. This principle keeps the core network relatively simple and moves the intelligence as much as possible to the network end-points: the hosts and clients. An Application Delivery Network (ADN) enhances the delivery of applications across the Internet by employing a number of optimization techniqu
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Infection%20Biology
The Max Planck Institute for Infection Biology (MPIIB) is a non-university research institute of the Max Planck Society located in the heart of Berlin in Berlin-Mitte. It was founded in 1993. Arturo Zychlinsky is currently the Managing Director. The MPIIB is divided into nine research groups, two partner groups and two Emeritus Groups of the founding director Stefan H. E. Kaufmann and the director emeritus Thomas F. Meyer. The department "Regulation in Infection Biology" headed by 2020 Nobel laureate Emmanuelle Charpentier was hived off as an independent research center in May 2018. The Max Planck Unit for the Science of Pathogens is now administratively independent of the Max Planck Institute for Infection Biology. In October 2019, Igor Iatsenko and Matthieu Domenech de Cellès established their research groups at the institute; Mark Cronan started his position as research group leader in March 2020. Silvia Portugal joined the institute in June 2020 as Lise Meitner Group Leader. Two more research groups were added in 2020: Felix M. Key joined in September and Olivia Majer in October, completing the reorganization of the Max Planck Institute for Infection Biology. Simone Reber joined as Max Planck Fellow in 2023 and now heads the research group Quantitative Biology. Research Groups Mark Cronan heads the research group "In vivo cell biology of infections". The group is investigating how granulomas develop in the course of a tuberculosis infection and how host-directed therapies can be used to protect host organisms against infections. Matthieu Domenech de Cellès is the leader of the research group "Infectious Disease Epidemiology". The group focuses on the population biology of infectious diseases, with a view to understanding how individual-level mechanisms of infection translate into population-level dynamics. Igor Iatsenko heads the research group "Genetics of Host-Microbe Interactions". Its aim is to understand the mechanisms of how the host discriminates and r
https://en.wikipedia.org/wiki/T-J%20model
In solid-state physics, the t-J model is a model first derived in 1977 from the Hubbard model by Józef Spałek to explain the antiferromagnetic properties of Mott insulators, taking into account experimental results on the strength of electron-electron repulsion in these materials. The model treats the material as a lattice with atoms at the nodes (sites) and just one or two external electrons moving among them (internal electrons are not considered), as in the basic Hubbard model. The difference is that the electrons are assumed to be strongly correlated, meaning that they are very sensitive to their mutual Coulomb repulsion and are therefore more strongly constrained to avoid occupying lattice sites already occupied by another electron. In the basic Hubbard model the repulsion, denoted U, can be small or even zero, and electrons are freer to jump (hopping, parametrized by t as transfer or tunnelling) from one site to another. In the t-J model, instead of U there is the parameter J, a function of the ratio t/U, hence the name. It is used as a possible model to explain high-temperature superconductivity in doped antiferromagnets, under the hypothesis of strong coupling between the electrons. The Hamiltonian In quantum physics, models of a system are usually based on the Hamiltonian operator, corresponding to the total energy of that system, including both kinetic energy and potential energy. The t-J Hamiltonian can be derived from the Hamiltonian of the Hubbard model using the Schrieffer–Wolff transformation, with the transformation generator depending on t/U and excluding the possibility for electrons to doubly occupy a lattice site. The term in t corresponds to the kinetic energy and is equal to the one in the Hubbard model. The second term is the potential energy approximated at second order, because this is an approximation of the Hubbard model in the limit U >> t, developed in powers of t. Terms of higher order can be added.
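A commonly quoted form of the resulting Hamiltonian is sketched below (hedged: sign conventions and the inclusion of three-site terms vary between references, and this is not necessarily the exact expression intended in the text above); it acts in the subspace with no doubly occupied sites:

```latex
% Projected hopping plus antiferromagnetic exchange; J arises at second order in t/U.
\[
H \;=\; -t \sum_{\langle i,j\rangle,\,\sigma}
        \bigl( \tilde{c}^{\dagger}_{i\sigma}\, \tilde{c}_{j\sigma} + \text{h.c.} \bigr)
   \;+\; J \sum_{\langle i,j\rangle}
        \Bigl( \mathbf{S}_{i}\!\cdot\!\mathbf{S}_{j} - \tfrac{1}{4}\, n_{i} n_{j} \Bigr),
\qquad J = \frac{4t^{2}}{U},
\]
```

where the sum over ⟨i,j⟩ runs over nearest-neighbour pairs, the tilded operators are electron operators projected onto singly occupied sites, the S_i are spin operators and the n_i are number operators.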
https://en.wikipedia.org/wiki/Japanese%20Journal%20of%20Applied%20Physics
The Japanese Journal of Applied Physics is a peer-reviewed scientific journal that was established in 1962 and is published by the Japan Society of Applied Physics. From 1982 until 2008, the journal was published in two editions, Part 1 and Part 2: Part 1 was published monthly and was for regular papers, short notes and review papers. Part 2 was published semi-monthly and was for letters and express letters. In 2008, Part 2 was separated as an independent journal and renamed Applied Physics Express. Part 1 continues to be published as the Japanese Journal of Applied Physics. In June 2013, the Japan Society of Applied Physics signed an agreement with IOP Publishing for its journals to be published by IOP Publishing. See also Applied Physics Express
https://en.wikipedia.org/wiki/Japan%20Society%20of%20Applied%20Physics
(JSAP) is a Japanese group of researchers in the field of applied physics. JSAP originated in 1932 from a voluntary forum of researchers belonging to the University of Tokyo and the Institute of Physical and Chemical Research. During World War II, most research, even applied, was frozen. In 1946, the society was established as an official academic society. Oyo Buturi Oyo Buturi () is the membership subscription of the Japan Society of Applied Physics. It is published monthly, in Japanese. Oyo Buturi International (1998) and JSAP International (2000-2008) are related English counterparts to Oyo Buturi. Publications of the Japan Society of Applied Physics Japanese Journal of Applied Physics Applied Physics Express Optical Review Oyo Buturi Oyo Buturi International JSAP International See also The Physical Society of Japan Optical Society of Japan
https://en.wikipedia.org/wiki/Thor-CD
Thor-CD was a re-recordable CD format proposed in 1988 by Tandy. Several years before recordable compact discs were introduced, Tandy Corporation announced a similar CD format named Thor-CD, but after being pushed back for several years, it was finally cancelled due to steep manufacturing costs. At the time Tandy proposed the new format, CDs were mostly used for digital music, but not for other digital data. Tandy aimed to change this with its new format. However, the introduction of the CD-ROM format, which was incompatible with Tandy's proposal, all but killed Tandy's product. See also Vaporware
https://en.wikipedia.org/wiki/Skoda%E2%80%93El%20Mir%20theorem
The Skoda–El Mir theorem is a theorem of complex geometry, stated as follows: Theorem (Skoda, El Mir, Sibony). Let X be a complex manifold, and E a closed complete pluripolar set in X. Consider a closed positive current Θ on the complement X ∖ E which is locally integrable around E. Then the trivial extension of Θ to X is closed on X. Notes
https://en.wikipedia.org/wiki/Riemann%E2%80%93Hilbert%20correspondence
In mathematics, the term Riemann–Hilbert correspondence refers to the correspondence between regular singular flat connections on algebraic vector bundles and representations of the fundamental group, and more generally to one of several generalizations of this. The original setting appearing in Hilbert's twenty-first problem was for the Riemann sphere, where it was about the existence of systems of linear regular differential equations with prescribed monodromy representations. First the Riemann sphere may be replaced by an arbitrary Riemann surface and then, in higher dimensions, Riemann surfaces are replaced by complex manifolds of dimension > 1. There is a correspondence between certain systems of partial differential equations (linear and having very special properties for their solutions) and possible monodromies of their solutions. Such a result was proved for algebraic connections with regular singularities by Pierre Deligne (1970, generalizing existing work in the case of Riemann surfaces) and more generally for regular holonomic D-modules by Masaki Kashiwara (1980, 1984) and Zoghman Mebkhout (1980, 1984) independently. In the setting of nonabelian Hodge theory, the Riemann-Hilbert correspondence provides a complex analytic isomorphism between two of the three natural algebraic structures on the moduli spaces, and so is naturally viewed as a nonabelian analogue of the comparison isomorphism between De Rham cohomology and singular/Betti cohomology. Statement Suppose that X is a smooth complex algebraic variety. Riemann–Hilbert correspondence (for regular singular connections): there is a functor Sol called the local solutions functor, that is an equivalence from the category of flat connections on algebraic vector bundles on X with regular singularities to the category of local systems of finite-dimensional complex vector spaces on X. For X connected, the category of local systems is also equivalent to the category of complex representations of the fun
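Schematically, the correspondence for regular singular connections can be written as follows (our shorthand, for X smooth and connected with base point x):

```latex
% Illustrative shorthand for the equivalences stated in the text.
\[
\mathrm{Sol} \colon\;
\Bigl\{\text{flat connections on algebraic vector bundles on } X
        \text{ with regular singularities}\Bigr\}
\;\xrightarrow{\ \simeq\ }\;
\Bigl\{\text{local systems of finite-dimensional } \mathbb{C}\text{-vector spaces on } X\Bigr\}
\;\simeq\;
\mathrm{Rep}_{\mathbb{C}}\bigl(\pi_{1}(X, x)\bigr).
\]
```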
https://en.wikipedia.org/wiki/Cross-resistance
Cross-resistance is when something develops resistance to several substances that have a similar mechanism of action. For example, if a certain type of bacteria develops resistance to one antibiotic, that bacteria will also have resistance to several other antibiotics that target the same protein or use the same route to get into the bacterium. A real example of cross-resistance occurred for nalidixic acid and ciprofloxacin, which are both quinolone antibiotics. When bacteria developed resistance to ciprofloxacin, they also developed resistance to nalidixic acid because both drugs inhibit topoisomerase, a key enzyme in DNA replication. Due to cross-resistance, antimicrobial treatments like phage therapy can quickly lose their efficacy against bacteria. This makes cross-resistance an important consideration in designing evolutionary therapies. Definition Cross-resistance is the idea is that the development of resistance to one substance subsequently leads to resistance to one or more substances that can be resisted in a similar manner. It occurs when resistance is provided against multiple compounds through one single mechanism, like an efflux pump. Which can keep concentrations of a toxic substance at low levels and can do so for multiple compounds. Increasing the activity of such a mechanism in response to one compound then also has a similar effect on the others. The precise definition of cross-resistance depends on the field of interest. Pest management In pest management, cross-resistance is defined as the development of resistance by pest populations to multiple pesticides within a chemical family. Similar to the case of microbes, this may occur due to sharing binding target sites. One such example occurs in the case of cadherin mutations may result in cross resistance in H. armigera to Cry1Aa and Cry1Ab. There also exists multiple resistance in which resistance to multiple pesticides occurs via different resistances mechanisms as opposed to the same mechan
https://en.wikipedia.org/wiki/Xunlei
Xunlei Limited () is a Chinese multinational technology company and an online service provider founded in 2003. The subsidiary of Xunlei Limited, Shenzhen Xunlei Networking Technologies, Co., Ltd. () was formerly known as Sandai Technologies (Shenzhen) Inc. and changed its name to Shenzhen Xunlei Networking Technologies, Co., Ltd. in May 2005. Its headquarters are in Nanshan District, Shenzhen. In April 2014, Xunlei received an investment from a Chinese electronics company Xiaomi of $200 million. On 24 June 2014, it went public on the Nasdaq Stock Exchange, selling 7.315 million American depositary shares (ADS) at $12 and raising just shy of $88 million. According to the annual ranking of China's top 100 internet companies released by Ministry of Industry and Information Technology of the Chinese government, Xunlei occupied 42nd place in 2017's ranking. The main products developed by Xunlei Limited is the Xunlei download manager and Peer-to-peer software, supporting HTTP, FTP, eDonkey, and BitTorrent protocols. , it was the most commonly used BitTorrent client in the world. In October 2017, the company announced that it will transform itself into a blockchain company, and release a blockchain-based product named OneThing Cloud. OneThing Cloud users get LinkToken (a type of virtual token) for contributing their bandwidth to the Xunlei's content delivery network. Xunlei Ltd. announced that its board of directors has appointed Lei Chen, who is a former Tencent cloud computing unit leader, as its Chief Executive Officer of the Company and Director of the Board on June 29, 2017. On December 12, 2017, Xunlei announced that the board of directors of the Company has elected Mr. Chuan Wang as the Chairman of Board of Directors of the Company. Wang has been a director of Xunlei since March 2014. Wang is a co-founder of Xiaomi Inc., where he has served as its vice president since 2012. He is also the founder of Beijing Duokan Technology Co., Ltd., where he has served as its
https://en.wikipedia.org/wiki/Fed-batch%20culture
Fed-batch culture is, in the broadest sense, defined as an operational technique in biotechnological processes where one or more nutrients (substrates) are fed (supplied) to the bioreactor during cultivation and in which the product(s) remain in the bioreactor until the end of the run. An alternative description of the method is that of a culture in which "a base medium supports initial cell culture and a feed medium is added to prevent nutrient depletion". It is also a type of semi-batch culture. In some cases, all the nutrients are fed into the bioreactor. The advantage of fed-batch culture is that one can control the concentration of the fed substrate in the culture liquid at arbitrarily desired levels (in many cases, at low levels). Generally speaking, fed-batch culture is superior to conventional batch culture when controlling the concentrations of a nutrient (or nutrients) affects the yield or productivity of the desired metabolite. Types of bioprocesses The types of bioprocesses for which fed-batch culture is effective can be summarized as follows: 1. Substrate inhibition Nutrients such as methanol, ethanol, acetic acid, and aromatic compounds inhibit the growth of microorganisms even at relatively low concentrations. By adding such substrates properly, the lag time can be shortened and the inhibition of cell growth markedly reduced. 2. High cell density (high cell concentration) In a batch culture, to achieve very high cell concentrations, e.g. 50-100 g of dry cells/L, high initial concentrations of the nutrients in the medium are needed. At such high concentrations, the nutrients become inhibitory, even though they have no such effect at the normal concentrations used in batch cultures. 3. Glucose effect (Crabtree effect) In the production of baker's yeast from malt wort or molasses it has been recognized since the early 1900s that ethanol is produced even in the presence of sufficient dissolved oxygen (DO) if an excess of sugar is present in the culture.
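A hedged sketch of the standard fed-batch balance equations (textbook form, not taken from the article), with feed rate F carrying substrate at concentration S_F into a culture of volume V, biomass concentration X, substrate concentration S, specific growth rate μ and yield Y_{X/S}:

```latex
% Illustrative ideal-mixing balances; maintenance and product terms are omitted.
\[
\frac{dV}{dt} = F,
\qquad
\frac{d(VX)}{dt} = \mu\, V X,
\qquad
\frac{d(VS)}{dt} = F\, S_{F} \;-\; \frac{\mu}{Y_{X/S}}\, V X .
\]
```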
https://en.wikipedia.org/wiki/Comparison%20microscope
A comparison microscope is a device used to analyze side-by-side specimens. It consists of two microscopes connected by an optical bridge, which results in a split view window enabling two separate objects to be viewed simultaneously. This avoids the observer having to rely on memory when comparing two objects under a conventional microscope. History One of the first prototypes of a comparison microscope was developed in 1913 in Germany. In 1929, using a comparison microscope adapted for forensic ballistics, Calvin Goddard and his partner Phillip Gravelle were able to absolve the Chicago Police Department of participation in the St. Valentine's Day Massacre. Col. Calvin H. Goddard Philip O. Gravelle, a chemist, developed a comparison microscope for use in the identification of fired bullets and cartridge cases with the support and guidance of forensic ballistics pioneer Calvin Goddard. It was a significant advance in the science of firearms identification in forensic science. The firearm from which a bullet or cartridge case has been fired is identified by the comparison of the unique striae left on the bullet or cartridge case from the worn, machined metal of the barrel, breach block, extractor, or firing pin in the gun. It was Gravelle who mistrusted his memory. "As long as he could inspect only one bullet at a time with his microscope, and had to keep the picture of it in his memory until he placed the comparison bullet under the microscope, scientific precision could not be attained. He therefore developed the comparison microscope and Goddard made it work." Calvin Goddard perfected the comparison microscope and subsequently popularized its use. Sir Sydney Smith also appreciated the idea, emphasizing its importance in forensic science and firearms identification. He took the comparison microscope to Scotland and introduced it to the European scientists for firearms identification and other forensic science needs. Modern comparison microscope The modern inst
https://en.wikipedia.org/wiki/Month%20of%20bugs
A month of bugs is a strategy used by security researchers to draw attention to the lax security procedures of commercial software corporations. Researchers have started such projects for software products whose vendors they believe have shown themselves to be unresponsive and uncooperative toward security alerts, so that responsible disclosure is not working properly; the researchers then find and disclose one security vulnerability each day for one month. Examples The original "Month of Bugs" was the Month of Browser Bugs (MoBB) run by security researcher H. D. Moore. Subsequent similar projects include: The Month of Kernel Bugs (MoKB), which published kernel bugs for Mac OS X (now macOS), Linux, FreeBSD, Solaris and Windows, as well as four wireless driver bugs. The Month of Apple Bugs (MoAB), conducted by researchers Kevin Finisterre and LMH, which published bugs related to Mac OS X. The Month of PHP Bugs, sponsored by the Hardened PHP team, which published 44 PHP bugs. See also Fuzz testing Metasploit Project Vulnerability (computing)
https://en.wikipedia.org/wiki/Important%20ecological%20areas
Important ecological areas (IEAs) are habitat areas which, either by themselves or in a network, contribute significantly to an ecosystem’s productivity, biodiversity, and resilience. Appropriate management of key ecological features delineates the management boundaries of an IEA. The identification and protection of IEAs is an element of an ecosystem-based management approach. Important ecological areas may have varying levels of management of extractive activities, from monitoring up to and including designation as a marine reserve. IEAs have management measures tailored to the ecological features within the area, with consideration of socioeconomic factors, whereas marine reserves generally have a fixed management policy of no extraction or ‘no-take’. Nonetheless, a marine reserve may be the appropriate management policy for an IEA. The identification and management of IEAs is a form of ocean zoning. In the event that there is a series of linked IEAs within a large marine ecosystem, a collective action to manage the network, such as a marine sanctuary or national monument, may be warranted. Examples are tropical rainforests, oceans, forests, etc.
https://en.wikipedia.org/wiki/Allium%20moly
Allium moly, also known as yellow garlic, golden garlic and lily leek, is a species of flowering plant in the genus Allium, which also includes the flowering and culinary onions and garlic. A bulbous herbaceous perennial from the Mediterranean, it is edible and also used as a medicinal and ornamental plant. Occurrence and appearance Allium moly is primarily found in Spain and Southern France with additional populations in Italy, Austria, Czech Republic, Algeria, and Morocco. With lance-shaped grey-green leaves up to 30 cm long, in early summer it produces masses of star-shaped bright yellow flowers in dense umbels. The cultivar ‘Jeannine’ has gained the Royal Horticultural Society’s Award of Garden Merit. Variants formerly included Allium moly var. ambiguum, now called Allium roseum; Allium moly subsp. massaessylum, now called Allium massaessylum; Allium moly var. stamineum, now called Allium stamineum; Allium moly var. xericiense, now called Allium scorzonerifolium See also Moly (herb), mentioned in The Odyssey, from which Linnaeus took the species' name
https://en.wikipedia.org/wiki/Interferon%20alpha-n3
Interferon alpha-n3 (Alferon-N) is a medication consisting of purified natural human interferon alpha proteins used for the treatment of genital warts.
https://en.wikipedia.org/wiki/Mannitol%20salt%20agar
Mannitol salt agar or MSA is a commonly used selective and differential growth medium in microbiology. It encourages the growth of certain groups of bacteria while inhibiting the growth of others. It contains a high concentration (about 7.5–10%) of salt (NaCl), which is inhibitory to most bacteria, making MSA selective against most Gram-negative bacteria and selective for some Gram-positive bacteria (Staphylococcus, Enterococcus and Micrococcaceae) that tolerate high salt concentrations. It is also a differential medium for mannitol-fermenting staphylococci, containing the sugar alcohol mannitol and the indicator phenol red, a pH indicator for detecting acid produced by mannitol-fermenting staphylococci. Staphylococcus aureus produces yellow colonies with yellow zones, whereas coagulase-negative staphylococci produce small pink or red colonies with no colour change to the medium. If an organism can ferment mannitol, an acidic byproduct is formed that causes the phenol red in the agar to turn yellow. It is used for the selective isolation of presumptive pathogenic (pp) Staphylococcus species. Expected results Gram + Staphylococcus: fermenting mannitol: medium turns yellow (e.g. S. aureus) Gram + Staphylococcus: not fermenting mannitol, medium does not change color (e.g. S. epidermidis) Gram + Streptococcus: inhibited growth Gram -: inhibited growth Typical composition MSA typically contains: 5.0 g/L enzymatic digest of casein 5.0 g/L enzymatic digest of animal tissue 1.0 g/L beef extract 10.0 g/L D-mannitol 75.0 g/L sodium chloride 0.025 g/L phenol red 15.0 g/L agar pH 7.4 ± 0.2 at 25 °C
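The expected results above amount to a simple decision rule. A small helper capturing that logic might look like the following (a sketch only; the function name, arguments and return strings are illustrative, not taken from any standard, and real identification requires confirmatory testing):

```python
def interpret_msa(growth, medium_turns_yellow):
    """Presumptive reading of a mannitol salt agar plate,
    following the 'Expected results' list above."""
    if not growth:
        # Salt-sensitive organisms (most Gram-negatives, streptococci) are inhibited.
        return "growth inhibited: likely not a salt-tolerant staphylococcus"
    if medium_turns_yellow:
        # Mannitol fermentation acidifies the medium and turns phenol red yellow.
        return "salt-tolerant mannitol fermenter: presumptive S. aureus"
    return "salt-tolerant non-fermenter: e.g. S. epidermidis"

print(interpret_msa(growth=True, medium_turns_yellow=True))
```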
https://en.wikipedia.org/wiki/141st%20meridian%20east
The 141st meridian east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, Australasia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole. The 141st meridian east forms a great circle with the 39th meridian west. As a border On the island of New Guinea, the meridian defines part of the land border between Indonesia on the west and Papua New Guinea on the east. The Fly River forms the border where it flows west of the 141st meridian. South of the Fly, the border runs slightly to the east of, and parallel to, the meridian (see Indonesia–Papua New Guinea border). In Australia, it forms the eastern boundary of the state of South Australia, bordering Queensland and New South Wales. The border between South Australia and Victoria was originally proclaimed to be exactly on the 141st meridian, but measurement errors resulted in the present border being about west of this line at 140°57'45" (see South Australia–Victoria border dispute). From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 141st meridian east passes first through the Arctic Ocean, then crosses the New Siberian Islands of the Sakha Republic (Kotelny Island, Little Lyakhovsky Island and Great Lyakhovsky Island) and the Sannikov Strait of the East Siberian Sea, and continues southwards.
https://en.wikipedia.org/wiki/Associazione%20Luca%20Coscioni
The Luca Coscioni Association for the freedom of scientific research was founded on September 20, 2002 by Luca Coscioni, who had amyotrophic lateral sclerosis and was a member of the Italian Radical Party; he promoted the campaign for the freedom of scientific research on embryonic stem cells. Background and history In 2001 he was the head of the list of Radical candidates in the political elections and received the support of 51 Nobel Prize Laureates worldwide. In 2005, the Association launched the pro-referendum campaign aimed at abrogating Law 40, which imposed a ban on assisted fertilization and on research on embryonic stem cells, by collecting the signatures of over a million voters. Although the referendum query obtained a majority of votes, it was invalidated for not reaching the required quorum, following the pro-abstention campaign promoted by the Italian Catholic Church and by many political parties. The Association is also the promoter of the World Congress for Freedom of Scientific Research, an international platform of scientists, patients and citizens. In 2006, it played a leading role in the legal and political battle in favour of euthanasia and of the principle of a “Biological Testament” (Living Will) promoted by Piergiorgio Welby, the President of the Association, who suffered from progressive muscular dystrophy. After having addressed an open letter to the Italian President of the Republic and having filed a petition with the competent Court to respect his will to die, Piergiorgio Welby died on December 20, 2006 after having received sedation and being disconnected from mechanical pulmonary ventilation thanks to an act of civil disobedience organized by the Luca Coscioni Association. See also Italian Radical Party Nonviolent Radical Party Marco Pannella Luca Coscioni Academic freedom euthanasia civil disobedience External links Associazione Luca Coscioni World Congress for Freedom of Scientific Research Genetics organizations Politi
https://en.wikipedia.org/wiki/Astrobotany
Astrobotany is an applied sub-discipline of botany concerned with the study of plants in space environments. It is a branch of astrobiology and botany. Astrobotany concerns both the search for extraterrestrial vegetation and research into the growth of terrestrial vegetation in outer space by humans. The growing of plants in outer space, typically in weightless but pressurized, controlled environments known as space gardens, has long been a subject of study. In the context of human spaceflight, they can be consumed as food and/or provide a refreshing atmosphere. Plants can metabolize carbon dioxide in the air to produce valuable oxygen, and can help control cabin humidity. Growing plants in space may provide a psychological benefit to human spaceflight crews. The first challenge in growing plants in space is how to get plants to grow without gravity. This runs into difficulties regarding the effects of gravity on root development, providing appropriate types of lighting, and other challenges. In particular, the nutrient supply to roots, the nutrient biogeochemical cycles, and the microbiological interactions in soil-based substrates are particularly complex, but have been shown to make space farming possible in hypo- and micro-gravity. NASA plans to grow plants in space to help feed astronauts, and to provide psychological benefits for long-term space flight. Extraterrestrial vegetation Vegetation red edge The vegetation red edge (VRE) is a biosignature of near-infrared wavelengths that is observable through telescopic observation of Earth, and has increased in strength as evolution has made vegetative life more complex. On Earth, this phenomenon has been detected through analysis of planetshine on the Moon, which can show a reflection spectrum that spikes at 700 nm. In an article published in Nature in 1990, Sagan et al. described Galileo's detection of infrared light radiating from Earth as evidence of "widespread biological activity" o
https://en.wikipedia.org/wiki/Bethe%E2%80%93Salpeter%20equation
The Bethe–Salpeter equation (named after Hans Bethe and Edwin Salpeter) describes the bound states of a two-body (two particles) quantum field theoretical system in a relativistically covariant formalism. The equation was first published in 1950 at the end of a paper by Yoichiro Nambu, but without derivation. Due to its generality and its application in many branches of theoretical physics, the Bethe–Salpeter equation appears in many different forms. One form that is quite often used in high energy physics is, schematically, Γ = K S₁ S₂ Γ (with an integration over the internal relative momentum implied), where Γ is the Bethe–Salpeter amplitude, K the interaction and S₁, S₂ the propagators of the two participating particles. In quantum theory, bound states are objects with lifetimes that are much longer than the time-scale of the interaction ruling their structure (otherwise they are called resonances). Thus the constituents interact essentially infinitely many times. By summing up, infinitely many times, all possible interactions that can occur between the two constituents, the Bethe–Salpeter equation is a tool to calculate properties of bound states. Its solution, the Bethe–Salpeter amplitude, is a description of the bound state under consideration. As it can be derived via identifying bound states with poles in the S-matrix, it can be connected to the quantum theoretical description of scattering processes and Green's functions. The Bethe–Salpeter equation is a general quantum field theoretical tool, thus applications for it can be found in any quantum field theory. Some examples are positronium (bound state of an electron–positron pair), excitons (bound states of electron–hole pairs), and mesons (as quark–antiquark bound states). Even for simple systems such as the positronium, the equation cannot be solved exactly, although in principle it can be formulated exactly. A classification of the states can be achieved without the need for an exact solution. If one of the particles is significantly more massive than the other, the problem is considerably simplif
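Written out in momentum space, one common way this schematic equation appears is shown below. Momentum-routing and normalisation conventions vary between references, so this particular form should be read as an illustration rather than as the article's own notation; P is the total momentum of the bound state and p, k are relative momenta.

```latex
\Gamma(P,p) \;=\; \int \frac{d^{4}k}{(2\pi)^{4}} \;
    K(P,p,k)\, S\!\left(k+\tfrac{P}{2}\right)\, \Gamma(P,k)\, S\!\left(k-\tfrac{P}{2}\right)
```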
https://en.wikipedia.org/wiki/Maximal%20set
In recursion theory, the mathematical theory of computability, a maximal set is a coinfinite recursively enumerable subset A of the natural numbers such that for every further recursively enumerable subset B of the natural numbers, either B is cofinite or B is a finite variant of A or B is not a superset of A. This gives an easy definition within the lattice of the recursively enumerable sets. Maximal sets have many interesting properties: they are simple, hypersimple, hyperhypersimple and r-maximal; the latter property says that every recursive set R contains either only finitely many elements of the complement of A or almost all elements of the complement of A. There are r-maximal sets that are not maximal; some of them do not even have maximal supersets. Myhill (1956) asked whether maximal sets exist and Friedberg (1958) constructed one. Soare (1974) showed that the maximal sets form an orbit with respect to automorphism of the recursively enumerable sets under inclusion (modulo finite sets). On the one hand, every automorphism maps a maximal set A to another maximal set B; on the other hand, for every two maximal sets A, B there is an automorphism of the recursively enumerable sets such that A is mapped to B.
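Stated symbolically, the definition above reads as follows (a direct transcription, with r.e. abbreviating recursively enumerable; the notation here is illustrative rather than the article's own):

```latex
% A is maximal iff A is r.e., its complement is infinite, and every
% r.e. superset B of A is either cofinite or differs from A only finitely.
A \text{ maximal} \;\iff\; A \text{ r.e.},\ \ \mathbb{N}\setminus A \text{ infinite},\ \text{and }
\forall B \text{ r.e.}\;\bigl(A \subseteq B \;\rightarrow\; \mathbb{N}\setminus B \text{ finite} \;\lor\; B\setminus A \text{ finite}\bigr)
```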
https://en.wikipedia.org/wiki/Western%20Palaearctic
The Western Palaearctic or Western Palearctic is part of the Palaearctic realm, one of the eight biogeographic realms dividing the Earth's surface. Because of its size, the Palaearctic is often divided for convenience into two, with Europe, North Africa, northern and central parts of the Arabian Peninsula, and part of temperate Asia, roughly to the Ural Mountains forming the western zone, and the rest of temperate Asia becoming the Eastern Palaearctic. Its exact boundaries differ depending on the authority in question, but the Handbook of the Birds of Europe, the Middle East, and North Africa: The Birds of the Western Palearctic (BWP) definition is widely used, and is followed by the most popular Western Palearctic checklist, that of the Association of European Rarities Committees (AERC). The Western Palearctic realm includes mostly boreal and temperate climate ecoregions. The Palaearctic region has been recognised as a natural zoogeographic region since Sclater proposed it in 1858. The oceans to the north and west, and the Sahara to the south are obvious natural boundaries with other realms, but the eastern boundary is more arbitrary, since it merges into another part of the same realm, and the mountain ranges used as markers are less effective biogeographic separators. The climate differences across the Western Palearctic region can cause behavioral differences within the same species across geographical distance, such as in the sociality of behavior for bees of the species Lasioglossum malachurum.
https://en.wikipedia.org/wiki/List%20of%20Linux-supported%20computer%20architectures
The basic components of the Linux family of operating systems, which are based on the Linux kernel, the GNU C Library, BusyBox or forks thereof like μClinux and uClibc, have been programmed with a certain level of abstraction in mind. Also, there are distinct code paths in the assembly language or C source code which support certain hardware. Therefore, the source code can be successfully compiled on, or cross-compiled for, a great number of computer architectures. Furthermore, the required free and open-source software has also been developed to interface between Linux and the hardware Linux is to be executed on. Compilers are available, e.g. the GNU Compiler Collection (GCC) and LLVM/Clang. For cross-compilation a number of complete toolchains are available, like the GNU toolchain, OpenWrt Buildroot or OpenEmbedded. The Yocto Project is targeted at embedded use cases. The portability section of the Linux kernel article contains information and references to technical details. Note that further components like a display server, or programs like Blender, can be present or absent. Fundamentally any software has to be ported, i.e. specifically adapted, to any kind of hardware it is supposed to be executed on. The level of abstraction that has been kept in mind while programming that software in the first place dictates the necessary effort. The relevant term for the porting target is computer architecture; it comprises the instruction set(s) and the microarchitecture(s) of the processor(s), at least of the CPU. The target also comprises the "system design" of the entire system, be it a supercomputer, a desktop computer or some SoC, e.g. in case some unique bus is being used. In former times, the memory controller was part of the chipset on the motherboard and not on the CPU-die. Although the support of a specific instruction set is the task of the compiler, the software must be written with a certain level of abstraction in mind to make this portability pos
https://en.wikipedia.org/wiki/No%20Fem%20el%20CIM
The "No fem el CIM" (NFC) movement was founded in 2003 in the Penedès region of Catalonia. It was then that locals learned that the Catalan government wanted to construct a dry port between the towns of Banyeres del Penedès, l'Arboç i Sant Jaume dels Domenys. However, it was not until August 2005 that the movement began to take-off, gaining national recognition, spurred by the release of the government's 544 acre (220 hectare) plan for the inland port. Since then, members of the movement have held numerous protests as well as meetings with local and national government entities in an attempt to prevent dry port construction. Support At the end of 2008, the groups that had come out in support of the NFC movement included: 1 Mayor Board of Baix Penedès 2 County Boards and Commissions: Baix Penedès, Alt Penedès 14 City Councils: l'Arboç, Banyeres del Penedès, Bellvei, la Bisbal del Penedès, la Granada, Llorenç del Penedès, Sant Jaume dels Domenys, Torrelavit, Olesa de Bonesvalls, Sant Pere de Riudebitlles, Cunit, El Vendrell, Vilafranca del Penedès i Vilanova i la Geltrú. 5 Business Associations 5 Labor Unions 7 Environmental Groups 9 Foundations and Associations 54 Cultural Groups various national and local political groups 5.385 Catalan residents 1.120 allegation to the PTPCT (planification of Tarragona Area) Structure The "No fem el CIM" movement is a grassroots movement that operates by means of five autonomous committees: The NFC Communication Committee The NFC Publicity Committee The NFC Fundraising Committee The NFC Political Action Committee The NFC Legal Action Committee External links NoFemelCIM Web NoFemelCIM Blog NoFemelCIM Youtube NoFemelCIM Facebook NoFemelCIM Twitter NoFemelCIM Wikipedia NoFemelCIM Flickr NoFemelCIM Vimeo NoFemelCIM Rss Organisations based in Catalonia Baix Penedès Alt Penedès Ecology organizations
https://en.wikipedia.org/wiki/Hypersonic%20effect
The hypersonic effect is a phenomenon reported in a controversial scientific study by Tsutomu Oohashi et al., which claims that, although humans cannot consciously hear ultrasound (sounds at frequencies above approximately 20 kHz), the presence or absence of those frequencies has a measurable effect on their physiological and psychological reactions. Numerous other studies have contradicted the portion of the results relating to the subjective reaction to high-frequency audio, finding that people who have "good ears" listening to Super Audio CDs and high resolution DVD-Audio recordings on high fidelity systems capable of reproducing sounds up to 30 kHz cannot tell the difference between high resolution audio and the normal CD sampling rate of 44.1 kHz. Favoring evidence In research published in 2000 in the Journal of Neurophysiology, researchers described a series of objective and subjective experiments in which subjects were played music, sometimes containing high-frequency components (HFCs) above 25 kHz and sometimes not. The subjects could not consciously tell the difference, but when played music with the HFCs they showed differences measured in two ways: EEG monitoring of their brain activity showed statistically significant enhancement in alpha-wave activity The subjects preferred the music with the HFCs No effect was detected on listeners in the study when only the ultrasonic (frequencies higher than 24 kHz) portion of the test material was played for test subjects; the demonstrated effect was only present when comparing full-bandwidth to bandwidth-limited material. It is a common understanding in psychoacoustics that the ear cannot respond to sounds at such high frequency via an air-conduction pathway, so one question that this research raised was: does the hypersonic effect occur via the "ordinary" route of sound travelling through the air passage in the ear, or in some other way? A peer-reviewed study in 2006 seemed to confirm the second of these
https://en.wikipedia.org/wiki/Slater%27s%20rules
In quantum chemistry, Slater's rules provide numerical values for the effective nuclear charge in a many-electron atom. Each electron is said to experience less than the actual nuclear charge, because of shielding or screening by the other electrons. For each electron in an atom, Slater's rules provide a value for the screening constant, denoted by s, S, or σ, which relates the effective and actual nuclear charges as Z_eff = Z − s. The rules were devised semi-empirically by John C. Slater and published in 1930. Revised values of screening constants based on computations of atomic structure by the Hartree–Fock method were obtained by Enrico Clementi et al. in the 1960s. Rules Firstly, the electrons are arranged into a sequence of groups in order of increasing principal quantum number n, and for equal n in order of increasing azimuthal quantum number l, except that s- and p- orbitals are kept together. [1s] [2s,2p] [3s,3p] [3d] [4s,4p] [4d] [4f] [5s, 5p] [5d] etc. Each group is given a different shielding constant which depends upon the number and types of electrons in those groups preceding it. The shielding constant for each group is formed as the sum of the following contributions: An amount of 0.35 from each other electron within the same group except for the [1s] group, where the other electron contributes only 0.30. If the group is of the [ns, np] type, an amount of 0.85 from each electron with principal quantum number (n–1), and an amount of 1.00 for each electron with principal quantum number (n–2) or less. If the group is of the [d] or [f] type, an amount of 1.00 for each electron "closer" to the nucleus than the group. This includes both i) electrons with a smaller principal quantum number than n and ii) electrons with principal quantum number n and a smaller azimuthal quantum number l. In tabular form, the rules are summarized as: [1s]: 0.30 from the other same-group electron; [ns, np]: 0.35 per same-group electron, 0.85 per electron with principal quantum number n–1, and 1.00 per electron with n–2 or less; [nd] or [nf]: 0.35 per same-group electron and 1.00 per electron in any inner group. Example An example provided in Slater's original paper is for the iron atom which has nuclear charge 26 and electronic configuration
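As a minimal sketch of how the rules above can be applied mechanically, the following Python snippet computes screening constants and effective nuclear charges for iron (ground-state configuration 1s² 2s²2p⁶ 3s²3p⁶ 3d⁶ 4s²); the data layout and function names are illustrative only:

```python
# Electron groups for iron (Z = 26), ordered outward from the nucleus,
# following the grouping [1s] [2s,2p] [3s,3p] [3d] [4s,4p] described above.
# Each entry: (label, principal quantum number n, kind, number of electrons)
Z = 26
iron = [("1s", 1, "sp", 2),
        ("2s,2p", 2, "sp", 8),
        ("3s,3p", 3, "sp", 8),
        ("3d", 3, "df", 6),
        ("4s,4p", 4, "sp", 2)]

def screening(groups, index):
    """Slater screening constant s for one electron in groups[index]."""
    label, n, kind, count = groups[index]
    # Other electrons in the same group: 0.35 each (0.30 in the [1s] group).
    s = (count - 1) * (0.30 if label == "1s" else 0.35)
    for _, inner_n, _, inner_count in groups[:index]:
        if kind == "sp":
            # [ns, np]: 0.85 per electron with n-1, 1.00 per electron with n-2 or less.
            s += inner_count * (0.85 if inner_n == n - 1 else 1.00)
        else:
            # [nd] or [nf]: 1.00 per electron in any group closer to the nucleus.
            s += inner_count * 1.00
    return s

for i, (label, *_rest) in enumerate(iron):
    s = screening(iron, i)
    print(f"{label}: s = {s:.2f}, Z_eff = {Z - s:.2f}")   # e.g. 4s,4p: s = 22.25, Z_eff = 3.75
```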
https://en.wikipedia.org/wiki/Mar%C3%ADa%20Trinidad%20S%C3%A1nchez
María Trinidad Sánchez, Mother Founder (16 May 1794, Santo Domingo – 27 February 1845, Santo Domingo) was a Dominican freedom fighter and a heroine of the Dominican War of Independence. She participated on the rebel side as a courier. Together with Concepción Bona, Isabel Sosa and María de Jesús Pina, she took part in designing the Dominican flag. She was executed after having refused to betray her collaborators in exchange for her life. The María Trinidad Sánchez Province is named after her. Her remains rest in the National Pantheon of the Dominican Republic in Santo Domingo. Biography Family origins She was born on June 16, 1794, in the city of Santo Domingo. She was the only daughter of Isidora Ramona and Fernando Raimundo Sánchez, both descendants of slaves. She was baptised at 14 days old. She had four brothers: Francisco, Narciso, Dionisio, and José. She lived in a humble hut, made of palm boards, located on La Luna Street (today Sánchez), in an area occupied by the poor. She was considered one of the best seamstresses in the city. She was like a second mother to her nieces and nephews and is recognized as having been a key figure in the initial education of Francisco del Rosario Sánchez, her nephew and one of the fathers of the country. In the opinion of historian Roberto Cassá, Sánchez, who had slave ancestors, "showed a personality consistent with the stereotypes of the time." Regarding María T. Sánchez, the author Ramón Lugo Lovatón asserted that she was fond of sententious phrases and strange anecdotes. She was also characterized by her marked religiosity; she was considered a saint, who wore a virgin's habit and performed penances. She was part of a community in the Carmen parish. See also Francisco del Rosario Sánchez, nephew Socorro Sánchez del Rosario, niece
https://en.wikipedia.org/wiki/CD4%2B%20T%20cells%20and%20antitumor%20immunity
Understanding of the antitumor immunity role of CD4+ T cells has grown substantially since the late 1990s. CD4+ T cells (mature T-helper cells) play an important role in modulating immune responses to pathogens and tumor cells, and are important in orchestrating overall immune responses. Immunosurveillance and immunoediting Work in this area furthered the development of a previously hypothesized theory, the immunosurveillance theory. The immunosurveillance theory suggests that the immune system routinely patrols the cells of the body, and, upon recognition of a cell, or group of cells, that has become cancerous, it will attempt to destroy them, thus preventing the growth of some tumors. (Burnet, 1970) More recent evidence has suggested that immunosurveillance is only part of a larger role the immune system plays in fighting cancer. Remodeling of this theory has led to the progression of the immunoediting theory, in which there are 3 phases: Elimination, Equilibrium and Escape. Elimination phase As mentioned, the elimination phase is synonymous with the classic immunosurveillance theory. In 2001, it was shown that mice deficient in RAG-2 (Recombinase Activator Gene 2) were far less capable of preventing MCA-induced tumours than were wild type mice. (Shankaran et al., 2001, Bui and Schreiber, 2007) RAG proteins are necessary for the recombination events needed to produce TCRs and Igs, and as such RAG-2 deficient mice are incapable of producing functional T, B or NK cells. RAG-2 deficient mice were chosen over other methods of inducing immunodeficiency (such as SCID mice) as an absence of these proteins does not affect DNA repair mechanisms, which becomes important when dealing with cancer, as DNA repair problems can lead to cancers themselves. This experiment provides clear evidence that the immune system does, in fact, play a role in eradication of tumor cells. Further knock out experiments showed important roles of αβ T cells, γδ T cells and NK cells in tumour
https://en.wikipedia.org/wiki/Geometric%20Langlands%20correspondence
In mathematics, the geometric Langlands correspondence is a reformulation of the Langlands correspondence obtained by replacing the number fields appearing in the original number theoretic version by function fields and applying techniques from algebraic geometry. The geometric Langlands correspondence relates algebraic geometry and representation theory. The specific case of the geometric Langlands correspondence for general linear groups over function fields was proven by Laurent Lafforgue in 2002, where it follows as a consequence of Lafforgue's theorem. History In mathematics, the classical Langlands correspondence is a collection of results and conjectures relating number theory and representation theory. Formulated by Robert Langlands in the late 1960s, the Langlands correspondence is related to important conjectures in number theory such as the Taniyama–Shimura conjecture, which includes Fermat's Last Theorem as a special case. Establishing the Langlands correspondence in the number theoretic context has proven extremely difficult. As a result, some mathematicians have posed the geometric Langlands correspondence. Langlands correspondences can be formulated for global fields (as well as local fields), which are classified into number fields or global function fields. The classical Langlands correspondence is formulated for number fields. The geometric Langlands correspondence is instead formulated for global function fields, which in some sense have proven easier to deal with. In 2002, the geometric Langlands correspondence was proven for general linear groups over a function field by Laurent Lafforgue. Connection to physics In a paper from 2007, Anton Kapustin and Edward Witten described a connection between the geometric Langlands correspondence and S-duality, a property of certain quantum field theories. In 2018, when accepting the Abel Prize, Langlands delivered a paper reformulating the geometric program using tools similar to his original Langl
https://en.wikipedia.org/wiki/SWIV
SWIV is a vertically scrolling shooter released in 1991 for the Amiga, Atari ST, Commodore 64, MSX, ZX Spectrum, and Amstrad CPC computers. A Game Boy Color conversion was published in 2001. The game was considered a spiritual successor to Tecmo arcade game Silkworm, which The Sales Curve had previously converted to home computer formats in 1989. The game's heritage is evident from the game design whereby one player pilots a helicopter, and the other an armoured Jeep. SWIV is not an official sequel, as noted by ex-Sales Curve producer Dan Marchant: "SWIV wasn't really a sequel to Silkworm, but it was certainly inspired by it and several other shoot-'em-ups that we had played and loved." In the game's Amiga manual, however, it was explained that "SWIV" was both an acronym for "Special Weapons Interdiction Vehicle" and also short for "Silkworm IV" (even though there was not a Silkworm II or III). Gameplay SWIV is a 2D vertically scrolling shooter. The player chooses between using either a helicopter or a jeep at the beginning of the game and then plays in their chosen vehicles through scrolling levels, shooting at oncoming enemies. If two players are present, both vehicles will be used at once. Certain enemies when shot drop shield power-ups which can be either picked up to afford temporarily invincibility or detonated to destroy all enemies onscreen. Every so often a boss enemy will attack. The destruction of these bosses will give upgrades to the player's forward firing gun. Reception On release SWIV was met with positive reviews from most magazines of the time, receiving a 92% from Amiga Format magazine, an 88% from Commodore Format (C64 version) a 91% from Amiga Action, 90% from Computer and Video Games, and a 90% from Your Sinclair. It was ranked the 27th best game of all time by Amiga Power. Legacy A sequel was published for the Super Nintendo Entertainment System as Super SWIV. It was ported to the Mega Drive as Mega SWIV. In 1997 SWIV 3D was released,
https://en.wikipedia.org/wiki/Standard%20day
The term standard day is used throughout meteorology, aviation, and other sciences and disciplines as a way of defining certain properties of the atmosphere in a manner which allows those who use our atmosphere to effectively calculate and communicate its properties at any given time. For example, a temperature deviation of +8 °C means that the air at any given altitude is 8 °C (14 °F) warmer than what standard day conditions and the measurement altitude would predict, and would indicate a higher density altitude. These variations are extremely important to both meteorologists and aviators, as they strongly determine the different properties of the atmosphere. For example, on a cool day, an airliner might have no problem safely departing a medium length runway, but on a warmer day, the density altitude might be higher, requiring a higher ground speed and true airspeed prior to liftoff, which would in turn require more acceleration, a longer runway, and a reduced climb rate after liftoff. The pilot may choose to reduce the gross weight of the aircraft by carrying less fuel or reducing the amount/weight of the cargo, or even reducing the number of passengers (usually the last option). If not carrying sufficient fuel to complete the flight to the destination, the pilot would plan an intermediate fuel stop, which would likely delay the final destination arrival time. In meteorology, departure from standard day conditions is what gives rise to all weather phenomena, including thunderstorms, fronts, clouds, even the heating and cooling of our planet. Standard day parameters For pilots: at sea level, altimeter setting 29.92 inHg at 15 °C (59 °F). The "standard day" model of the atmosphere is defined at sea level, with certain present conditions such as temperature and pressure. But other factors, such as humidity, further alter the nature of the atmosphere, and are also defined under standard day conditions: Density (ρ): 1.225 kg/m³ (0.00237 slug/ft³) Pressure (p): 101.325 kPa (14.7 lb/in²)
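As a rough illustration of how these figures are used in practice (a sketch only: the 15 °C sea-level temperature and a lapse rate of roughly 2 °C per 1,000 ft are standard-atmosphere values, while the 120 ft per °C correction is a common pilot rule of thumb rather than an exact formula):

```python
def isa_temperature_c(pressure_altitude_ft):
    """Standard-day (ISA) temperature at a pressure altitude, using the
    15 degC sea-level value and ~2 degC per 1,000 ft lapse rate."""
    return 15.0 - 2.0 * pressure_altitude_ft / 1000.0

def density_altitude_ft(pressure_altitude_ft, outside_air_temp_c):
    """Rule-of-thumb density altitude: about 120 ft per degC of deviation
    from the standard-day temperature at that altitude."""
    deviation = outside_air_temp_c - isa_temperature_c(pressure_altitude_ft)
    return pressure_altitude_ft + 120.0 * deviation

# An airfield at 2,000 ft pressure altitude on a day that is +8 degC warmer
# than standard: the aircraft performs as if it were at roughly 2,960 ft.
print(density_altitude_ft(2000, isa_temperature_c(2000) + 8))
```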
https://en.wikipedia.org/wiki/Institute%20of%20Acoustics%20%28United%20Kingdom%29
The Institute of Acoustics (IOA) is a British professional engineering institution founded in 1974. It is licensed by the Engineering Council UK to assess candidates for inclusion on ECUK's Register of professional Engineers. The institute's address is Silbury Court, 406 Silbury Boulevard, Milton Keynes MK9 2AF, United Kingdom. The current president of the IOA is Alistair Somerville. Past presidents include Barry Gibbs, John Hinton OBE, Colin English, David Weston, Tony Jones, Professor Trevor Cox, William Egan, Professor Bridget Shield, and Jo Webb. History In 1963 a Society of Acoustic Technology was formed in the UK for those interested in this subject: the President was Elfyn Richards. Because of the interest in establishing a professional body, meetings were held with various societies and institutions, and in 1965 a British Acoustical Society was set up, absorbing the earlier society. In 1974 the British Acoustical Society amalgamated with the Acoustics Group of the Institute of Physics to form the Institute of Acoustics. Specialist groups Building acoustics Electroacoustics Environmental noise Measurement and instrumentation Musical acoustics Noise and vibration engineering Physical acoustics Speech and hearing Underwater acoustics Medals and awards The following prizes are awarded by the Institute Rayleigh Medal Tyndall Medal A B Wood Medal R W B Stephens Medal IOA Engineering Medal Honorary fellowship Peter Barnett Memorial Award The Award for Promoting Acoustics to the Public Award for Services to the Institute IOA Young Persons' Award for Innovation in Acoustical Engineering IOA Prize for best diploma student ANC prize for the best diploma project ANC prize for the best paper at an IOA conference See also Chartered engineer Incorporated engineer The Association of Noise Consultants
https://en.wikipedia.org/wiki/Polar%20mutation
A polar mutation affects expression of downstream genes or operons. It can also affect the expression of the gene in which it occurs, if it occurs in a transcribed region. These mutations tend to occur early within the sequence of genes and can be nonsense, frameshift, or insertion mutations. Polar mutations are found only in organisms containing polycistronic mRNA.
https://en.wikipedia.org/wiki/Institute%20of%20Healthcare%20Engineering%20and%20Estate%20Management
The Institute of Healthcare Engineering and Estate Management (IHEEM) is the UK's largest specialist Institute for the Healthcare Estates Sector; devoted to developing careers, provision of education and training and registering engineers as Eng Tech, IEng and CEng. History The Institute was founded in 1943 and was originally named the Institute of Hospital Engineers; the Society of X-Ray Technology had merged with this in 1990. Structure It is headquartered in the Cumberland Business Centre in Portsmouth, on the A2030. IHEEM: is a not-for-profit company. Their primary purpose, as a professional development organisation, is to keep members up to date with developing technology and changing regulations within the industry is independent of government, the NHS and commercial interests and protects its impartiality and objectivity. IHEEM’s members comply with a Code of Professional Conduct that places a personal obligation to uphold the dignity and reputation of the profession and to safeguard public interest; each member undertakes to exercise all reasonable professional skill and care and to discharge this responsibility with integrity. The Institute counts among its members employees of both public and private healthcare providers, engineering and consultancy firms and practices. Increasingly members come from a non-engineering background, many with Facilities Management experience. See also Chartered engineer Incorporated engineer Engineering technician
https://en.wikipedia.org/wiki/Spectral%20element%20method
In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method (see Development History). Discussion The spectral method expands the solution in trigonometric series, a chief advantage being that the resulting method is of a very high order. This approach relies on the fact that trigonometric polynomials are an orthonormal basis for the underlying space of square-integrable functions. The spectral element method chooses instead high degree piecewise polynomial basis functions, also achieving a very high order of accuracy. Such polynomials are usually orthogonal Chebyshev polynomials or very high order Lagrange polynomials over non-uniformly spaced nodes. In SEM computational error decreases exponentially as the order of the approximating polynomial increases, therefore a fast convergence of the solution to the exact solution is realized with fewer degrees of freedom of the structure in comparison with FEM. In structural health monitoring, FEM can be used for detecting large flaws in a structure, but as the size of the flaw is reduced there is a need to use a high-frequency wave. In order to simulate the propagation of a high-frequency wave, the FEM mesh required is very fine, resulting in increased computational time. On the other hand, SEM provides good accuracy with fewer degrees of freedom. Non-uniformity of nodes helps to make the mass matrix diagonal, which saves time and memory and is also useful for adopting a central difference method (CDM). The disadvantages of SEM include difficulty in modeling complex geometry, compared to the flexibility of FEM. Although the method can be applied with a modal piecewise orthogonal polynomial basis, it is most often impl
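The diagonal mass matrix mentioned above can be seen in a small numerical sketch. The Gauss–Lobatto–Legendre (GLL) node and weight formulas used here are standard, but the code itself is only an illustration of the idea, not an implementation taken from the article:

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 6                                   # polynomial order within one element
c = np.zeros(N + 1); c[N] = 1.0         # P_N expressed in the Legendre basis

# GLL nodes on [-1, 1]: the two endpoints plus the roots of P_N'(x).
nodes = np.concatenate(([-1.0], np.sort(leg.legroots(leg.legder(c))), [1.0]))

# GLL quadrature weights: w_i = 2 / (N (N + 1) P_N(x_i)^2).
weights = 2.0 / (N * (N + 1) * leg.legval(nodes, c) ** 2)

# With Lagrange basis functions l_i built on these same nodes, GLL quadrature
# gives M_ij ~ sum_k w_k l_i(x_k) l_j(x_k) = w_i * delta_ij, i.e. the element
# mass matrix is diagonal ("lumped") by construction.
M = np.diag(weights)
print(np.round(np.diag(M), 4), "   sum of weights =", weights.sum())  # weights sum to 2
```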
https://en.wikipedia.org/wiki/Lift%20%28mathematics%29
In category theory, a branch of mathematics, given a morphism f: X → Y and a morphism g: Z → Y, a lift or lifting of f to Z is a morphism h: X → Z such that g ∘ h = f. We say that f factors through h. A basic example in topology is lifting a path in one topological space to a path in a covering space. For example, consider mapping opposite points on a sphere to the same point, a continuous map from the sphere covering the projective plane. A path in the projective plane is a continuous map from the unit interval [0,1]. We can lift such a path to the sphere by choosing one of the two sphere points mapping to the first point on the path, then maintaining continuity. In this case, each of the two starting points forces a unique path on the sphere, the lift of the path in the projective plane. Thus, in the category of topological spaces with continuous maps as morphisms, the path in the projective plane lifts, along the covering map, to a path in the sphere. Lifts are ubiquitous; for example, the definition of fibrations (see Homotopy lifting property) and the valuative criteria of separated and proper maps of schemes are formulated in terms of existence and (in the last case) uniqueness of certain lifts. In algebraic topology and homological algebra, tensor product and the Hom functor are adjoint; however, they might not always lift to an exact sequence. This leads to the definition of the Ext functor and the Tor functor. Algebraic logic The notations of first-order predicate logic are streamlined when quantifiers are relegated to established domains and ranges of binary relations. Gunther Schmidt and Michael Winter have illustrated the method of lifting traditional logical expressions of topology to calculus of relations in their book Relational Topology. They aim "to lift concepts to a relational level making them point free as well as quantifier free, thus liberating them from the style of first order predicate logic and approaching the clarity of algebraic reasoning." For example, a partial function M corresponds to the inclusion where denotes t
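The path-lifting idea above can also be illustrated computationally with the covering map from the real line to the circle, t ↦ (cos 2πt, sin 2πt): choose a preimage of the starting point, then keep the lift continuous by undoing the artificial full-turn jumps. This is exactly what phase unwrapping does (the example below is an assumed illustration, not taken from the article):

```python
import numpy as np

# A sampled path in the circle S^1, recorded as angles in [0, 2*pi).
theta = np.linspace(0.0, 4 * np.pi, 200) % (2 * np.pi)   # winds around twice

# Lift to the real line: start at the chosen preimage (theta[0]) and restore
# continuity by removing the 2*pi jumps introduced by the modulo operation.
lift = np.unwrap(theta)

print(lift[0], lift[-1])   # approximately 0 and 4*pi: the lift records two full turns
```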
https://en.wikipedia.org/wiki/Popper%27s%20experiment
Popper's experiment is an experiment proposed by the philosopher Karl Popper to test aspects of the uncertainty principle in quantum mechanics. History In fact, as early as 1934, Popper started criticising the increasingly more accepted Copenhagen interpretation, a popular subjectivist interpretation of quantum mechanics. Therefore, in his most famous book Logik der Forschung he proposed a first experiment alleged to empirically discriminate between the Copenhagen Interpretation and a realist ensemble interpretation, which he advocated. Einstein, however, wrote a letter to Popper about the experiment in which he raised some crucial objections and Popper himself declared that this first attempt was "a gross mistake for which I have been deeply sorry and ashamed of ever since". Popper, however, came back to the foundations of quantum mechanics from 1948, when he developed his criticism of determinism in both quantum and classical physics. As a matter of fact, Popper greatly intensified his research activities on the foundations of quantum mechanics throughout the 1950s and 1960s developing his interpretation of quantum mechanics in terms of real existing probabilities (propensities), also thanks to the support of a number of distinguished physicists (such as David Bohm). In 1980, Popper proposed perhaps his more important, yet overlooked, contribution to QM: a "new simplified version of the EPR experiment". The experiment was however published only two years later, in the third volume of the Postscript to the Logic of Scientific Discovery. The most widely known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well separated, non-interacting systems require action-at-a-distance. Popper argued that such non-locality conflicts with common sense, and would lead to a subjectivist interpr
https://en.wikipedia.org/wiki/Suppressor-inducer%20T%20cell
Suppressor-inducer T cells are a specific subset of CD4+ T helper cells that "induce" CD8+ cytotoxic T cells to become "suppressor" cells. Suppressor T cells are also known as CD25+–Foxp3+ regulatory T cells (nTregs), and reduce inflammation.
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Sweden
The following is a list of flags of Sweden. National flag and state flag Royal standards Military flags Flags of the Navy Historical flags Party flags Regional flags Each official flag is based on the coat of arms of the county (see gallery) and is used on buildings and elsewhere by the respective county administration. Unofficial flags are used by private and local people. Swedish municipalities often use flags that are simply the municipal coat of arms rendered as a flag. See the List of municipalities of Sweden for the arms and links to each municipality. Swedish shipping companies See also Coat of arms of Sweden Du gamla, Du fria Kungssången
https://en.wikipedia.org/wiki/Boston%20MXP
The Boston MXP was an Internet Exchange Point in Boston, Massachusetts and Quincy, Massachusetts. It was founded by MAI in 1996. It supported 10 megabit and 100 megabit connections over copper, and gigabit connections over fiber. Global NAPs, the main sponsor of the Boston MXP, was sold on February 29, 2012. The Boston MXP is no longer operational and was replaced by the Boston Internet Exchange. See also Internet Exchange Point External links Official Boston MXP website profile on PeeringDB Internet exchange points in the United States Network access Telecommunications in the United States Communications in Massachusetts
https://en.wikipedia.org/wiki/Iatrophysics
Iatrophysics or iatromechanics (from Greek) is the medical application of physics. It provides an explanation for medical practices with mechanical principles. It was a school of medicine in the seventeenth century which attempted to explain physiological phenomena in mechanical terms. Believers in iatromechanics thought that the physiological phenomena of the human body followed the laws of physics. It was related to iatrochemistry in studying the human body in a systematic manner based on observations from the natural world, though it had more emphasis on mathematical models rather than chemical processes. Background The Age of Enlightenment was an era of radically changing ways of thought in Western politics, philosophy, and science. Major sociological changes occurred in the Enlightenment, as well as industrial and scientific ones. In medicine, the Enlightenment brought several discoveries and studies that were impacted by changing ways of thought. For example, capillaries were discovered by Marcello Malpighi. Jan Baptist van Helmont (1580–1644) was the first to consider digestion a fermentation process and identified hydrochloric acid in the stomach. Pathological anatomy and clinical observation were also being integrated into the medical curriculum. The Enlightenment also directly influenced the field of iatrophysics through the development of Antonie van Leeuwenhoek's microscope, the advancement of the field of ophthalmology through the use of physics by René Descartes, and Newton's law of universal gravitation, idea of gravitational force, and his treatise Opticks. Subfields Iatrophysicists drew inspiration from various established physical phenomena in order to explain how certain biological processes took place and how this can be applied towards medicine. Particles A key component of iatrophysical anatomy was the study of particles. This was particularly influenced by 17th century developments in microbiology, the most prominent being the microscope. Antonie v
https://en.wikipedia.org/wiki/EMBnet
The European Molecular Biology network (EMBnet) is an international scientific network and interest group that aims to enhance bioinformatics services by bringing together bioinformatics expertise and capacities. As of 2011, EMBnet had 37 nodes spread over 32 countries. The nodes include bioinformatics-related university departments, research institutes and national service providers. Operations The main task of most EMBnet nodes is to provide their national scientific community with access to bioinformatics databanks, specialised software and sufficient computing resources and expertise. EMBnet is also working in the fields of bioinformatics training and software development. Examples of software created by EMBnet members are: EMBOSS, wEMBOSS, UTOPIA. EMBnet represents a wide user group and works closely together with database producers such as EMBL's European Bioinformatics Institute (EBI), the Swiss Institute of Bioinformatics (Swiss-Prot), and the Munich Information Center for Protein Sequences (MIPS), in order to provide a uniform coverage of services throughout Europe. EMBnet is registered in the Netherlands as a public foundation (Stichting). Since its creation in 1988, EMBnet has evolved from an informal network of individuals in charge of maintaining biological databases into the only worldwide organization bringing bioinformatics professionals together to serve the expanding fields of genetics and molecular biology. Although composed predominantly of academic nodes, EMBnet gains an important added dimension from its industrial members. The success of EMBnet is attracting increasing numbers of organizations outside Europe to join. EMBnet has a tried-and-tested infrastructure to organise training courses, give technical help and help its members effectively interact and respond to the rapidly changing needs of biological research in a way no single institute is able to do. In 2005 the organization created additional types of node to allow more tha
https://en.wikipedia.org/wiki/Detrended%20fluctuation%20analysis
In stochastic processes, chaos theory and time series analysis, detrended fluctuation analysis (DFA) is a method for determining the statistical self-affinity of a signal. It is useful for analysing time series that appear to be long-memory processes (diverging correlation time, e.g. power-law decaying autocorrelation function) or 1/f noise. The obtained exponent is similar to the Hurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics are non-stationary (changing with time). It is related to measures based upon spectral techniques such as autocorrelation and Fourier transform. Peng et al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022 and represents an extension of the (ordinary) fluctuation analysis (FA), which is affected by non-stationarities. Definition Algorithm Given: a time series x₁, x₂, ..., x_N. Compute its average value ⟨x⟩. Sum it into a process X_t = Σ_{i=1..t} (x_i − ⟨x⟩). This is the cumulative sum, or profile, of the original time series. For example, the profile of an i.i.d. white noise is a standard random walk. Select a set of integers n₁ < n₂ < ... < n_k (the segment lengths), with suitable smallest and largest values, such that the sequence is roughly distributed evenly in log-scale. In other words, it is approximately a geometric progression. For each n, divide the sequence into ⌊N/n⌋ consecutive segments of length n. Within each segment, compute the least squares straight-line fit (the local trend). Let Y_t be the resulting piecewise-linear fit. Compute the root-mean-square deviation from the local trend (local fluctuation) in segment i, F(n, i) = sqrt( (1/n) Σ_{t in segment i} (X_t − Y_t)² ), and their root-mean-square is the total fluctuation F(n) = sqrt( (1/m) Σ_{i=1..m} F(n, i)² ), where m is the number of segments. (If N is not divisible by n, then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence, then take their root-mean-square.) Make the log-log plot of F(n) against n. Interpretation A straight line of slope α on the log-log plot indicates a statistical self-affinity of the form F(n) ∝ n^α. Since F(n) monotonically increases with n, we always h
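A compact implementation of the algorithm above is straightforward; the sketch below (illustrative code, not from the article) discards any remainder samples rather than repeating the procedure on the reversed sequence, and recovers the expected scaling exponent of about 0.5 for white noise:

```python
import numpy as np

def dfa(x, window_sizes):
    """Detrended fluctuation analysis: return F(n) for each window size n."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # cumulative sum ("profile") X_t
    F = []
    for n in window_sizes:
        m = len(profile) // n                    # number of full segments
        f2 = []
        for i in range(m):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            slope, intercept = np.polyfit(t, seg, 1)          # local linear trend Y_t
            f2.append(np.mean((seg - (slope * t + intercept)) ** 2))
        F.append(np.sqrt(np.mean(f2)))           # RMS of the per-segment RMS deviations
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                  # i.i.d. white noise
ns = np.unique(np.logspace(np.log10(10), np.log10(1000), 20).astype(int))
alpha, _ = np.polyfit(np.log(ns), np.log(dfa(x, ns)), 1)     # slope of the log-log plot
print(round(alpha, 2))                           # close to 0.5 for white noise
```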
https://en.wikipedia.org/wiki/Direct%20instruction
Direct instruction (DI) is the explicit teaching of a skill-set using lectures or demonstrations of the material to students. A particular subset, denoted by capitalization as Direct Instruction, refers to the approach developed by Siegfried Engelmann and Wesley C. Becker that was first implemented in the 1960s. DI teaches by explicit instruction, in contrast to exploratory models such as inquiry-based learning. DI includes tutorials, participatory laboratory classes, discussion, recitation, seminars, workshops, observation, active learning, practicum, or internships. The model includes "I do" (instructor), "We do" (instructor and students), and "You do" (the student practices on their own with instructor monitoring). DI relies on a systematic and scripted curriculum, delivered by highly trained instructors. On the premise that all students can learn and all teachers can successfully teach if given effective training in specific techniques, teachers may be evaluated based on measurable student learning. In some special education programs, direct instruction is used in resource rooms, when teachers assist with homework completion and academic remediation. History DISTAR was a specific direct instruction model developed by Siegfried Engelmann and Wesley C. Becker. Engelmann and Becker sought to identify teaching methods that would accelerate the progress of historically disadvantaged elementary school students. Direct Instruction was first formally implemented at a preschool program for children from impoverished backgrounds at the University of Illinois (mid-1960s). The team implementing DI consisted of Siegfried Engelmann, Carl Bereiter, and Jean Osborn. The program incorporated short instructional periods, usually 20 to 30 minutes a day. The instructional periods focused on language, reading, and math. The children showed vast improvement, which led to further development of the approach. When further developing DI, they applied the same principles to create a formal instruc
https://en.wikipedia.org/wiki/Net6
Net6 was a startup founded in 2000 that created products in Security, Voice over IP VoIP protocols and SSL VPN. History Net6 was originally called WebUnwired and was founded in 2000 by Murli Thirumale (an Ex VP and GM at Hewlett-Packard), Goutham Rao (an Operating Systems Architect at Intel), Jon Thies and Russell Lentini. Thirumale was the CEO of Net6 and Rao was the CTO and Chief Architect. The company originally focused on secure application access from mobile devices, and later shifted focus toward VoIP protocols and SSL VPN technology. The company created two hardware appliances called the Application Gateway for VoIP and the Access Gateway for SSL VPNs. Partnerships Net6 secured OEM deals for its product lines in 2000 from Cisco Systems. In 2002 and 2003, Net6 added further OEMs for its product lines from Nortel Networks, Avaya Systems and Siemens. Net6 primarily sold its products through their partner channels. Funding In 2000, Net6, then known as WebUnwired secured $8M in series 'A' funding from Sierra Ventures and Olympic Venture Partners. In 2004, Net6 raised an additional $8M in series B funding from Bank of America venture partners. Acquisition In December 2004, Net6 Inc was acquired by Citrix Systems for $50M. The acquisition marked Citrix's entry into the telecommunications security space, and Citrix continues to market the Access Gateway product under the Citrix Access Gateway(tm) product name, led by the management team of Murli Thirumale (CEO), Goutham Rao (CTO), Russell Lentini and Jon Thies (Principal Architects), Gordon Payne (VP of Marketing) and Joe Eskew (VP of Sales). Recognition Access Gateway with 2007 Reader Trust Award in the 10th annual SC Magazine Award Gartner Reports Citrix Access Gateway in Leadership Quadrant
https://en.wikipedia.org/wiki/Insulin%20receptor%20substrate
Insulin receptor substrate (IRS) is an important signaling adaptor protein in the insulin response of human cells. IRS-1, for example, is an IRS protein that contains a phosphotyrosine-binding domain (PTB domain). In addition, the insulin receptor contains an NPXY motif. The PTB domain binds the NPXY sequence. Thus, the insulin receptor binds IRS. Genes (see also Insulin receptor substrate 1) (see also Insulin receptor substrate 2) - a pseudogene
https://en.wikipedia.org/wiki/Multithreading%20%28computer%20architecture%29
In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB). Where multiprocessing systems include multiple complete processing units in one or more cores, multithreading aims to increase utilization of a single core by using thread-level parallelism, as well as instruction-level parallelism. As the two techniques are complementary, they are combined in nearly all modern systems architectures with multiple multithreading CPUs and with CPUs with multiple multithreading cores. Overview The multithreading paradigm has become more popular as efforts to further exploit instruction-level parallelism have stalled since the late 1990s. This allowed the concept of throughput computing to re-emerge from the more specialized field of transaction processing. Even though it is very difficult to further speed up a single thread or single program, most computer systems are actually multitasking among multiple threads or programs. Thus, techniques that improve the throughput of all tasks result in overall performance gains. Two major techniques for throughput computing are multithreading and multiprocessing. Advantages If a thread gets a lot of cache misses, the other threads can continue taking advantage of the unused computing resources, which may lead to faster overall execution, as these resources would have been idle if only a single thread were executed. Also, if a thread cannot use all the computing resources of the CPU (because instructions depend on each other's result), running another thread may prevent those resources from becoming idle. Disadvantages Multiple threads can i
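A software-level analogy of the latency-hiding argument above (this is only an analogy: it shows two threads of one program overlapping a stall with useful work, not hardware multithreading itself; the sleep call stands in for a thread blocked on a long-latency event):

```python
import threading, time

def stalled_worker():
    time.sleep(0.5)                       # stands in for a long stall (I/O, page fault, ...)

def busy_worker(results):
    results.append(sum(i * i for i in range(1_000_000)))   # useful computation

start = time.perf_counter()
results = []
threads = [threading.Thread(target=stalled_worker),
           threading.Thread(target=busy_worker, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The busy thread progresses while the other is stalled, so the elapsed time is
# close to the longer of the two tasks rather than their sum.
print(f"elapsed: {time.perf_counter() - start:.2f} s, result: {results[0]}")
```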
https://en.wikipedia.org/wiki/Hypogeusia
Hypogeusia is a reduced ability to taste things (to taste sweet, sour, bitter, or salty substances). The complete lack of taste is referred to as ageusia. Causes of hypogeusia include the chemotherapy drug bleomycin, an antitumor antibiotic, Bell's Palsy, and zinc deficiency among others.