https://en.wikipedia.org/wiki/T-theory
T-theory is a branch of discrete mathematics dealing with the analysis of trees and discrete metric spaces.

General history
T-theory originated from a question raised by Manfred Eigen in the late 1970s, when he was trying to fit twenty distinct tRNA molecules of the Escherichia coli bacterium into a tree. An important concept of T-theory is the tight span of a metric space. If X is a metric space, the tight span T(X) of X is, up to isomorphism, the unique minimal injective metric space that contains X. John Isbell was the first to discover the tight span, in 1964; he called it the injective envelope. Andreas Dress independently arrived at the same construction, which he called the tight span.

Application areas
Phylogenetic analysis, which is used to create phylogenetic trees.
Online algorithms, e.g. the k-server problem.

Recent developments
Bernd Sturmfels, Professor of Mathematics and Computer Science at Berkeley, and Josephine Yu classified six-point metrics using T-theory.
https://en.wikipedia.org/wiki/Distance-transitive%20graph
In the mathematical field of graph theory, a distance-transitive graph is a graph such that, given any two vertices v and w at any distance i, and any other two vertices x and y at the same distance, there is an automorphism of the graph that carries v to x and w to y. Distance-transitive graphs were first defined in 1971 by Norman L. Biggs and D. H. Smith.

A distance-transitive graph is interesting partly because it has a large automorphism group. Some interesting finite groups are the automorphism groups of distance-transitive graphs, especially of those whose diameter is 2.

Examples
Some first examples of families of distance-transitive graphs include:
The Johnson graphs.
The Grassmann graphs.
The Hamming graphs.
The folded cube graphs.
The square rook's graphs.
The hypercube graphs.
The Livingstone graph.

Classification of cubic distance-transitive graphs
After introducing them in 1971, Biggs and Smith showed that there are only 12 finite trivalent distance-transitive graphs.

Relation to distance-regular graphs
Every distance-transitive graph is distance-regular, but the converse is not necessarily true. In 1969, before publication of the Biggs–Smith definition, a Russian group led by Georgy Adelson-Velsky showed that there exist graphs that are distance-regular but not distance-transitive. The smallest distance-regular graph that is not distance-transitive is the Shrikhande graph, with 16 vertices and degree 6. The only graph of this type with degree three is the 126-vertex Tutte 12-cage. Complete lists of distance-transitive graphs are known for some degrees larger than three, but the classification of distance-transitive graphs with arbitrarily large vertex degree remains open.
https://en.wikipedia.org/wiki/Fas%20receptor
The Fas receptor, also known as Fas, FasR, apoptosis antigen 1 (APO-1 or APT), cluster of differentiation 95 (CD95) or tumor necrosis factor receptor superfamily member 6 (TNFRSF6), is a protein that in humans is encoded by the FAS gene. Fas was first identified using a monoclonal antibody generated by immunizing mice with the FS-7 cell line; the name Fas is thus derived from FS-7-associated surface antigen. The Fas receptor is a death receptor on the surface of cells that leads to programmed cell death (apoptosis) if it binds its ligand, Fas ligand (FasL). It initiates one of the two main apoptosis pathways, the other being the mitochondrial pathway.

Gene
The FAS gene is located on the long arm of chromosome 10 (10q24.1) in humans and on chromosome 19 in mice. The gene lies on the plus (Watson) strand and is 25,255 bases in length, organized into nine protein-coding exons. Similar sequences related by evolution (orthologs) are found in most mammals.

Protein
Previous reports have identified as many as eight splice variants, which are translated into seven isoforms of the protein. The apoptosis-inducing Fas receptor is dubbed isoform 1 and is a type 1 transmembrane protein. Many of the other isoforms are rare haplotypes that are usually associated with a state of disease. However, two isoforms, the apoptosis-inducing membrane-bound form and the soluble form, are normal products whose production via alternative splicing is regulated by the cytotoxic RNA-binding protein TIA1.

The mature Fas protein has 319 amino acids, has a predicted molecular weight of 48 kilodaltons, and is divided into three domains: an extracellular domain, a transmembrane domain, and a cytoplasmic domain. The extracellular domain has 157 amino acids and is rich in cysteine residues; the transmembrane and cytoplasmic domains have 17 and 145 amino acids respectively. Exons 1 through 5 encode the extracellular region, exon 6 the transmembrane region, and exons 7 through 9 the intracellular region.
https://en.wikipedia.org/wiki/FADD
FAS-associated death domain protein (FADD), also called MORT1, is encoded by the FADD gene on the 11q13.3 region of chromosome 11 in humans. FADD is an adaptor protein that bridges members of the tumor necrosis factor receptor superfamily, such as the Fas receptor, to procaspases 8 and 10 to form the death-inducing signaling complex (DISC) during apoptosis. Beyond its best-known role in apoptosis, FADD also plays a role in other processes including proliferation, cell cycle regulation and development.

Structure
FADD is a 23 kDa protein made up of 208 amino acids. It contains two main domains: a C-terminal death domain (DD) and an N-terminal death effector domain (DED). Although the two domains share very little sequence similarity, they are structurally similar to one another, each consisting of 6 α helices. The DD of FADD binds to receptors such as the Fas receptor at the plasma membrane via their DDs. The interaction between the death domains is electrostatic, involving α helices 2 and 3 of the 6-helix domain. The DED binds to the DED of intracellular molecules such as procaspase 8; this interaction is thought to occur through hydrophobic interactions.

Functions
Extrinsic apoptosis
Upon stimulation by the Fas ligand, the Fas receptor trimerises. Many receptors, including Fas, contain a cytoplasmic DD and are therefore named death receptors. FADD binds to the DD of this trimeric structure via its own death domain, resulting in unmasking of FADD's DED and subsequent recruitment of procaspases 8 and 10 via an interaction between the DEDs of both FADD and the procaspases. This generates a complex known as the death-inducing signalling complex (DISC). Procaspases 8 and 10 are known as initiator caspases. These are inactive molecules, but when brought into close proximity with other procaspases of the same type, autocatalytic cleavage occurs at an aspartate residue within their own structures, resulting in an activated protein.
https://en.wikipedia.org/wiki/Distance-regular%20graph
In the mathematical field of graph theory, a distance-regular graph is a regular graph such that for any two vertices v and w, the number of vertices at distance j from v and at distance k from w depends only upon j, k, and the distance between v and w. Some authors exclude the complete graphs and disconnected graphs from this definition.

Every distance-transitive graph is distance-regular. Indeed, distance-regular graphs were introduced as a combinatorial generalization of distance-transitive graphs, having the numerical regularity properties of the latter without necessarily having a large automorphism group.

Intersection arrays
It turns out that a graph G of diameter d is distance-regular if and only if there is an array of integers (b_0, b_1, ..., b_{d-1}; c_1, ..., c_d) such that, for any pair of vertices v and w of G at distance j, b_j gives the number of neighbours of w at distance j + 1 from v (for 0 ≤ j ≤ d − 1) and c_j gives the number of neighbours of w at distance j − 1 from v (for 1 ≤ j ≤ d). The array of integers characterizing a distance-regular graph is known as its intersection array.

Cospectral distance-regular graphs
A pair of connected distance-regular graphs are cospectral if they have the same intersection array. A distance-regular graph is disconnected if and only if it is a disjoint union of cospectral distance-regular graphs.

Properties
Suppose G is a connected distance-regular graph of valency k with intersection array (b_0, ..., b_{d-1}; c_1, ..., c_d). For all 0 ≤ j ≤ d, let G_j denote the k_j-regular graph with adjacency matrix A_j formed by relating pairs of vertices of G at distance j, and let a_j denote the number of neighbours of w at distance j from v for any pair of vertices v and w at distance j on G.

Graph-theoretic properties
k_{j+1} / k_j = b_j / c_{j+1} for all 0 ≤ j ≤ d − 1.
b_0 > b_1 ≥ ... ≥ b_{d-1} > 0 and 1 = c_1 ≤ c_2 ≤ ... ≤ c_d.

Spectral properties
G has d + 1 distinct eigenvalues.
θ = ±k if θ is a simple eigenvalue of G.
k ≤ (m − 1)(m + 2)/2 for any eigenvalue multiplicity m > 1 of G, unless G is a complete multipartite graph.
d ≤ 3m − 4 for any eigenvalue multiplicity m > 1 of G, unless G is a cycle graph or a complete multipartite graph.
If G is strongly regular, then n ≤ 4m − 1 and k ≤ 2m − 1.

Examples
Some first examples of distance-re
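The intersection-array characterization above lends itself to a direct computational check. The sketch below (pure Python; the graph encoding and function names are my own, not from the article) computes b_j and c_j by brute force over all vertex pairs, returning None if any pair violates distance-regularity, and verifies that the Petersen graph has intersection array {3, 2; 1, 1}:

```python
from collections import deque

def distances(adj, s):
    """BFS distances from vertex s in an adjacency-set graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def intersection_array(adj):
    """Return (b, c) if the graph is distance-regular, else None."""
    verts = list(adj)
    dist = {v: distances(adj, v) for v in verts}
    diam = max(max(d.values()) for d in dist.values())
    b = [None] * diam          # b[j]: neighbours of w at distance j+1 from v
    c = [None] * (diam + 1)    # c[j]: neighbours of w at distance j-1 from v
    for v in verts:
        for w in verts:
            j = dist[v][w]
            bj = sum(1 for x in adj[w] if dist[v][x] == j + 1)
            cj = sum(1 for x in adj[w] if dist[v][x] == j - 1)
            if j < diam:
                if b[j] is None: b[j] = bj
                elif b[j] != bj: return None   # not distance-regular
            if j > 0:
                if c[j] is None: c[j] = cj
                elif c[j] != cj: return None
    return b, c[1:]

# Petersen graph: outer 5-cycle, inner pentagram, five spokes.
petersen = {i: set() for i in range(10)}
def link(x, y):
    petersen[x].add(y); petersen[y].add(x)
for i in range(5):
    link(i, (i + 1) % 5)          # outer cycle
    link(5 + i, 5 + (i + 2) % 5)  # inner pentagram
    link(i, 5 + i)                # spoke
print(intersection_array(petersen))  # ([3, 2], [1, 1])
```

The printed ([3, 2], [1, 1]) is the intersection array {3, 2; 1, 1} of the Petersen graph; for a graph that is not distance-regular (e.g. a path), the function returns None.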
https://en.wikipedia.org/wiki/Conference%20graph
In the mathematical area of graph theory, a conference graph is a strongly regular graph with parameters v, k = (v − 1)/2, λ = (v − 5)/4, and μ = (v − 1)/4. It is the graph associated with a symmetric conference matrix, and consequently its order v must be congruent to 1 (modulo 4) and a sum of two squares.

Conference graphs are known to exist for all small values of v allowed by the restrictions, e.g., v = 5, 9, 13, 17, 25, 29, and (the Paley graphs) for all prime powers congruent to 1 (modulo 4). However, there are many allowed values of v for which the existence of a conference graph is unknown.

The eigenvalues of a conference graph need not be integers, unlike those of other strongly regular graphs. If the graph is connected, the eigenvalues are k with multiplicity 1, and the two other eigenvalues (−1 ± √v)/2, each with multiplicity (v − 1)/2.
https://en.wikipedia.org/wiki/Conference%20matrix
In mathematics, a conference matrix (also called a C-matrix) is a square matrix C with 0 on the diagonal and +1 and −1 off the diagonal, such that CᵀC is a multiple of the identity matrix I. Thus, if the matrix has order n, CᵀC = (n − 1)I. Some authors use a more general definition, which requires there to be a single 0 in each row and column but not necessarily on the diagonal.

Conference matrices first arose in connection with a problem in telephony. They were first described by Vitold Belevitch, who also gave them their name. Belevitch was interested in constructing ideal telephone conference networks from ideal transformers and discovered that such networks were represented by conference matrices, hence the name. Other applications are in statistics, and another is in elliptic geometry.

For n > 1, there are two kinds of conference matrix. Let us normalize C by, first (if the more general definition is used), rearranging the rows so that all the zeros are on the diagonal, and then negating any row or column whose first entry is negative. (These operations do not change whether a matrix is a conference matrix.) Thus, a normalized conference matrix has all 1's in its first row and column, except for a 0 in the top left corner, and is 0 on the diagonal. Let S be the matrix that remains when the first row and column of C are removed. Then either n is evenly even (a multiple of 4) and S is antisymmetric (as is the normalized C if its first row is negated), or n is oddly even (congruent to 2 modulo 4) and S is symmetric (as is the normalized C).

Symmetric conference matrices
If C is a symmetric conference matrix of order n > 1, then not only must n be congruent to 2 (mod 4) but also n − 1 must be a sum of two square integers; there is a clever proof by elementary matrix theory in van Lint and Seidel. n − 1 will always be the sum of two squares if n − 1 is a prime power. Given a symmetric conference matrix, the matrix S can be viewed as the Seidel adjacency matrix of a graph.
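The defining identity CᵀC = (n − 1)I is easy to verify on a small example. The sketch below builds a symmetric conference matrix of order 6 by a Paley-style construction over GF(5) (off-diagonal entries of S given by the Legendre symbol, computed via Euler's criterion; variable names are my own):

```python
def legendre(a, p=5):
    """Legendre symbol (a/p): +1 for nonzero squares mod p, -1 otherwise, 0 for 0."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

p = 5
n = p + 1
# Rows/cols indexed by {infinity} followed by the elements of GF(5).
C = [[0] * n for _ in range(n)]
for j in range(1, n):
    C[0][j] = C[j][0] = 1          # first row and column all 1s
for i in range(1, n):
    for j in range(1, n):
        C[i][j] = legendre(i - j)  # symmetric since -1 is a square mod 5

# Check the defining identity C^T C = (n - 1) I.
CtC = [[sum(C[k][i] * C[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]
assert CtC == [[n - 1 if i == j else 0 for j in range(n)] for i in range(n)]
```

Here n = 6 is oddly even and the submatrix S (rows/columns 1..5) is symmetric, matching the classification above.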
https://en.wikipedia.org/wiki/Seidel%20adjacency%20matrix
In mathematics, in graph theory, the Seidel adjacency matrix of a simple undirected graph G is a symmetric matrix with a row and column for each vertex, having 0 on the diagonal, −1 for positions whose rows and columns correspond to adjacent vertices, and +1 for positions corresponding to non-adjacent vertices. It is also called the Seidel matrix or, its original name, the (−1,1,0)-adjacency matrix. It can be interpreted as the result of subtracting the adjacency matrix of G from the adjacency matrix of the complement of G. The multiset of eigenvalues of this matrix is called the Seidel spectrum. The Seidel matrix was introduced by J. H. van Lint and J. J. Seidel in 1966 and extensively exploited by Seidel and coauthors. The Seidel matrix of G is also the adjacency matrix of a signed complete graph KG in which the edges of G are negative and the edges not in G are positive. It is also the adjacency matrix of the two-graph associated with G and KG. The eigenvalue properties of the Seidel matrix are valuable in the study of strongly regular graphs.
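The two equivalent descriptions above (entrywise definition, and "complement minus graph") can be checked directly. A minimal pure-Python sketch (helper names are my own) on the path graph 0-1-2, whose complement has the single edge 0-2:

```python
def adjacency(n, edges):
    """Ordinary 0/1 adjacency matrix of a simple undirected graph."""
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
    return A

def seidel(n, edges):
    """Seidel matrix: 0 on the diagonal, -1 for adjacent, +1 for non-adjacent."""
    A = adjacency(n, edges)
    return [[0 if i == j else (-1 if A[i][j] else 1) for j in range(n)]
            for i in range(n)]

n, edges = 3, [(0, 1), (1, 2)]   # path 0-1-2
S = seidel(n, edges)
A = adjacency(n, edges)
Abar = adjacency(n, [(0, 2)])    # adjacency matrix of the complement
# S equals A(complement of G) - A(G), as stated in the article.
assert all(S[i][j] == Abar[i][j] - A[i][j] for i in range(n) for j in range(n))
print(S)  # [[0, -1, 1], [-1, 0, -1], [1, -1, 0]]
```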
https://en.wikipedia.org/wiki/Retiming
Retiming is the technique of moving the structural location of latches or registers in a digital circuit to improve its performance, area, and/or power characteristics in such a way that preserves its functional behavior at its outputs. Retiming was first described by Charles E. Leiserson and James B. Saxe in 1983.

The technique uses a directed graph where the vertices represent asynchronous combinational blocks and the directed edges represent a series of registers or latches (the number of registers or latches can be zero). Each vertex has a value corresponding to the delay through the combinational circuit it represents. After doing this, one can attempt to optimize the circuit by pushing registers from output to input and vice versa, much like bubble pushing. Two operations can be used: deleting a register from each input of a vertex while adding a register to all outputs, and conversely adding a register to each input of a vertex while deleting a register from all outputs. In all cases, if the rules are followed, the circuit will have the same functional behavior as it did before retiming.

Formal description
The initial formulation of the retiming problem as described by Leiserson and Saxe is as follows. Given a directed graph G = (V, E) whose vertices represent logic gates or combinational delay elements in a circuit, assume there is a directed edge e = (u, v) between two elements that are connected directly or through one or more registers. Let the weight w(e) of each edge be the number of registers present along edge e in the initial circuit. Let d(v) be the propagation delay through vertex v. The goal in retiming is to compute an integer lag value r(v) for each vertex such that the retimed weight w_r(e) = w(e) + r(v) − r(u) of every edge is non-negative. There is a proof that this preserves the output functionality.

Minimizing the clock period with network flow
The most common use of retiming is to minimize the clock period. A simple technique to optimize the clock period is to search for the minimum feasibl
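The register-pushing operations correspond exactly to the edge-weight update w_r(e) = w(e) + r(v) − r(u) in the formal description. A toy sketch (the three-gate cycle and the lag values are hypothetical, chosen only for illustration) applying lags and checking the legality condition and the cycle-weight invariant:

```python
# Edges (u, v, w): w = number of registers on the wire u -> v.
edges = [("a", "b", 1), ("b", "c", 0), ("c", "a", 2)]
lag = {"a": 0, "b": 1, "c": 1}   # hypothetical lag values r(v)

def retime(edges, lag):
    """Apply the Leiserson-Saxe update w_r(e) = w(e) + r(v) - r(u)."""
    return [(u, v, w + lag[v] - lag[u]) for u, v, w in edges]

new_edges = retime(edges, lag)
# Legality: every retimed edge weight must be non-negative.
assert all(w >= 0 for _, _, w in new_edges)
# The total register count around any cycle is invariant (here: 3 on a->b->c->a).
assert sum(w for _, _, w in new_edges) == sum(w for _, _, w in edges)
print(new_edges)  # [('a', 'b', 2), ('b', 'c', 0), ('c', 'a', 1)]
```

The lag terms cancel telescopically around any cycle, which is why retiming can move registers but never create or destroy them on a feedback loop.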
https://en.wikipedia.org/wiki/About%20Time%20%28book%29
About Time: Einstein's Unfinished Revolution, published in 1995, is the second book written by Paul Davies on the subject of time; his first was The Physics of Time Asymmetry (1977). The intended audience is the general public rather than science academics. About Time explores selected mysteries of spacetime, following on from Albert Einstein's theory of relativity, which Davies believes does not fully explain time as humans experience it. The book delves into the nature of metaphysics, time, motion and gravity, covering a wide range of aspects of the current cosmological debate, across 283 pages in great detail. It includes an index, a bibliography, and numerous diagrams.

See also
Basic introduction to the mathematics of curved spacetime
Sense of time
The Mind of God
How to Build a Time Machine, 2002 book by the same author
https://en.wikipedia.org/wiki/Java%20version%20history
The Java language has undergone several changes since JDK 1.0, as well as numerous additions of classes and packages to the standard library. Since J2SE 1.4, the evolution of the Java language has been governed by the Java Community Process (JCP), which uses Java Specification Requests (JSRs) to propose and specify additions and changes to the Java platform. The language is specified by the Java Language Specification (JLS); changes to the JLS are managed under JSR 901. In September 2017, Mark Reinhold, Chief Architect of the Java Platform, proposed changing the release train to "one feature release every six months" rather than the then-current two-year schedule. This proposal took effect for all following versions and is still the current release schedule.

In addition to the language changes, other changes have been made to the Java Class Library over the years, which has grown from a few hundred classes in JDK 1.0 to over three thousand in J2SE 5. Entire new APIs, such as Swing and Java2D, have been introduced, and many of the original JDK 1.0 classes and methods have been deprecated. Some programs allow conversion of Java programs from one version of the Java platform to an older one (for example Java 5.0 backported to 1.4) (see Java backporting tools).

According to the Oracle Java SE Support Roadmap, version 21 is the latest one, and versions 21, 17, 11 and 8 are the currently supported long-term support (LTS) versions, for which Oracle customers will receive Oracle Premier Support. Oracle continues to release no-cost public Java 8 updates for development and personal use indefinitely. Oracle also continues to release no-cost public Java 17 LTS updates for all users, including commercial and production use, until September 2024. In the case of OpenJDK, both commercial long-term support and free software updates are available from multiple organizations in the broader community. Java 21, the latest (4th) LTS, was released on September 19, 2023.

Release table
JDK 1.0
https://en.wikipedia.org/wiki/Birkhoff%20polytope
The Birkhoff polytope Bn (also called the assignment polytope, the polytope of doubly stochastic matrices, or the perfect matching polytope of the complete bipartite graph K_{n,n}) is the convex polytope in RN (where N = n²) whose points are the doubly stochastic matrices, i.e., the n × n matrices whose entries are non-negative real numbers and whose rows and columns each add up to 1. It is named after Garrett Birkhoff.

Properties
Vertices
The Birkhoff polytope has n! vertices, one for each permutation on n items. This follows from the Birkhoff–von Neumann theorem, which states that the extreme points of the Birkhoff polytope are the permutation matrices, and therefore that any doubly stochastic matrix may be represented as a convex combination of permutation matrices; this was stated in a 1946 paper by Garrett Birkhoff, but equivalent results in the languages of projective configurations and of regular bipartite graph matchings, respectively, were shown much earlier, in 1894 in Ernst Steinitz's thesis and in 1916 by Dénes Kőnig. Because all of the vertex coordinates are zero or one, the Birkhoff polytope is an integral polytope.

Edges
The edges of the Birkhoff polytope correspond to pairs of permutations σ, ω differing by a cycle, i.e. such that σω⁻¹ is a cycle. This implies that the graph of Bn is a Cayley graph of the symmetric group Sn. This also implies that the graph of B3 is a complete graph K6, and thus B3 is a neighborly polytope.

Facets
The Birkhoff polytope lies within an (n − 1)²-dimensional affine subspace of the n²-dimensional space of all n × n matrices: this subspace is determined by the linear equality constraints that the sum of each row and of each column be one. Within this subspace, it is defined by n² linear inequalities, one for each coordinate of the matrix, specifying that the coordinate be non-negative. Therefore, for n ≥ 3, it has exactly n² facets. For n = 2, there are two facets, given by a11 = a22 = 0 and a12 = a21 = 0.

Symmetries
The Birkhoff polytope Bn is both vert
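The Birkhoff–von Neumann decomposition can be computed greedily: repeatedly find a permutation supported on the remaining positive entries, subtract the largest feasible multiple of its permutation matrix, and repeat. The sketch below (brute force over all permutations, so only sensible for tiny n; function and variable names are my own) decomposes a hypothetical 3 × 3 doubly stochastic matrix and reconstructs it from the weights:

```python
from itertools import permutations

def birkhoff_decompose(M, tol=1e-9):
    """Greedy Birkhoff-von Neumann: express a doubly stochastic M as a
    convex combination of permutations (brute force, fine for small n)."""
    n = len(M)
    M = [row[:] for row in M]          # work on a copy
    out = []
    while True:
        # Find a permutation supported on the remaining positive entries.
        perm = next((p for p in permutations(range(n))
                     if all(M[i][p[i]] > tol for i in range(n))), None)
        if perm is None:
            break
        theta = min(M[i][perm[i]] for i in range(n))
        out.append((theta, perm))
        for i in range(n):
            M[i][perm[i]] -= theta     # peel off theta * P_perm
    return out

M = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
decomp = birkhoff_decompose(M)
assert abs(sum(t for t, _ in decomp) - 1.0) < 1e-9   # weights are convex
# Reconstruct M as the weighted sum of the permutation matrices.
R = [[sum(t for t, p in decomp if p[i] == j) for j in range(3)]
     for i in range(3)]
assert all(abs(R[i][j] - M[i][j]) < 1e-9 for i in range(3) for j in range(3))
```

The existence of a supported permutation at each step is guaranteed by Hall's theorem as long as the remainder has equal positive row and column sums, which is exactly the content of the Birkhoff–von Neumann theorem.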
https://en.wikipedia.org/wiki/Flux%20%28metabolism%29
Flux, or metabolic flux, is the rate of turnover of molecules through a metabolic pathway. Flux is regulated by the enzymes involved in a pathway. Within cells, regulation of flux is vital for all metabolic pathways to regulate the pathway's activity under different conditions. Flux is therefore of great interest in metabolic network modelling, where it is analysed via flux balance analysis and metabolic control analysis. In this manner, flux is the movement of matter through metabolic networks that are connected by metabolites and cofactors, and is therefore a way of describing the activity of the metabolic network as a whole using a single characteristic.

Metabolic flux
It is easiest to describe the flux of metabolites through a pathway by considering the reaction steps individually. The flux of the metabolites through each reaction (J) is the rate of the forward reaction (Vf), less that of the reverse reaction (Vr): J = Vf − Vr. At equilibrium, there is no flux. Furthermore, it is observed that throughout a steady-state pathway, the flux is determined to varying degrees by all steps in the pathway. The degree of influence is measured by the flux control coefficient.

Control of metabolic flux
Control of flux through a metabolic pathway requires:
that the degree to which metabolic steps determine the metabolic flux vary based on the organism's metabolic needs; and
that the change in flux arising from the above requirement be communicated to the rest of the metabolic pathway in order to maintain a steady state.

Control of flux in a metabolic pathway:
The control of flux is a systemic property; that is, it depends, to varying degrees, on all interactions in the system.
The control of flux is measured by the flux control coefficient.
In a linear chain of reactions, the flux control coefficient takes values between zero and one. A flux control coefficient of zero means that that particular step has no influence over the steady-state flux. A step in a linea
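The relation J = Vf − Vr can be illustrated with a toy mass-action step A ⇌ B (rate constants and concentrations below are hypothetical, chosen only to show that flux vanishes exactly at equilibrium):

```python
def flux(kf, kr, A, B):
    """Net flux through a reversible mass-action step: forward minus reverse rate."""
    return kf * A - kr * B   # J = Vf - Vr

kf, kr = 2.0, 1.0
# Away from equilibrium there is a net forward flux...
assert flux(kf, kr, A=1.0, B=0.5) == 1.5
# ...and at equilibrium (B/A = kf/kr = 2) the net flux is zero.
assert flux(kf, kr, A=1.0, B=2.0) == 0.0
```

In a real pathway each Vf and Vr would come from an enzyme rate law rather than simple mass action, but the bookkeeping (net flux as the difference of the two directional rates) is the same.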
https://en.wikipedia.org/wiki/Eiki
Eiki is a Japanese company that manufactures LCD and DLP projectors, related accessories and overhead projectors.

History
Eiki was founded in 1953 in Osaka, Japan by four founders (M. Matsuura, S. Yagi, K. Sekino and Y. Minagawa). Initially the focus of the company was producing technology for classroom instruction, but later the company focused more on producing 16 mm movie projectors for other fields. The name Eiki comes from the Japanese term Eishaki, meaning projector. Eiki 16 mm projectors included only half of the moving parts of popular projectors, making them less costly and easier to maintain. Eiki was the largest manufacturer of such projectors.

In 1974, Eiki opened Eiki International, Inc., its USA division in Laguna Niguel, California, to distribute its products in the United States. In 1986, the company acquired the business unit of the Bell & Howell company that had originated the audio-visual industry some 50 years earlier. In 1988, Eiki Canada was created as a subsidiary of Eiki International, Inc. In 1995, Eiki Deutschland, GmbH became the company's first wholly owned office in Europe, and in 1997 Eiki Czech was founded to establish a network of dealers across central and eastern Europe.
https://en.wikipedia.org/wiki/Quasinormal%20operator
In operator theory, a quasinormal operator is a bounded operator defined by weakening the requirements of a normal operator. Every quasinormal operator is a subnormal operator. Every quasinormal operator on a finite-dimensional Hilbert space is normal.

Definition and some properties
Definition
Let A be a bounded operator on a Hilbert space H. Then A is said to be quasinormal if A commutes with A*A, i.e. A(A*A) = (A*A)A.

Properties
A normal operator is necessarily quasinormal.

Let A = UP be the polar decomposition of A. If A is quasinormal, then UP = PU. To see this, notice that the positive factor P in the polar decomposition is of the form (A*A)^(1/2), the unique positive square root of A*A. Quasinormality means A commutes with A*A. As a consequence of the continuous functional calculus for self-adjoint operators, A commutes with P = (A*A)^(1/2) also, i.e. AP = PA; since A = UP, this gives UPP = PUP, so UP = PU on the range of P. On the other hand, if h ∈ H lies in the kernel of P, clearly UP h = 0. But PU h = 0 as well, because U is a partial isometry whose initial space is the closure of the range of P. Finally, the self-adjointness of P implies that H is the direct sum of its range and kernel. Thus the argument given proves UP = PU on all of H.

On the other hand, one can readily verify that if UP = PU, then A must be quasinormal. Thus the operator A is quasinormal if and only if UP = PU.

When H is finite-dimensional, every quasinormal operator A is normal. This is because in the finite-dimensional case, the partial isometry U in the polar decomposition A = UP can be taken to be unitary. This then gives A*A = PU*UP = P² and, using UP = PU with U unitary, AA* = UPPU* = P², so A is normal. In general, a partial isometry may not be extendable to a unitary operator, and therefore a quasinormal operator need not be normal. For example, consider the unilateral shift T. T is quasinormal because T*T is the identity operator. But T is clearly not normal.

Quasinormal invariant subspaces
It is not known, in general, whether a bounded operator A on a Hilbert space H has a nontrivial invariant subspace. However, whe
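In finite dimensions the defining commutation A(A*A) = (A*A)A is a direct matrix check. A pure-Python sketch (helper names are my own) verifying it for a normal matrix, and showing it fails for the 2 × 2 truncated shift, consistent with the fact that finite-dimensional quasinormal operators are normal:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conj_transpose(X):
    n = len(X)
    return [[X[j][i].conjugate() for j in range(n)] for i in range(n)]

def is_quasinormal(A, tol=1e-12):
    """A is quasinormal iff A commutes with A*A."""
    AstarA = matmul(conj_transpose(A), A)
    L, R = matmul(A, AstarA), matmul(AstarA, A)
    n = len(A)
    return all(abs(L[i][j] - R[i][j]) < tol for i in range(n) for j in range(n))

# A normal matrix (here a unitary rotation by 90 degrees) is quasinormal.
rot = [[0.0 + 0j, -1.0 + 0j], [1.0 + 0j, 0.0 + 0j]]
assert is_quasinormal(rot)
# The 2x2 truncated shift is not quasinormal (and indeed not normal).
shift = [[0.0 + 0j, 1.0 + 0j], [0.0 + 0j, 0.0 + 0j]]
assert not is_quasinormal(shift)
```

The infinite-dimensional unilateral shift escapes this: there T*T = I commutes with everything, yet T is not normal, which is exactly why quasinormality is strictly weaker than normality only in infinite dimensions.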
https://en.wikipedia.org/wiki/Medial%20umbilical%20fold
The medial umbilical fold is an elevation of the peritoneum (on either side of the body) lining the inner surface of the lower anterior abdominal wall, formed by the underlying medial umbilical ligament (the obliterated distal portion of the umbilical artery) which the peritoneum covers. Superiorly, the two medial umbilical folds converge towards the midline to meet and terminate at the umbilicus. The unpaired midline median umbilical fold lies medial to each medial umbilical fold; a lateral umbilical fold lies lateral to either medial umbilical fold. On either side, a supravesical fossa lies between the median umbilical fold and the medial umbilical fold, and a medial inguinal fossa lies between the medial umbilical fold and the lateral umbilical fold of the same side.
https://en.wikipedia.org/wiki/Medial%20inguinal%20fossa
The medial inguinal fossa is a depression located within the inguinal triangle on the peritoneal surface of the anterior abdominal wall between the ridges formed by the lateral umbilical fold and the medial umbilical ligament, corresponding to the superficial inguinal ring. Clinical significance It is associated with direct inguinal hernias. See also Lateral inguinal fossa Abdomen
https://en.wikipedia.org/wiki/Supravesical%20fossa
The supravesical fossa is a depression upon the inner (i.e. peritoneal) surface of the anterior abdominal wall superior to the bladder, formed by a reflection of the peritoneum onto the superior surface of the bladder. It is bounded by the medial umbilical fold and the median umbilical fold. The level of the supravesical fossa varies according to the fullness of the bladder, as the peritoneum is not firmly attached to the upper surface of the bladder (the only region where this is the case).
https://en.wikipedia.org/wiki/Cremasteric%20fascia
The cremasteric fascia is a fascia in the scrotum. As the cremaster descends, it forms a series of loops which differ in thickness and length in different subjects. At the upper part of the cord the loops are short, but they become in succession longer and longer, the longest reaching down as low as the testis, where a few are inserted into the tunica vaginalis. These loops are united together by areolar tissue, and form a thin covering over the cord and testis, the cremasteric fascia. The cremasteric fascia lies between the more superficial external spermatic fascia and the deeper internal spermatic fascia. It is a continuation of the aponeurosis of the abdominal internal oblique muscle.
https://en.wikipedia.org/wiki/External%20spermatic%20fascia
The external spermatic fascia (intercrural or intercolumnar fascia) is a thin membrane prolonged downward around the surface of the spermatic cord and testis. It is separated from the dartos tunic by loose areolar tissue. It is occasionally referred to as 'Le Fascia de Webster' after an anatomist who once described it.

Structure
The external spermatic fascia is derived from the aponeurosis of the abdominal external oblique muscle. It is acquired by the spermatic cord at the superficial inguinal ring.
https://en.wikipedia.org/wiki/Internal%20spermatic%20fascia
The internal spermatic fascia (infundibuliform fascia, or Le deuxième fascia de Webster) is a thin layer which loosely invests the spermatic cord.

Structure
The internal spermatic fascia is derived from the transversalis fascia. It is acquired by the spermatic cord at the deep inguinal ring. It has very little lymphatic drainage. It is mainly supplied by sensory afferents and the sympathetic nervous system.
https://en.wikipedia.org/wiki/Lower%20risk
The IUCN has many ranks that define an animal's population status and risk of extinction. Species are classified into one of nine Red List categories: Extinct, Extinct in the Wild, Critically Endangered, Endangered, Vulnerable, Near Threatened, Least Concern, Data Deficient, and Not Evaluated. The IUCN formerly used a classification called "lower risk" to describe some animals. It defined an animal with the conservation status of lower risk as one with population levels high enough to ensure its survival. Animals with this status do not qualify as threatened or extinct; however, natural disasters or certain human activities could cause them to change to either of those classifications. When it was in use, this classification was sub-divided into three types:
Conservation dependent - where cessation of current conservation measures may result in the species being classified at a higher risk level.
Near threatened - may become vulnerable to endangerment in the near future but does not currently meet the criteria.
Least concern - where neither of the two above apply.

See also
Biodiversity action plan
Endangered species
https://en.wikipedia.org/wiki/Big%20Red%20%28Western%20Kentucky%20University%29
Big Red is the mascot of Western Kentucky University's sports teams, the "Hilltoppers" and "Lady Toppers". It is a red, furry being created by Ralph Carey in 1979. Big Red is meant to symbolize the spirit of WKU students and alumni as well as the sports teams' nickname, the "Hilltoppers," a name chosen because the school's campus sits atop a hill 232 feet above the Barren River flowing through WKU's home city of Bowling Green.

Creation
Prior to the start of the 1979 college basketball season, WKU student Ralph Carey volunteered to create a mascot for the school's sports teams. It was hoped a mascot would generate enthusiasm and supplement the iconic red towels waved by fans in the stands. Carey said he wanted to create something unique that stayed as far away as possible from the stereotype many have of Kentuckians. Although he liked the antics of the San Diego Padres' chicken mascot and initially sketched a bear wearing a sweater emblazoned with the letter "W", he ultimately decided not to use a known animal or entity. Carey eventually presented the sketch of a red, furry, blob-like mascot concept to a committee which included future university president Gary Ransdell. When asked what the character should be called, Carey suggested "Big Red" as an acknowledgement of the nickname given to WKU sports teams. The concept was approved. After some refinement, Carey constructed the first Big Red costume by hand; it consisted of air conditioner foam, fake fur, plastic tubing and aluminum framing, with a materials cost of roughly $300. Carey then performed in the suit he created when Big Red debuted at a home basketball game on December 1, 1979, in WKU's E.A. Diddle Arena.

Carey graduated in 1980, and the suit was handed down to fellow student Mark Greer, who was the first to portray the character at a WKU football game in the fall of that year. Historically, tryouts for students who want to portray Big Red are held in April of each year. The university library mai
https://en.wikipedia.org/wiki/Tubuli%20seminiferi%20recti
The tubuli seminiferi recti (also known as the tubuli recti, tubulus rectus, or straight seminiferous tubules) are structures in the testicle connecting the convoluted region of the seminiferous tubules to the rete testis, although the tubuli recti have a different appearance distinguishing them from these two structures. They enter the fibrous tissue of the mediastinum, and pass upward and backward, forming, in their ascent, a close network of anastomosing tubes which are merely channels in the fibrous stroma, lined by flattened epithelium, and having no proper walls; this constitutes the rete testis. Only Sertoli cells line the terminal ends of the seminiferous tubules (tubuli recti).
https://en.wikipedia.org/wiki/Pampiniform%20plexus
The pampiniform plexus (from Latin pampinus, a tendril, + forma, form) is a venous plexus – a network of many small veins found in the human male spermatic cord, and the suspensory ligament of the ovary. In the male, it is formed by the union of multiple testicular veins from the back of the testis and tributaries from the epididymis. In the male The veins of the plexus ascend along the spermatic cord in front of the vas deferens. Below the superficial inguinal ring they unite to form three or four veins, which pass along the inguinal canal, and, entering the abdomen through the deep inguinal ring, coalesce to form two veins. These again unite to form a single vein, the testicular vein, which opens on the right side into the inferior vena cava, at an acute angle, and on the left side into the left renal vein, at a right angle. The pampiniform plexus forms the chief mass of the cord. In addition to its function in venous return from the testes, the pampiniform plexus also plays a role in the temperature regulation of the testes. It acts as a countercurrent heat exchanger, cooling blood in adjacent arteries. An abnormal enlargement of the pampiniform plexus is a medical condition called varicocele. In the female In females, the pampiniform plexus drains the ovaries. The right ovary drains via the pampiniform plexus to the ovarian vein and then to the inferior vena cava. The left ovary drains via the pampiniform plexus to the left ovarian vein, then the left renal vein, and then to the inferior vena cava. While varicocele is the diagnostic term for swelling in the valveless venous distribution of the male pampiniform plexus, this embryological structure, common to males and females, is often incidentally noted to be swollen during laparoscopic examinations in both symptomatic and asymptomatic females. Diagnosis of female varicocele, properly called pelvic congestion syndrome, should be expected to be as frequent as male varicocele (15% of healthy asymptomatic men which are thought to deve
https://en.wikipedia.org/wiki/Mesorchium
The testes, at an early period of foetal life, are placed at the back part of the abdominal cavity, behind the peritoneum, and each is attached by a peritoneal fold, the mesorchium, to the mesonephros. The mesorchium is also the fibrous sheath that binds together the vascular and avascular structures of the spermatic cord. See also mesentery mesovarium
https://en.wikipedia.org/wiki/Aluminum%20Model%20Toys
Aluminum Model Toys (AMT) is a toy manufacturing brand founded in Troy, Michigan, in 1948 by West Gallogly Sr. AMT became known for manufacturing 1/25 scale plastic automobile dealer promotional model cars and friction motor models, and pioneered the annual 3-in-1 model kit buildable in stock, custom, or hot-rod versions. The company made a two-way deal in 1966 with Desilu Productions to produce a line of Star Trek models and to produce a 3/4 scale exterior and interior filming set of the Galileo shuttlecraft. It was also known for producing model trucks and movie and TV vehicles. The AMT brand was bought in 1978 by the Lesney company of the UK, then by competitor Ertl in 1983, then by the Round 2 company in 2012. History Beginning Because Gallogly had solid connections with Ford Motor Company, he was able to place his first models exclusively in Ford dealerships, starting a long promotional relationship. Gallogly's first model was a 1947–1948 Ford Fordor sedan made of cast aluminum and painted with official Ford paint. After issuing successful Ford sedan models, the company set up shop on Eight Mile Road outside Detroit. By 1948, plastic injection molding was already being used by Product Miniature Corporation (PMC). After the first Ford aluminum promotional model was offered, aluminum was abandoned. Different colors of plastic could now be used, so the company name was quietly changed to AMT, which deemphasized the word "aluminum". For example, AMT's 1949 and 1950 Ford and Plymouth sedans were its first plastic models, along with the 1950 Studebaker coupe. These promos often had wind-up motors which could not be seen through the shiny silver-tinted windows. They had metal chassis and diecast metal chrome-plated bumpers, which were later replaced with chrome-plated plastic. Often, official factory paint colors were applied to the models. The company's first commercial products were pre-assembled plastic promotional models, which were only available through automob
https://en.wikipedia.org/wiki/Lobules%20of%20testis
The lobules of testis are partitions of the testis formed by the septa of testis. The lobules of testis contain the tightly coiled seminiferous tubules. There are some hundreds of lobules in a testicle. Anatomy They differ in size according to their position, those in the middle of the gland being larger and longer. The lobules are conical in shape, the base being directed toward the circumference of the organ, the apex toward the mediastinum testis. Each lobule is contained in one of the intervals between the fibrous septa which extend between the mediastinum testis and the tunica albuginea, and consists of from one to three, or more, minute convoluted tubes, the seminiferous tubules (tubuli seminiferi). Each tubule extends from the base of the lobule, where it ends blindly, towards the apex of the lobule.
https://en.wikipedia.org/wiki/Grypania
Grypania is an early, tube-shaped fossil from the Proterozoic eon. The organism, with a size over one centimeter and consistent form, could have been a giant bacterium, a bacterial colony, or a eukaryotic alga. The oldest probable Grypania fossils date to about 2100 million years ago (redated from the previous 1870 million) and the youngest extended into the Ediacaran period. This implies that the time range of this taxon extended for 1200 million years.
https://en.wikipedia.org/wiki/Treasure%20Island%20%281984%20video%20game%29
Treasure Island is a 1984 computer game based on the 1883 novel Treasure Island by Robert Louis Stevenson. In the game, the player takes on the role of the book's protagonist Jim Hawkins and has to battle through hordes of pirates before a final showdown with Long John Silver. The game uses a flip-screen style. The programming was done by Greg Duddle, and the music was composed by David Whittaker. The version for the Commodore 64 and ZX Spectrum was released in 1984, and the Commodore Plus/4 version followed in 1985. The latter version is bug-free and has minor differences. On the Commodore 64 and ZX Spectrum it is impossible to get the maximum score because of bugs. The Commodore Plus/4 version was also converted for the Corvette in 1989. Gameplay and premise Players control Treasure Island protagonist Jim Hawkins, using various tools to get through the levels with a limited number of supplies. Enemy pirates act as obstacles to progress and throw cutlasses when Jim is in range; these can be picked up and used by players to defeat enemies. On the Plus/4 or Corvette it is possible to get a 101% final score. Legacy A similar game, with the theme changed from pirates to Asia, was released as The Willow Pattern Adventure for the Commodore 64, Amstrad CPC, and ZX Spectrum. See also Another adventure game named Treasure Island was published by Windham Classics in 1985.
https://en.wikipedia.org/wiki/Birch%E2%80%93Tate%20conjecture
The Birch–Tate conjecture is a conjecture in mathematics (more specifically in algebraic K-theory) proposed by both Bryan John Birch and John Tate. Statement In algebraic K-theory, the group K2 is defined as the center of the Steinberg group of the ring of integers of a number field F. K2 is also known as the tame kernel of F. The Birch–Tate conjecture relates the order of this group (its number of elements) to the value of the Dedekind zeta function at −1. More specifically, let F be a totally real number field and let N be the largest natural number such that the extension of F by the Nth roots of unity has an elementary abelian 2-group as its Galois group. Then the conjecture states that #K2 = N · |ζF(−1)|. For example, for F = Q one has N = 24 and ζQ(−1) = −1/12, giving #K2 = 24 · 1/12 = 2, which matches the known order of the tame kernel K2(Z). Status Progress on this conjecture has been made as a consequence of work on Iwasawa theory, and in particular of the proofs given for the so-called "main conjecture of Iwasawa theory."
https://en.wikipedia.org/wiki/IBM%20Network%20Control%20Program
The IBM Network Control Program, or NCP, was software that ran on a 37xx communications controller and managed communication with remote devices. NCP provided services comparable to the data link layer and network layer functions in the OSI model of a wide area network. Overview The original IBM Network Control Program ran on the 3705-I and supported access to older devices by application programs using Telecommunications Access Method (TCAM). With the advent of Systems Network Architecture (SNA), NCP was enhanced to connect cluster controllers (such as the IBM 3270) to application programs using TCAM and later to application programs using Virtual Telecommunications Access Method (VTAM). Subsequent versions of NCP were released to run on the IBM 3704, IBM 3705-II, IBM 3725, IBM 3720, or IBM 3745 Communications Controllers, all of which SNA defined as an SNA Physical Unit Type 4 (PU4). A PU4 usually had SDLC links to remote cluster controllers (PU1/PU2) or to other PU4s. Polling and addressing of the cluster controllers were performed by the NCP without mainframe intervention. In 2005 IBM introduced Communications Controller for Linux (CCL), a software product that allows an unmodified NCP to run on the mainframe, eliminating the need for a separate communications controller in some cases. A local NCP connected to a System/370 channel via a single address. A remote NCP had no direct connection to a mainframe but was connected to a local NCP via one or more high-speed SDLC links.
https://en.wikipedia.org/wiki/Architectural%20sculpture
Architectural sculpture is the use of sculptural techniques by an architect and/or sculptor in the design of a building, bridge, mausoleum or other such project. The sculpture is usually integrated with the structure, but freestanding works that are part of the original design are also considered to be architectural sculpture. The concept overlaps with, or is a subset of, monumental sculpture. It has also been defined as "an integral part of a building or sculpture created especially to decorate or embellish an architectural structure." Architectural sculpture has been employed by builders throughout history, and on virtually every continent on earth save pre-colonial Australia. Egyptian Modern understanding of ancient Egyptian architecture is based mainly on the religious monuments that have survived since antiquity, which are carved stone with post and lintel construction. These religious monuments dedicated to the gods or pharaohs were designed with a great deal of architectural sculpture inside and out: engaged statues, carved columns and pillars, and wall surfaces carved with bas-reliefs. The classic examples of Egyptian colossal monuments (the Great Sphinx of Giza, the Abu Simbel temples, the Karnak Temple Complex, etc.) represent thoroughly integrated combinations of architecture and sculpture. Obelisks, elaborately carved from a single block of stone, were usually placed in pairs to flank the entrances to temples and pyramids. Reliefs are also common in Egyptian building, depicting scenes of everyday life and often accompanied by hieroglyphics. Assyro-Babylonian The Fertile Crescent architectural sculptural tradition began when Ashurnasirpal II moved his capital to the city of Nimrud around 879 BCE. This site was located near a major deposit of gypsum (alabaster). This fairly easy-to-cut stone could be quarried in large blocks that could then be readily carved for the palaces that were built there. The early style developed out of an already
https://en.wikipedia.org/wiki/Bulk%20synchronous%20parallel
The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. It is similar to the parallel random access machine (PRAM) model, but unlike PRAM, BSP does not take communication and synchronization for granted. In fact, quantifying the requisite synchronization and communication is an important part of analyzing a BSP algorithm. History The BSP model was developed by Leslie Valiant of Harvard University during the 1980s. The definitive article was published in 1990. Between 1990 and 1992, Leslie Valiant and Bill McColl of Oxford University worked on ideas for a distributed memory BSP programming model, in Princeton and at Harvard. Between 1992 and 1997, McColl led a large research team at Oxford that developed various BSP programming libraries, languages and tools, and also numerous massively parallel BSP algorithms, including many early examples of high-performance communication-avoiding parallel algorithms and recursive "immortal" parallel algorithms that achieve the best possible performance and optimal parametric tradeoffs. With interest and momentum growing, McColl then led a group from Oxford, Harvard, Florida, Princeton, Bell Labs, Columbia and Utrecht that developed and published the BSPlib Standard for BSP programming in 1996. Valiant developed an extension to the BSP model in the 2000s, leading to the publication of the Multi-BSP model in 2011. In 2017, McColl developed a major new extension of the BSP model that provides fault tolerance and tail tolerance for large-scale parallel computations in AI, Analytics and high-performance computing (HPC). See also The BSP model Overview A BSP computer consists of the following: Components capable of processing and/or local memory transactions (i.e., processors), A network that routes messages between pairs of such components, and A hardware facility that allows for the synchronization of all or a subset of components. This is commonly interpreted as a s
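The superstep pattern that characterizes BSP (local computation, then communication, then a barrier synchronization) can be sketched in plain Python. The thread-based simulation below, with its shared inbox lists, is an illustrative assumption for a single machine, not the API of any real BSP library:

```python
import threading

# Toy BSP-style parallel sum. Each simulated "processor" runs one
# superstep of local computation, sends its partial result to
# processor 0, and then waits at a barrier (the bulk synchronization).

P = 4                                             # number of processors
data = [list(range(i, 16, P)) for i in range(P)]  # cyclic distribution of 0..15
inbox = [[] for _ in range(P)]                    # one message buffer per processor
barrier = threading.Barrier(P)
result = [None]

def worker(pid):
    # Superstep 1: local computation, then one-sided communication.
    partial = sum(data[pid])
    inbox[0].append(partial)      # list.append is atomic under CPython's GIL
    barrier.wait()                # the barrier ends the superstep
    # Superstep 2: processor 0 combines the partial sums it received.
    if pid == 0:
        result[0] = sum(inbox[0])
    barrier.wait()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(P)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result[0])                  # 0 + 1 + ... + 15 = 120
```

Because no message sent during a superstep is read until after the barrier, the cost of each superstep can be analyzed separately, which is the point of the bridging model.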
https://en.wikipedia.org/wiki/Synopses%20of%20the%20British%20Fauna
Synopses of the British Fauna is a series of identification guides, published by The Linnean Society and The Estuarine and Coastal Sciences Association. Each volume in the series provides an in-depth analysis of a group of animals and is designed to bridge the gap between the standard field guide and the more specialised monograph or treatise. The series is now published by The Field Studies Council on behalf of The Linnean Society and The Estuarine and Coastal Sciences Association. The series is designed for use in the field and is kept as user-friendly as possible, with technical terminology kept to a minimum and a glossary of terms provided, although the complexity of the subject matter makes the books more suitable for the more experienced practitioner. History of the series On 11 March 1943, at a meeting of The Linnean Society in Burlington House, TH Savory presented his "Synopsis of the Opiliones" (Harvestmen). It was so well received that a decision was made there and then to publish it as the first of a series of "ecological fauna lists". Re-launched by Dr Doris Kermack in the mid-1960s, the New Series of Synopses of the British Fauna went from strength to strength. From number 13, the series had been jointly sponsored by The Estuarine and Coastal Sciences Association and Dr RSK Barnes became co-editor. From 1993, the series has been published by The Field Studies Council and benefits from association with the extensive testing undertaken as part of the AIDGAP project. Volumes The series contains the following volumes, many of which are out of print. Many of the volumes have been updated and reprinted under slightly different names to reflect either taxonomic changes or advances in the understanding of a group. Volume 62: Marine Gastropods 3: Neogastropoda (Wigham and Graham) 2018 Volume 61: Marine Gastropods 2: Littorinimorpha and other unassigned Caenogastropoda (Wigham and Graham) 2017 Volume 60: Marine Gastropods 1: Patellogastropoda and Vetigas
https://en.wikipedia.org/wiki/Encyclopedia%20of%20Statistical%20Sciences
The Encyclopedia of Statistical Sciences is an encyclopaedia of statistics published by John Wiley & Sons. The first edition, in nine volumes, was published in 1982; it was edited by Norman Lloyd Johnson and Samuel Kotz. The second edition, in 16 volumes, was published in 2006; the senior editor was Samuel Kotz. See also International Encyclopedia of Statistical Science
https://en.wikipedia.org/wiki/Raspberry%20ripple
Raspberry ripple is a popular flavour of ice cream, particularly in Great Britain but also elsewhere. It consists of raspberry syrup injected into vanilla ice cream. "Raspberry ripple" was also the name given to other raspberry-flavoured food products in the 1920s. The term "ripple" in ice cream manufacture and consumption may have originated in the United States, where from the 1930s it was used to denote any type of ice cream ribboned through with coloured and flavoured syrup. Around this time, machinery had been developed which allowed ice cream to incorporate fruit paste separately in a marbled effect. Raspberry ripple has been a popular variant ever since. In popular culture Raspberry ripple is Cockney rhyming slang for nipple and cripple. See also Millie's Cookies Wall's (ice cream)
https://en.wikipedia.org/wiki/Council%20for%20the%20Mathematical%20Sciences
The Council for the Mathematical Sciences (CMS) is an organisation that represents all types of British mathematicians at a national level. It is not a professional institution, but a collaboration of them. History It was established in 2001 by the Institute of Mathematics and its Applications, the London Mathematical Society and the Royal Statistical Society to provide a forum for mathematics. Purpose to represent the interests of mathematics to government, Research Councils and other public bodies; to promote good practice in the mathematics curriculum and its teaching and learning at all levels and in all sectors of education; to respond coherently and effectively to proposals from government and other public bodies which may affect the mathematical community; to work with other bodies such as the Joint Mathematical Council and HoDoMS. Structure It is situated off the A4200 in Russell Square, next to the University of London in the offices of the London Mathematical Society. It is accessed via the Russell Square tube station on the Piccadilly Line.
https://en.wikipedia.org/wiki/Joint%20Mathematical%20Council
The Joint Mathematical Council (JMC) of the United Kingdom was formed in 1963 to "provide co-ordination between the Constituent Societies and generally to promote the advancement of mathematics and the improvement of the teaching of mathematics". The JMC serves as a forum for discussion between societies and for making representations to government and other bodies and responses to their enquiries. It is concerned with all aspects of mathematics at all levels from primary to higher education. Members The participating bodies are Adults Learning Mathematics Association of Teachers of Mathematics Association of Mathematics Education Teachers British Society for the History of Mathematics British Society for Research into Learning Mathematics HoDoMS Edinburgh Mathematical Society Institute of Mathematics and its Applications London Mathematical Society Mathematical Association Mathematics in Education and Industry National Association for Numeracy and Mathematics in Colleges National Association of Mathematics Advisers National Numeracy STEM Learning NRICH Operational Research Society Royal Academy of Engineering Royal Statistical Society Scottish Mathematical Council United Kingdom Mathematics Trust The observing bodies are Advisory Committee on Mathematics Education Department for Education (England) Department of Education (Northern Ireland) Education Scotland National Centre for Excellence in Teaching Mathematics Office for Standards in Education The Office of Qualifications and Examinations Regulation The Royal Society Scottish Qualifications Authority Welsh Government Education Directorate Leadership The Chair of the JMC is Andy Noyes, Professor of Education at the University of Nottingham and is a member of the Royal Society Advisory Committee on Mathematics Education.
https://en.wikipedia.org/wiki/Erol%20Gelenbe
Sami Erol Gelenbe (born 22 August 1945, Istanbul), a Turkish and French computer scientist, electronic engineer and applied mathematician, pioneered the field of Computer System and Network Performance in Europe. Active in European Union research projects, he is Professor in the Institute of Theoretical and Applied Informatics of the Polish Academy of Sciences (2017-), Associate Researcher in the I3S Laboratory (CNRS, University of Côte d'Azur, Nice) and Abraham de Moivre Laboratory (CNRS, Imperial College). Previous Chaired professorships include the University of Liège (1974-1979), University Paris-Saclay (1979-1986), University Paris Descartes (1986-2005), ECE Chair at Duke University (1993-1998), University Chair Professor and Director of EECS, University of Central Florida (1998-2003), and Dennis Gabor Professor and Head of Intelligent Systems and Networks, Imperial College (2003-2019). Biography The son of Maria Sacchet Gelenbe and of Yusuf Âli Gelenbe, a descendant of the 18th-century Ottoman mathematician Gelenbevi Ismail Efendi and nephew of the Ottoman Sheyhulislam Mehmet Cemaleddin Efendi, Erol graduated from Ankara Koleji and the Middle East Technical University, Ankara, where he won the K.K. Clarke Research Award for his undergraduate thesis on "partial flux switching magnetic memory systems". Awarded a Fulbright Fellowship, he completed a master's degree and a PhD thesis at the Polytechnic University on "Stochastic automata with structural restrictions" under Prof. Edward J. Smith. Career He then joined the University of Michigan as an assistant professor and, on leave from Michigan, founded the Modeling and Performance Evaluation of Computer Systems research group at INRIA (France), and was a visiting associate professor at Paris 13 University. In 1971 he was elected to a chair in Computer Science at the University of Liège in Belgium, taking up the appointment in 1974 and joining Professor Danny Ribbens while remaining a research
https://en.wikipedia.org/wiki/Quantum%20cohomology
In mathematics, specifically in symplectic topology and algebraic geometry, a quantum cohomology ring is an extension of the ordinary cohomology ring of a closed symplectic manifold. It comes in two versions, called small and big; in general, the latter is more complicated and contains more information than the former. In each, the choice of coefficient ring (typically a Novikov ring, described below) significantly affects its structure, as well. While the cup product of ordinary cohomology describes how submanifolds of the manifold intersect each other, the quantum cup product of quantum cohomology describes how subspaces intersect in a "fuzzy", "quantum" way. More precisely, they intersect if they are connected via one or more pseudoholomorphic curves. Gromov–Witten invariants, which count these curves, appear as coefficients in expansions of the quantum cup product. Because it expresses a structure or pattern for Gromov–Witten invariants, quantum cohomology has important implications for enumerative geometry. It also connects to many ideas in mathematical physics and mirror symmetry. In particular, it is ring-isomorphic to symplectic Floer homology. Throughout this article, X is a closed symplectic manifold with symplectic form ω. Novikov ring Various choices of coefficient ring for the quantum cohomology of X are possible. Usually a ring is chosen that encodes information about the second homology of X. This allows the quantum cup product, defined below, to record information about pseudoholomorphic curves in X. For example, let H₂(X) be the second homology of X modulo its torsion. Let R be any commutative ring with unit and Λ the ring of formal power series of the form λ = Σ_{A ∈ H₂(X)} λ_A e^A, where the coefficients λ_A come from R, the e^A are formal variables subject to the relation e^A · e^B = e^{A+B}, and, for every real number C, only finitely many A with ω(A) less than or equal to C have nonzero coefficients λ_A. The variable e^A is considered to be of degree 2c₁(A), where c₁ is the first Chern class of the tangent bundle
https://en.wikipedia.org/wiki/List%20of%20NAS%20manufacturers
The following notable companies manufacture Network-attached Storage devices. See also File area network Disk enclosure Network architecture Global Namespace Server (computing)
https://en.wikipedia.org/wiki/Made%20in%20NY
Made in NY is an incentive program and marketing campaign of the City of New York Mayor's Office of Film, Theatre & Broadcasting. Under the program, television and film productions which complete at least 75% of their shooting and rehearsal work in New York City are eligible for marketing incentives and tax credits, and can display the Made in NY logo in their closing credits. The logo was created in 2005 by graphic designer Rafael Esquer. Made In NY also has a training program called the Made in NY Production Assistant Training Program. This trains New York City residents as production assistants and a graduate of the training program is named PA of the Month by the Mayor's Office of Film. The New York Daily News profiled James Adames, the June 2011 PA of the Month. Adames started as a Production assistant and is now working as a TV/Film Producer & Location Manager. Made in NY is also commissioning a gut renovation of two buildings in Bush Terminal, Brooklyn. The buildings are being designed by Brooklyn architecture firm, nARCHITECTS and are intended to become a garment production hub for New York City's garment industry formerly centered in Manhattan's garment district. See also Media of New York City Mayor's Office of Film, Theatre & Broadcasting NYC Media WNYE (FM) WNYE-TV
https://en.wikipedia.org/wiki/Translational%20partition%20function
In statistical mechanics, the translational partition function, q_T, is that part of the partition function resulting from the movement (translation) of the center of mass. For a single atom or molecule in a low pressure gas, neglecting the interactions of molecules, the canonical ensemble can be approximated by q_T = V / Λ³, where Λ = h / √(2π m k_B T). Here, V is the volume of the container holding the molecule (volume per single molecule so, e.g., for 1 mole of gas the container volume should be divided by the Avogadro number), Λ is the thermal de Broglie wavelength, h is the Planck constant, m is the mass of a molecule, k_B is the Boltzmann constant and T is the absolute temperature. This approximation is valid as long as Λ is much less than any dimension of the volume the atom or molecule is in. Since typical values of Λ are on the order of 10-100 pm, this is almost always an excellent approximation. When considering a set of N non-interacting but identical atoms or molecules, when q_T ≫ N, or equivalently when ρΛ³ ≪ 1 where ρ is the density of particles, the total translational partition function can be written Q_T = q_T^N / N!. The factor of N! arises from the restriction of allowed N-particle states due to quantum exchange symmetry. For most substances, the temperatures at which this approximation breaks down significantly are far below the temperatures at which they form liquids or solids. See also Rotational partition function Vibrational partition function Partition function (mathematics)
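As a rough numerical check, the following sketch evaluates the thermal de Broglie wavelength Λ = h / √(2π m k_B T) and the translational partition function q_T = V / Λ³. The choice of argon at 300 K in a 1-litre box is an assumed illustrative example:

```python
import math

# Evaluate Λ = h / sqrt(2·π·m·k_B·T) and q_T = V / Λ³ for one argon
# atom at 300 K in a 1-litre box (illustrative values only).

h  = 6.62607015e-34   # Planck constant, J·s
kB = 1.380649e-23     # Boltzmann constant, J/K
u  = 1.66053907e-27   # atomic mass unit, kg

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength Λ, in metres."""
    return h / math.sqrt(2 * math.pi * m * kB * T)

def q_translational(V, m, T):
    """Translational partition function q_T = V / Λ³."""
    return V / thermal_wavelength(m, T) ** 3

m_Ar = 39.948 * u     # mass of one argon atom, kg
T = 300.0             # temperature, K
V = 1e-3              # volume, m³ (1 litre)

lam = thermal_wavelength(m_Ar, T)
print(f"Λ  ≈ {lam:.2e} m")                         # ~1.6e-11 m: tens of pm
print(f"qT ≈ {q_translational(V, m_Ar, T):.2e}")   # huge, since Λ ≪ box size
```

The computed Λ of roughly 16 pm falls in the 10-100 pm range quoted above, and Λ is indeed vastly smaller than any dimension of the box, so the approximation holds.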
https://en.wikipedia.org/wiki/Bernard%20Silverman
Sir Bernard Walter Silverman, (born 22 February 1952) is a British statistician and former Anglican clergyman. He was Master of St Peter's College, Oxford, from 1 October 2003 to 31 December 2009. He is a member of the Statistics Department at Oxford University, and has also been attached to the Wellcome Trust Centre for Human Genetics, the Smith School of Enterprise and the Environment, and the Oxford-Man Institute of Quantitative Finance. He has been a member of the Council of Oxford University and of the Council of the Royal Society. He was briefly president of the Royal Statistical Society in January 2010, a position from which he stood down upon announcement of his appointment as Chief Scientific Advisor to the Home Office. He was awarded a knighthood in the 2018 New Years Honours List, "For public service and services to Science". Education Silverman was educated at the City of London School, an independent day school in Central London, from 1961 to 1969, on a Carpenter Scholarship (similar to today's full bursary), followed by Jesus College at the University of Cambridge. Career 1970–73 Undergraduate, Jesus College, Cambridge. 1973–74 Graduate Student, Jesus College, Cambridge. 1974–75 Research Student, Statistical Laboratory, Cambridge. 1975–77 Research Fellow of Jesus College, Cambridge. 1976–77 Calculator Development Manager, Sinclair Radionics Ltd. 1977–78 Junior Lecturer in Statistics, Oxford University and Weir Junior Research Fellow of University College, Oxford. 1978–80 Lecturer in Statistics, University of Bath. 1981–84 Reader in Statistics, University of Bath. 1984 and 1992–93 Head of Statistics Group, University of Bath. 1984–93 Professor of Statistics, University of Bath. 1988–91 Head of School of Mathematical Sciences, University of Bath. 1993–2003 Professor of Statistics, University of Bristol 1993–97 and 1998–99 Head of Statistics Group, University of Bristol 1999–2003 Henry Overton Wills Professor of Mathematics, University o
https://en.wikipedia.org/wiki/Preferential%20alignment
Preferential alignment is a criterion for the orientation of a molecule or atom. Preferential alignment can be related to the formation of a crystal structure or an amorphous structure. For a polymer material with liquid crystals, the liquid crystals are molecules shaped like rigid rods. Just as logs being floated down a river tend to travel parallel to the direction of the river, liquid crystals have a preferential alignment with each other. At high temperatures, this alignment is disrupted and the material is said to be in the isotropic state. At lower temperatures, the alignment will take place and the liquid crystals are said to be in the nematic state [Hoong.C.C].
https://en.wikipedia.org/wiki/Rado%20graph
In the mathematical field of graph theory, the Rado graph, Erdős–Rényi graph, or random graph is a countably infinite graph that can be constructed (with probability one) by choosing independently at random for each pair of its vertices whether to connect the vertices by an edge. The names of this graph honor Richard Rado, Paul Erdős, and Alfréd Rényi, mathematicians who studied it in the early 1960s; it appears even earlier in prior work. The Rado graph can also be constructed non-randomly, by symmetrizing the membership relation of the hereditarily finite sets, by applying the BIT predicate to the binary representations of the natural numbers, or as an infinite Paley graph that has edges connecting pairs of prime numbers congruent to 1 mod 4 that are quadratic residues modulo each other. Every finite or countably infinite graph is an induced subgraph of the Rado graph, and can be found as an induced subgraph by a greedy algorithm that builds up the subgraph one vertex at a time. The Rado graph is uniquely defined, among countable graphs, by an extension property that guarantees the correctness of this algorithm: no matter which vertices have already been chosen to form part of the induced subgraph, and no matter what pattern of adjacencies is needed to extend the subgraph by one more vertex, there will always exist another vertex with that pattern of adjacencies that the greedy algorithm can choose. The Rado graph is highly symmetric: any isomorphism of its finite induced subgraphs can be extended to a symmetry of the whole graph. The first-order logic sentences that are true of the Rado graph are also true of almost all random finite graphs, and the sentences that are false for the Rado graph are also false for almost all finite graphs. In model theory, the Rado graph is an example of the unique countable model of an ω-categorical theory. History The Rado graph was first constructed in two ways, with vertices either the hereditarily finite sets or the na
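The BIT-predicate construction and the extension property lend themselves to a short sketch. The function names below (adjacent, extend) are hypothetical, chosen for illustration:

```python
# Rado graph via the BIT predicate: vertices are natural numbers, and
# i < j are adjacent exactly when bit i of j's binary form is 1.

def adjacent(i, j):
    """Edge test for the Rado graph (i != j assumed)."""
    i, j = sorted((i, j))
    return (j >> i) & 1 == 1

def extend(U, V):
    """Return a vertex adjacent to every u in U and to no v in V
    (U, V disjoint finite sets of naturals), witnessing the
    extension property."""
    shift = max(U | V, default=0) + 1   # a bit position outside U and V
    # Setting bit `shift` makes the witness larger than all of U and V,
    # so its adjacencies can be read directly off its own bits.
    return sum(1 << u for u in U) | (1 << shift)

U, V = {0, 3}, {1, 2}
w = extend(U, V)
print(w, all(adjacent(u, w) for u in U), any(adjacent(v, w) for v in V))
# → 25 True False
```

The witness 25 = binary 11001 has bits 0 and 3 set (adjacent to U) and bits 1 and 2 clear (non-adjacent to V); this is exactly the vertex the greedy embedding algorithm described above can always find.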
https://en.wikipedia.org/wiki/HoDoMS
HoDoMS (Heads of Departments of Mathematical Sciences) is an educational company that acts as a body to represent the heads of United Kingdom higher education departments of mathematical sciences. It aims to discuss and promote the interests of higher education mathematics in the UK and to facilitate dialogue between departments. Governance HoDoMS is operated by a committee including four officer roles which are listed below with incumbents. The committee includes observers from the Institute of Mathematics and its Applications, The OR Society, the Royal Statistical Society, the Council for the Mathematical Sciences and the Edinburgh Mathematical Society. Activities The main activity of HoDoMS is to run an annual conference bringing members together for briefings and discussion on current issues. For example, the 2020 conference heard briefings on policy issues such as research funding, the Research Excellence Framework 2021 and the Teaching Excellence Framework, as well as practicalities such as online marking, knowledge exchange, teaching as a career for mathematics undergraduates, and academics and mental health. HoDoMS also collaborates with other organisations, for example with the London Mathematical Society on an 'Education Day' in 2019 and with the Institute of Mathematics and its Applications and the Isaac Newton Institute on an 'Induction Course for New Lecturers in the Mathematical Sciences' in 2021. History The first meeting of HoDoMS took place on 14th September 1995 at University College, London under its first chair, Graham Wilks. On 14th August 2018, HoDoMS was incorporated as a private company limited by guarantee. Affiliations HoDoMS is a member of the Joint Mathematical Council of the United Kingdom (JMC).
https://en.wikipedia.org/wiki/Association%20of%20Teachers%20of%20Mathematics
The Association of Teachers of Mathematics (ATM) was established by Caleb Gattegno in 1950 to encourage the development of mathematics education to be more closely related to the needs of the learner. ATM is a membership organisation representing a community of students, nursery, infant, primary, secondary and tertiary teachers, numeracy consultants, overseas teachers, academics and anybody interested in mathematics education. Aims The stated aims of the Association of Teachers of Mathematics are to support the teaching and learning of mathematics by: encouraging increased understanding and enjoyment of mathematics encouraging increased understanding of how people learn mathematics encouraging the sharing and evaluation of teaching and learning strategies and practices promoting the exploration of new ideas and possibilities initiating and contributing to discussion of and developments in mathematics education at all levels Guiding principles ATM lists as its guiding principles: The ability to operate mathematically is an aspect of human functioning which is as universal as language itself. Attention needs constantly to be drawn to this fact. Any possibility of intimidating with mathematical expertise is to be avoided. The power to learn rests with the learner. Teaching has a subordinate role. The teacher has a duty to seek out ways to engage the power of the learner. It is important to examine critically approaches to teaching and to explore new possibilities, whether deriving from research, from technological developments or from the imaginative and insightful ideas of others. Teaching and learning are cooperative activities. Encouraging a questioning approach and giving due attention to the ideas of others are attitudes to be encouraged. Influence is best sought by building networks of contacts in professional circles. Structure There are about 3500 members, mainly teachers in primary and secondary schools. It is a registered charity and all profits
https://en.wikipedia.org/wiki/Non-contact%20force
A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object physically in contact with it. All four known fundamental interactions are non-contact forces: Gravity, the force of attraction that exists among all bodies that have mass. The gravitational force exerted on each body by the other is proportional to the product of the two masses and inversely proportional to the square of the distance between them. Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes. Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short-distance force that acts between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions. Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics. The Casimir effect could also be thought of as a non-contact force. See also Tension Body force Surface
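In symbols, the gravitational attraction described above is Newton's law of universal gravitation:

```latex
F \;=\; G \, \frac{m_1 m_2}{r^2}
```

where F is the magnitude of the attractive force, G is the gravitational constant, m₁ and m₂ are the two masses, and r is the distance between their centres; the 1/r² factor is the inverse-square dependence stated in the text.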
https://en.wikipedia.org/wiki/Signature%20%28logic%29
In logic, especially mathematical logic, a signature lists and describes the non-logical symbols of a formal language. In universal algebra, a signature lists the operations that characterize an algebraic structure. In model theory, signatures are used for both purposes. They are rarely made explicit in more philosophical treatments of logic. Definition Formally, a (single-sorted) signature can be defined as a 4-tuple σ = (Sfunc, Srel, Sconst, ar), where Sfunc and Srel are disjoint sets not containing any other basic logical symbols, called respectively function symbols (examples: +, ×), relation symbols or predicates (examples: ≤, ∈), constant symbols (examples: 0, 1), and a function ar which assigns a natural number called arity to every function or relation symbol. A function or relation symbol is called n-ary if its arity is n. Some authors define a nullary (0-ary) function symbol as constant symbol, otherwise constant symbols are defined separately. A signature with no function symbols is called a relational signature, and a signature with no relation symbols is called an algebraic signature. A finite signature is a signature such that Sfunc and Srel are finite. More generally, the cardinality of a signature σ is defined as |σ| = |Sfunc| + |Srel| + |Sconst|. The language of a signature is the set of all well formed sentences built from the symbols in that signature together with the symbols in the logical system. Other conventions In universal algebra the word type or similarity type is often used as a synonym for "signature". In model theory, a signature σ is often called a vocabulary, or identified with the (first-order) language L to which it provides the non-logical symbols. However, the cardinality of the language L will always be infinite; if σ is finite then |L| will be ℵ0. As the formal definition is inconvenient for everyday use, the definition of a specific signature is often abbreviated in an informal way, as in: "The standard signature for abelian groups is σ = (+, −, 0), where − is a unary operator." Sometimes an algebraic signature is regarded as just a list of arities, as in: "The similarity type for abelian groups is τ = (2, 1, 0)." Formally this would define the function symbols of the
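As an illustration of the definition, a single-sorted signature can be modelled as a tuple of symbol sets plus an arity map. This is only a sketch; the class and field names are assumptions of this example, not standard notation:

```python
from typing import NamedTuple

class Signature(NamedTuple):
    """A single-sorted signature: disjoint sets of function and relation
    symbols, a set of constant symbols, and an arity function (a dict)."""
    funcs: frozenset
    rels: frozenset
    consts: frozenset
    ar: dict

# The standard signature for abelian groups mentioned in the text:
# a binary + operation, a unary inverse -, and a constant 0.
abelian = Signature(
    funcs=frozenset({"+", "-"}),
    rels=frozenset(),           # no relation symbols
    consts=frozenset({"0"}),
    ar={"+": 2, "-": 1},        # "-" is the unary inverse operator
)

# The cardinality of the signature counts all three symbol sets.
cardinality = len(abelian.funcs) + len(abelian.rels) + len(abelian.consts)
```

The similarity-type abbreviation in the text corresponds to reading off just the arities, here (2, 1, 0).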
https://en.wikipedia.org/wiki/Profile%20%28UML%29
A profile in the Unified Modeling Language (UML) provides a generic extension mechanism for customizing UML models for particular domains and platforms. Extension mechanisms allow refining standard semantics in a strictly additive manner, so that the extensions cannot contradict standard semantics. Profiles are defined using stereotypes, tag definitions, and constraints which are applied to specific model elements, such as Classes, Attributes, Operations, and Activities. A profile is a collection of such extensions that collectively customize UML for a particular domain (e.g., aerospace, healthcare, financial) or platform (J2EE, .NET). Examples The UML Profile for XML is defined by David Carlson in the book "Modeling XML Applications with UML", p. 310, and describes a set of extensions to basic UML model elements to enable accurate modeling of XSD schemas. SysML is an Object Management Group (OMG)-standardized profile of the Unified Modeling Language which is used for systems engineering applications. MARTE is the OMG standard for modelling real-time and embedded applications with UML2. The UML profile for relationships (see also ) is based on RM-ODP and provides precise specifications of the semantics of UML concepts used to specify generic (not necessarily binary) relationships such as composition and subtyping. See also Stereotype (UML) Footnotes
https://en.wikipedia.org/wiki/Cork%20encoding
The Cork (also known as T1 or EC) encoding is a character encoding used for encoding glyphs in fonts. It is named after the city of Cork in Ireland, where a new encoding for LaTeX was introduced during a TeX Users Group (TUG) conference in 1990. It contains 256 characters supporting most western and eastern European languages written with the Latin alphabet. Details In 8-bit TeX engines the font encoding has to match the encoding of the hyphenation patterns; this is where the Cork encoding is most commonly used. In LaTeX one can switch to this encoding with \usepackage[T1]{fontenc}, while in ConTeXt MkII it is already the default encoding. In modern engines such as XeTeX and LuaTeX, Unicode is fully supported and the 8-bit font encodings are obsolete. Character set Notes Hexadecimal values under the characters in the table are the Unicode character codes. The first 12 characters are often used as combining characters. Supported languages The encoding supports most European languages written in the Latin alphabet. Notable exceptions are: Esperanto (using IL3) Latvian language and Lithuanian language (using L7X) Welsh language Languages with slightly suboptimal support include: Galician language, Portuguese language and Spanish language – due to the lack of the characters ª and º, which are not superscript versions of lowercase "a" and "o" (superscripts are thinner, and ª and º are often underlined) Croatian language, Bosnian language, Serbian language – due to the shared use of the slot for Đ Turkish language – due to dotless i having different uppercase and lowercase combinations than in other languages
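A minimal LaTeX document showing the switch to the T1 encoding described above (a sketch; the babel language choice here is only an illustrative assumption):

```latex
\documentclass{article}
\usepackage[T1]{fontenc}    % select the Cork (T1) font encoding
\usepackage[czech]{babel}   % hyphenation patterns must match the encoding
\begin{document}
Příliš žluťoučký kůň -- with T1, accented letters are single glyphs,
so 8-bit hyphenation of accented words works correctly.
\end{document}
```

Without the fontenc line, an 8-bit engine falls back to the older OT1 encoding, in which accented letters are built from composites and words containing them are not hyphenated.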
https://en.wikipedia.org/wiki/Donald%20Rubin
Donald Bruce Rubin (born December 22, 1943) is an Emeritus Professor of Statistics at Harvard University, where he chaired the department of Statistics for 13 years. He also works at Tsinghua University in China and at Temple University in Philadelphia. He is most well known for the Rubin causal model, a set of methods designed for causal inference with observational data, and for his methods for dealing with missing data. In 1977 he was elected as a Fellow of the American Statistical Association. Biography Rubin was born in Washington, D.C. into a family of lawyers. As an undergraduate Rubin attended the accelerated Princeton University PhD program where he was one of a cohort of 20 students mentored by the physicist John Wheeler (the intention of the program was to confer degrees within 5 years of freshman matriculation). He switched to psychology and graduated in 1965. He began graduate school in psychology at Harvard with a National Science Foundation fellowship, but because his statistics background was considered insufficient, he was asked to take introductory statistics courses. Rubin became a PhD student again, this time in Statistics under William Cochran at the Harvard Statistics Department. After graduating from Harvard in 1970, he began working at the Educational Testing Service in 1971, and served as a visiting faculty member at Princeton's new statistics department. He published his major papers on the Rubin causal model in 1974–1980, seminal papers on propensity score matching in the early 1980s with Paul Rosenbaum, and a textbook on the subject with Nobel prize winning econometrician Guido Imbens in 2015.
https://en.wikipedia.org/wiki/Mimotope
A mimotope is often a peptide, and mimics the structure of an epitope. Because of this property it causes an antibody response similar to the one elicited by the epitope. An antibody for a given epitope antigen will recognize a mimotope which mimics that epitope. Mimotopes are commonly obtained from phage display libraries through biopanning. Vaccines utilizing mimotopes are being developed. Mimotopes are a kind of peptide aptamer. When the term mimotope was first coined by Mario Geysen in 1986, it was used to describe peptides mimicking epitopes. However, this concept has since been extended to refer to peptide mimics of all types of binding sites. As mimics of binding sites, mimotopes have been widely used in mapping epitopes, identifying drug targets and inferring protein interaction networks. Furthermore, mimotopes have also shown potential in the development of new diagnostics, therapeutics and vaccines. In addition, the special affinities of mimotopes for various semiconductors and other materials have shown encouraging promise in new-material and new-energy studies. Gathering information on mimotopes into a dedicated database is therefore worthwhile. In 2010, the MimoDB database version 1.0 was released. It had 10716 peptides grouped into 1229 sets. These peptides were extracted from biopanning results of phage-displayed random peptide libraries reported in 571 papers. The MimoDB database has recently been updated to version 2.0. In version 2.0, it has 15633 peptides collected from 849 papers and grouped into 1818 sets. Besides the core data on panning experiments and their results, broad background information on target, template, library and structure is included. An accompanying benchmark has also been compiled for bioinformaticians to develop and evaluate their new models, algorithms and programs. In addition, the MimoDB database provides tools for simple and advanced searches, structure visualization, BLAST and alignment view on the
https://en.wikipedia.org/wiki/IFAE
The Institute for High Energy Physics (Institut de Fisica d'Altes Energies, IFAE) of Barcelona, Spain, is a Public Consortium between the Generalitat de Catalunya (Government of the Autonomous Community of Catalonia) and the Universitat Autònoma de Barcelona (UAB). It was formally created on July 16, 1991, by Act number 159/1991 of the Generalitat. As an organization it is independent from both the UAB and the Generalitat, ruled by its own Statutes, and governed by a Governing Board. It is located on the campus of UAB in Bellaterra, Barcelona. The IFAE has its own Titular Personnel as well as Associated Personnel consisting of members of the Physics Department of UAB working on Particle Physics. It also has an agreement with Universitat de Barcelona (dated 8/7/1992) which allows the faculty of that university working on Particle Physics to be Associated Personnel of IFAE. The IFAE is also ascribed to the UAB as a University Institute (Act number 231/1995 of the Generalitat), which allows its members to teach in its Doctoral Programme in Physics, and thus benefits from a productive symbiotic relationship with this major education and research institution. The institute is dedicated to forefront experimental and theoretical research in the fields of high energy physics and high energy astrophysics as well as in related technologies. Current projects Detector construction, algorithm development and Monte Carlo simulation for the ATLAS experiment at the Large Hadron Collider (LHC) under construction at CERN; Experiment design and construction of the MAGIC high energy gamma-ray telescope, in operation since 2004 in the Canary Islands, Spain; Design of the Cherenkov Telescope Array, a matrix of Cherenkov telescopes ten times more sensitive than MAGIC. Development of a novel X-ray detector technique for use in medical imaging; Analysis of data from the T2K experiment in Japan. Development of an Information Data Grid in connection with the LHC computing; A nu
https://en.wikipedia.org/wiki/Earth%20tide
Earth tide (also known as solid-Earth tide, crustal tide, body tide, bodily tide or land tide) is the displacement of the solid Earth's surface caused by the gravity of the Moon and Sun. Its main component has meter-level amplitude at periods of about 12 hours and longer. The largest body tide constituents are semi-diurnal, but there are also significant diurnal, semi-annual, and fortnightly contributions. Though the gravitational force causing earth tides and ocean tides is the same, the responses are quite different. Tide raising force The larger of the periodic gravitational forces is from the Moon, but that of the Sun is also important. The images here show the lunar tidal force when the Moon appears directly over 30° N (or 30° S). This pattern remains fixed with the red area directed toward (or directly away from) the Moon. Red indicates upward pull, blue downward. If, for example, the Moon is directly over 90° W (or 90° E), the centres of the red areas are at 30° N, 90° W and 30° S, 90° E, and the centre of the bluish band follows the great circle equidistant from those points. At 30° latitude a strong peak occurs once per lunar day, giving a significant diurnal force at that latitude. Along the equator two equally sized peaks (and depressions) impart a semi-diurnal force. Body tide components The Earth tide encompasses the entire body of the Earth and is unhindered by the thin crust and land masses of the surface, on scales that make the rigidity of rock irrelevant. Ocean tides are a consequence of tangent forces (see: equilibrium tide) and the resonance of the same driving forces with water movement periods in ocean basins accumulated over many days, so that their amplitude and timing are quite different and vary over short distances of just a few hundred kilometres. The oscillation periods of the Earth as a who
https://en.wikipedia.org/wiki/Non-logical%20symbol
In logic, the formal languages used to create expressions consist of symbols, which can be broadly divided into constants and variables. The constants of a language can further be divided into logical symbols and non-logical symbols (sometimes also called logical and non-logical constants). The non-logical symbols of a language of first-order logic consist of predicates and individual constants. These include symbols that, in an interpretation, may stand for individual constants, variables, functions, or predicates. A language of first-order logic is a formal language over the alphabet consisting of its non-logical symbols and its logical symbols. The latter include logical connectives, quantifiers, and variables that stand for statements. A non-logical symbol only has meaning or semantic content when one is assigned to it by means of an interpretation. Consequently, a sentence containing a non-logical symbol lacks meaning except under an interpretation, so a sentence is said to be true or false under an interpretation. These concepts are defined and discussed in the article on first-order logic, and in particular the section on syntax. The logical constants, by contrast, have the same meaning in all interpretations. They include the symbols for truth-functional connectives (such as "and", "or", "not", "implies", and logical equivalence) and the symbols for the quantifiers "for all" and "there exists". The equality symbol is sometimes treated as a non-logical symbol and sometimes treated as a symbol of logic. If it is treated as a logical symbol, then any interpretation will be required to interpret the equality sign using true equality; if interpreted as a non-logical symbol, it may be interpreted by an arbitrary equivalence relation. Signatures A signature is a set of non-logical constants together with additional information identifying each symbol as either a constant symbol, or a function symbol of a specific arity n (a natural number), or a relation
https://en.wikipedia.org/wiki/Urine%20collection%20device
A urine collection device or UCD is a device that allows the collection of urine for analysis (as in medical or forensic urinalysis) or for purposes of simple elimination (as in vehicles engaged in long voyages and not equipped with toilets, particularly aircraft and spacecraft). UCDs of the latter type are sometimes called piddle packs. Similar devices are used, primarily by men, to manage urinary incontinence. These devices attach to the outside of the penile area and direct urine into a separate collection chamber such as a leg or bedside bag. There are several varieties of external urine collection devices on the market today, including male external catheters (also known as urisheaths or Texas/condom catheters), urinals and hydrocolloid-based devices. External products should not be used by any individual who experiences urinary retention without overflow incontinence. Description A urine collection device allows an individual to empty their bladder into a container hygienically and without spilling urine. Condom catheters Condom catheters, also known as male external catheters, urisheaths, or Texas catheters, are made of silicone or latex (depending on the brand/manufacturer) and cover the penis just like a condom but with an opening at the end to allow the connection to the urine bag. The sheath is worn over the penis and looks like a condom (hence the name). It stays in place by use of an adhesive, which can either be built into the sheath or come as a separate adhesive liner. The urine gets funneled away from the body, keeping the skin dry at all times. The urine runs into a urine bag that is attached at the bottom of the external catheter. During the day, a drainable leg bag can be used, and at night it is recommended to use a large-capacity bedside drainage bag. Male external catheters are designed to be worn 24/7 and changed daily – and can be used by men with both light and severe incontinence. Male external catheters come in several sizes and len
https://en.wikipedia.org/wiki/Ian%20Sommerville%20%28software%20engineer%29
Ian F. Sommerville (born 23 February 1951) is a British academic. He is the author of a popular student textbook on software engineering, as well as a number of other books and papers. He worked as a professor of software engineering at the University of St Andrews in Scotland until 2014 and is a prominent researcher in the field of systems engineering, system dependability and social informatics, being an early advocate of an interdisciplinary approach to system dependability. Education and personal life Ian Sommerville was born in Glasgow, Scotland in 1951. He studied Physics at Strathclyde University and Computer Science at the University of St Andrews. He is married and has two daughters. As an amateur gourmet, he has written a number of restaurant reviews. Academic career Ian Sommerville was a lecturer in Computer Science at Heriot-Watt University in Edinburgh, Scotland from 1975 to 1978 and at Strathclyde University, Glasgow from 1978 to 1986. From 1986 to 2006, he was Professor of Software Engineering in the Computing Department at the University of Lancaster, and in April 2006 he joined the School of Computer Science at St Andrews University, where he taught courses in advanced software engineering and critical systems engineering. He retired in January 2014 and has since continued to do software-related things that he finds interesting. Ian Sommerville's research work, partly funded by the EPSRC, has included systems requirements engineering and system evolution. He defined the process of Construction by configuration (CbC). A major focus has been system dependability, including the use of social analysis techniques such as ethnography to better understand how people and computers deliver dependability. He was a partner in the DIRC (Interdisciplinary Research Collaboration in Dependability) consortium, which focused on dependable systems design and is now (2006) working on the related INDEED (Interdisciplinary Design and Evaluation of Dependability) project
https://en.wikipedia.org/wiki/MRNA%20display
mRNA display is a display technique used for in vitro protein and/or peptide evolution to create molecules that can bind to a desired target. The process results in translated peptides or proteins that are associated with their mRNA progenitor via a puromycin linkage. The complex then binds to an immobilized target in a selection step (affinity chromatography). The mRNA-protein fusions that bind well are then reverse transcribed to cDNA and their sequence amplified via a polymerase chain reaction. The result is a nucleotide sequence that encodes a peptide with high affinity for the molecule of interest. Puromycin is an analogue of the 3’ end of a tyrosyl-tRNA: part of its structure mimics a molecule of adenosine, and the other part mimics a molecule of tyrosine. Compared to the cleavable ester bond in a tyrosyl-tRNA, puromycin has a non-hydrolysable amide bond. As a result, puromycin interferes with translation and causes premature release of translation products. All mRNA templates used for mRNA display technology have puromycin at their 3’ end. As translation proceeds, the ribosome moves along the mRNA template, and once it reaches the 3’ end of the template, the fused puromycin enters the ribosome’s A site and is incorporated into the nascent peptide. The mRNA-polypeptide fusion is then released from the ribosome (Figure 1). To synthesize an mRNA-polypeptide fusion, the fused puromycin is not the only modification to the mRNA template. Oligonucleotides and other spacers need to be recruited along with the puromycin to provide flexibility and proper length for the puromycin to enter the A site. Ideally, the linker between the 3’ end of an mRNA and the puromycin has to be flexible and long enough to allow the puromycin to enter the A site upon translation of the last codon. This enables the efficient production of high-quality, full-length mRNA-polypeptide fusion. Rihe Liu et al. optimized the 3’-puromycin oligonucleotide spacer. They reported that dA25 in
https://en.wikipedia.org/wiki/WAP%20gateway
A WAP gateway sits between mobile devices using the Wireless Application Protocol (WAP) and the World Wide Web, passing pages from one to the other much like a proxy. This translates pages into a form suitable for the mobiles, for instance using the Wireless Markup Language (WML). This process is hidden from the phone, so it may access the page in the same way as a browser accesses HTML, using a URL (for example, http://example.com/foo.wml), provided the mobile phone operator has not specifically prevented this. WAP gateway software encodes and decodes requests and responses between the smartphone's microbrowser and the internet. It decodes the encoded WAP requests from the microbrowser and sends them as HTTP requests to the internet or to a local application server. It also encodes the WML and HDML data returning from the web for transmission to the microbrowser in the handset.
https://en.wikipedia.org/wiki/Literal%20%28mathematical%20logic%29
In mathematical logic, a literal is an atomic formula (also known as an atom or prime formula) or its negation. The definition mostly appears in proof theory (of classical logic), e.g. in conjunctive normal form and the method of resolution. Literals can be divided into two types: A positive literal is just an atom (e.g., ). A negative literal is the negation of an atom (e.g., ). The polarity of a literal is positive or negative depending on whether it is a positive or negative literal. In logics with double negation elimination (where ) the complementary literal or complement of a literal can be defined as the literal corresponding to the negation of . We can write to denote the complementary literal of . More precisely, if then is and if then is . Double negation elimination occurs in classical logics but not in intuitionistic logic. In the context of a formula in the conjunctive normal form, a literal is pure if the literal's complement does not appear in the formula. In Boolean functions, each separate occurrence of a variable, either in inverse or uncomplemented form, is a literal. For example, if , and are variables then the expression contains three literals and the expression contains four literals. However, the expression would also be said to contain four literals, because although two of the literals are identical ( appears twice) these qualify as two separate occurrences. Examples In propositional calculus a literal is simply a propositional variable or its negation. In predicate calculus a literal is an atomic formula or its negation, where an atomic formula is a predicate symbol applied to some terms, with the terms recursively defined starting from constant symbols, variable symbols, and function symbols. For example, is a negative literal with the constant symbol 2, the variable symbols x, y, the function symbols f, g, and the predicate symbol Q.
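The notions of complementary and pure literals defined above can be illustrated with a small sketch (here literals are represented as strings with a leading "~" marking negation, an assumption of this example rather than a standard encoding):

```python
def complement(lit):
    """Complementary literal under double negation elimination: negate a
    positive literal, strip the negation from a negative one."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def is_pure(lit, cnf):
    """In a formula in conjunctive normal form (a list of clauses, each a
    list of literals), a literal is pure if its complement appears in no
    clause of the formula."""
    return all(complement(lit) not in clause for clause in cnf)
```

Note that `complement(complement(l)) == l`, mirroring the double-negation identity; and in the CNF `[["p", "~q"], ["~p", "r"]]` the literal `"r"` is pure while `"p"` is not, since `"~p"` occurs in the second clause.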
https://en.wikipedia.org/wiki/Parry%E2%80%93Romberg%20syndrome
Parry–Romberg syndrome (PRS) is a rare disease characterized by progressive shrinkage and degeneration of the tissues beneath the skin, usually on only one side of the face (hemifacial atrophy) but occasionally extending to other parts of the body. An autoimmune mechanism is suspected, and the syndrome may be a variant of localized scleroderma, but the precise cause and pathogenesis of this acquired disorder remain unknown. It has been reported in the literature as a possible consequence of sympathectomy. The syndrome has a higher prevalence in females and typically appears between 5 and 15 years of age. In addition to the connective tissue disease, the condition is sometimes accompanied by neurological, ocular and oral symptoms. The range and severity of associated symptoms and findings are highly variable. Signs and symptoms Skin and connective tissues Initial facial changes usually involve the area of the face covered by the temporal or buccinator muscles. The disease progressively spreads from the initial location, resulting in atrophy of the skin and its adnexa, as well as underlying subcutaneous structures such as connective tissue (fat, fascia, cartilage, bones) and/or muscles of one side of the face. The mouth and nose are typically deviated towards the affected side of the face. The process may eventually extend to involve tissues between the nose and the upper corner of the lip, the upper jaw, the angle of the mouth, the area around the eye and brow, the ear, and/or the neck. The syndrome often begins with a circumscribed patch of scleroderma in the frontal region of the scalp which is associated with a loss of hair and the appearance of a depressed linear scar extending down through the midface on the affected side. This scar is referred to as a "coup de sabre" lesion because it resembles the scar of a wound made by a sabre, and is indistinguishable from the scar observed in frontal linear scleroderma. In 20% of cases, the hair and skin overlying
https://en.wikipedia.org/wiki/Luna%20E-8-5%20No.%20402
Luna E-8-5 No.402, also known as Luna Ye-8-5 No.402, and sometimes identified by NASA as Luna 1969C, was a Soviet spacecraft under the Luna programme which was lost in a launch failure in 1969. It was a Luna E-8-5 spacecraft, the first of at least eleven to be launched. It was intended to perform a soft landing on the Moon, collect a sample of lunar soil, and return it to the Earth. It was, along with Luna 15, one of two unsuccessful missions which had been launched by the Soviet Union in a last-ditch attempt to upstage the Apollo 11 landing during the Moon race. Luna E-8-5 No.402 was launched at 04:00:07 UTC on 14 June 1969 atop a Proton-K 8K78K carrier rocket with a Blok D upper stage, flying from Site 81/24 at the Baikonur Cosmodrome. The upper stage failed to ignite, and consequently the spacecraft failed to achieve orbit. Prior to the release of information about its mission, NASA correctly identified that it had been an attempted sample return mission. However, they believed that a previous attempt had been made, using a spacecraft launched on 30 April, which had also been lost in a launch failure. They designated that attempt Luna 1969B. No Luna spacecraft or Proton rocket was launched on that date.
https://en.wikipedia.org/wiki/Luna%20E-8-5%20No.%20405
Luna E-8-5 No.405, also known as Luna Ye-8-5 No.405, and sometimes identified by NASA as Luna 1970A, was a Soviet spacecraft which was lost in a launch failure in 1970. It was a Luna E-8-5 spacecraft, the fifth of eight to be launched. It was intended to perform a soft landing on the Moon, collect a sample of lunar soil, and return it to the Earth. Launch Luna E-8-5 No.405 was launched at 04:16:06 UTC on 6 February 1970 atop a Proton-K 8K78K carrier rocket with a Blok-D upper stage, flying from Site 81/23 at the Baikonur Cosmodrome. A defective pressure sensor caused the first stage to shut down 128 seconds after launch. The booster crashed downrange. Prior to the release of information about its mission, NASA correctly identified that it had been an attempted sample return mission.
https://en.wikipedia.org/wiki/Spreadsort
Spreadsort is a sorting algorithm invented by Steven J. Ross in 2002. It combines concepts from distribution-based sorts, such as radix sort and bucket sort, with partitioning concepts from comparison sorts such as quicksort and mergesort. In experimental results it was shown to be highly efficient, often outperforming traditional algorithms such as quicksort, particularly on structured distributions and on string sorting. An open-source implementation with performance analysis, benchmarks, and HTML documentation is available. Quicksort identifies a pivot element in the list and then partitions the list into two sublists, those elements less than the pivot and those greater than the pivot. Spreadsort generalizes this idea by partitioning the list into n/c partitions at each step, where n is the total number of elements in the list and c is a small constant (in practice usually between 4 and 8 when comparisons are slow, or much larger in situations where they are fast). It uses distribution-based techniques to accomplish this, first locating the minimum and maximum value in the list, and then dividing the region between them into n/c equal-sized bins. Where caching is an issue, it can help to have a maximum number of bins in each recursive division step, causing this division process to take multiple steps. Though this causes more iterations, it reduces cache misses and can make the algorithm run faster overall. In the case where the number of bins is at least the number of elements, spreadsort degenerates to bucket sort and the sort completes. Otherwise, each bin is sorted recursively. The algorithm uses heuristic tests to determine whether each bin would be more efficiently sorted by spreadsort or some other classical sort algorithm, then recursively sorts the bin. Like other distribution-based sorts, spreadsort has the weakness that the programmer is required to provide a means of converting each element into a numeric key, for the purpose of identifyin
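The recursive binning step described above can be sketched in Python. This is an illustrative simplification, not the Boost implementation: the constant c, the small-bin cutoff, and the fallback to a comparison sort are assumptions standing in for spreadsort's actual heuristics.

```python
def spreadsort_pass(data, c=6, fallback=sorted):
    """One spreadsort-style distribution step (sketch).

    Divides [min, max] into roughly len(data)/c equal-width bins,
    then recursively sorts each bin, falling back to a comparison
    sort for small bins.
    """
    n = len(data)
    if n <= c:
        return fallback(data)
    lo, hi = min(data), max(data)
    if lo == hi:                      # all keys equal: nothing to do
        return list(data)
    nbins = max(n // c, 2)
    width = (hi - lo) / nbins
    bins = [[] for _ in range(nbins)]
    for x in data:
        # clamp so that x == hi lands in the last bin
        i = min(int((x - lo) / width), nbins - 1)
        bins[i].append(x)
    out = []
    for b in bins:
        out.extend(spreadsort_pass(b, c, fallback))
    return out
```

Because each element is placed by arithmetic on its key rather than by comparisons, a pass over n elements distributes them into n/c bins in linear time, which is where the speedup over pure comparison sorting comes from.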
https://en.wikipedia.org/wiki/Text%20simplification
Text simplification is an operation used in natural language processing to change, enhance, classify, or otherwise process an existing body of human-readable text so that its grammar and structure are greatly simplified while the underlying meaning and information remain the same. Text simplification is an important area of research because of communication needs in an increasingly complex and interconnected world dominated by science, technology, and new media. But natural human languages pose huge problems because they ordinarily contain large vocabularies and complex constructions that machines, no matter how fast and well-programmed, cannot easily process. However, researchers have discovered that, to reduce linguistic diversity, they can use methods of semantic compression to limit and simplify a set of words used in given texts. Example Text simplification is illustrated with an example used by Siddharthan (2006). The first sentence contains two relative clauses and one conjoined verb phrase. A text simplification system aims to change the first sentence into a group of simpler sentences, as seen just below the first sentence. Also contributing to the firmness in copper, the analyst noted, was a report by Chicago purchasing agents, which precedes the full purchasing agents report that is due out today and gives an indication of what the full report might hold. Also contributing to the firmness in copper, the analyst noted, was a report by Chicago purchasing agents. The Chicago report precedes the full purchasing agents report. The Chicago report gives an indication of what the full report might hold. The full report is due out today. One approach to text simplification is lexical simplification via lexical substitution, a two-step process of first identifying complex words and then replacing them with simpler synonyms. A key challenge here is identifying complex words, which is performed by a machine learning classifier trained on labeled data. Researcher
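The two-step lexical substitution just described can be illustrated with a toy sketch. Here a crude length-and-lexicon heuristic stands in for the trained machine-learning classifier, and the synonym table and threshold are invented for illustration:

```python
# Toy lexical simplification. The heuristic and synonym table are
# illustrative assumptions, not a real trained classifier.
SIMPLER = {"utilize": "use", "commence": "begin", "purchase": "buy"}

def is_complex(word, max_len=8):
    # Step 1: identify complex words (here: long, or known-replaceable).
    return len(word) > max_len or word in SIMPLER

def simplify(sentence):
    # Step 2: substitute a simpler synonym where one is known.
    words = sentence.lower().split()
    return " ".join(SIMPLER.get(w, w) if is_complex(w) else w for w in words)

print(simplify("We commence the analysis"))  # we begin the analysis
```

A real system would replace both steps with learned components: a classifier for complexity and a context-aware substitution model, so that, for example, "bank" is not replaced by the wrong synonym.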
https://en.wikipedia.org/wiki/Write%20precompensation
Write precompensation (abbreviated WPcom in the literature) is a technical aspect of the design of hard disks, floppy disks and other digital magnetic recording devices. It is the modification of the analog write signal, shifting transitions somewhat in time, in such a way as to ensure that the signal that will later be read back will be as close as possible to the unmodified write signal. It is required because of the non-linear properties of magnetic recording surfaces. A higher amount of precompensation is needed to write data in sectors that are closer to the center of the disk. In constant angular velocity (CAV) recording, in which the disk spins at a constant speed no matter where the data is written, the sectors closest to the spindle are packed tighter than the outer sectors and so require a slightly different timing to write the data in the most reliable way. CAV recording is used by most floppy disk systems and by older hard disk systems; the term CAV is not applicable to non-circular media, such as magnetic tapes. On magnetic tapes, precompensation is usually constant throughout the tape. History In the past one of the hard disk parameters stored in a PC's CMOS memory was the WPcom number, a marker of the track where stronger precompensation begins, i.e. the transitions are shifted further in time. This was needed by the old MFM and RLL hard disk controllers in common use until the early 1990s. These controllers were usually housed on plug-in cards which could be plugged into the mainboard of the computer; in any case they were external to the actual drive and could deal with many different drives; thus they needed to be told some parameters about the particular drive type in use by the computer. One of these parameters was the WPcom number. This scheme allowed only two different precompensation strengths per disk, a lower one for the outer tracks and a higher one for the inner tracks, however this was enough for the simple low capacity drives of those
https://en.wikipedia.org/wiki/Internet%20band
An Internet band, also called an online band, is a musical group whose members collaborate online through broadband by utilizing a content management system and local digital audio workstations. The work is sometimes released under a Creative Commons license, so musicians can share their "samples" to create collaborative musical expressions for noncommercial purposes without ever meeting face to face. History Cathedral In March 1996, Nora Farrell and William Duckworth began to develop Cathedral, one of the first interactive works of music and art on the World Wide Web. Their aim was to create an interactive website with web-based musical instruments that anyone could play. Also in the preliminary stages of the system, they determined that they wanted to make a place on the Web for acoustic music through a series of live webcasts and performances online. In June 1997, Cathedral went live online. The first build of this system included streaming audio, streaming video, animation, images, and text. At the time, there were fewer than one million sites on the World Wide Web and fewer than 2% of those made any sounds. According to the creators of Cathedral, their goals at that point were: To create an imaginative, ongoing artistic experience that builds community. To blur the distinctions that separate composers, performers, and audiences. To offer each individual listener the ability to create his or her own unique work online. By 2003, Cathedral consisted of three primary components: A website that featured a variety of interactive musical, artistic, and text-based experiences. A group of virtual instruments that allowed listeners to participate actively and creatively. An Internet band that gave periodic live performances and offered listeners focused moments in which to come together and play music in community online. Current scenario Social networking sites have gained a large number of users because many aspects of society revolve around computers a
https://en.wikipedia.org/wiki/Timelike%20simply%20connected
Suppose a Lorentzian manifold contains a closed timelike curve (CTC). No CTC can be continuously deformed as a CTC (is timelike homotopic) to a point, as that point would not be causally well behaved. Therefore, any Lorentzian manifold containing a CTC is said to be timelike multiply connected. A Lorentzian manifold that does not contain a CTC is said to be timelike simply connected. Any Lorentzian manifold which is timelike multiply connected has a diffeomorphic universal covering space which is timelike simply connected. For instance, a three-sphere with a Lorentzian metric is timelike multiply connected, (because any compact Lorentzian manifold contains a CTC), but has a diffeomorphic universal covering space which contains no CTC (and is therefore not compact). By contrast, a three-sphere with the standard metric is simply connected, and is therefore its own universal cover.
https://en.wikipedia.org/wiki/Siau%20scops%20owl
The Siau scops owl (Otus siaoensis) is a critically endangered owl species. It is (or was) a forest dweller of Siau Island, north of Sulawesi, Indonesia. The species is only known from a single holotype from 1866, although there have been more recent potential sightings, including one in 2017. Its habitat is being lost to excessive logging of the forest on the island, and very few individuals, if any, may remain. The taxonomic arrangement for this owl has not been fully worked out. While recognized as a distinct species by the IOC, others consider it a subspecies of either the Sulawesi scops owl or the Moluccan scops owl. On December 14, 2017, a video of a purported Siau scops owl trapped in a building was uploaded to YouTube (see external links). Although it has been the subject of debate, the scientific community has not been able to confirm the bird's identification.
https://en.wikipedia.org/wiki/Vehicle%20routing%20problem
The vehicle routing problem (VRP) is a combinatorial optimization and integer programming problem which asks "What is the optimal set of routes for a fleet of vehicles to traverse in order to deliver to a given set of customers?" It generalises the travelling salesman problem (TSP). It first appeared in a paper by George Dantzig and John Ramser in 1959, in which the first algorithmic approach was described and applied to petrol deliveries. Often, the context is that of delivering goods located at a central depot to customers who have placed orders for such goods. The objective of the VRP is to minimize the total route cost. In 1964, Clarke and Wright improved on Dantzig and Ramser's approach using an effective greedy algorithm called the savings algorithm. Determining the optimal solution to VRP is NP-hard, so the size of problems that can be optimally solved using mathematical programming or combinatorial optimization may be limited. Therefore, commercial solvers tend to use heuristics due to the size and frequency of real world VRPs they need to solve. VRP has many direct applications in industry. Vendors of VRP routing tools often claim that they can offer cost savings of 5%–30%. Setting up the problem The VRP concerns the service of a delivery company. Goods are delivered from one or more depots, each with a given set of home vehicles operated by a set of drivers, over a given road network, to a set of customers. It asks for a determination of a set of routes, S, (one route for each vehicle that must start and finish at its own depot) such that all customers' requirements and operational constraints are satisfied and the global transportation cost is minimized. This cost may be monetary, distance-based, or otherwise. The road network can be described using a graph where the arcs are roads and vertices are junctions between them. The arcs may be directed or undirected due to the possible presence of one-way streets or different costs in each dire
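The Clarke–Wright savings algorithm mentioned above can be sketched as follows. The instance format (Euclidean coordinates, a single depot, a uniform vehicle capacity) and the simple parallel merge strategy are illustrative assumptions; production solvers use far more elaborate variants.

```python
import math

def savings_routes(depot, customers, demand, capacity):
    """Clarke-Wright savings heuristic (parallel version, sketch).

    depot: (x, y); customers: dict name -> (x, y);
    demand: dict name -> load; capacity: vehicle capacity.
    Returns a list of routes (lists of customer names).
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d0 = {c: dist(depot, p) for c, p in customers.items()}
    routes = {c: [c] for c in customers}   # one out-and-back route each
    load = {c: demand[c] for c in customers}

    # Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), largest first.
    names = list(customers)
    savings = sorted(
        ((d0[i] + d0[j] - dist(customers[i], customers[j]), i, j)
         for a, i in enumerate(names) for j in names[a + 1:]),
        reverse=True)

    owner = {c: c for c in customers}      # route each customer belongs to
    for s, i, j in savings:
        ri, rj = owner[i], owner[j]
        if ri == rj or s <= 0:
            continue
        # Merge only if i ends one route and j starts the other.
        if routes[ri][-1] == i and routes[rj][0] == j:
            pass
        elif routes[rj][-1] == j and routes[ri][0] == i:
            ri, rj, i, j = rj, ri, j, i
        else:
            continue
        if load[ri] + load[rj] > capacity:
            continue
        routes[ri] += routes[rj]
        load[ri] += load[rj]
        for c in routes[rj]:
            owner[c] = ri
        del routes[rj], load[rj]
    return list(routes.values())
```

On a toy instance with two clusters of customers and a capacity of two stops per vehicle, the heuristic merges each cluster into one route, which is exactly the "savings" intuition: serving nearby customers on the same trip avoids two separate returns to the depot.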
https://en.wikipedia.org/wiki/Surface%20photovoltage
Surface photovoltage (SPV) measurements are a widely used method to determine the minority carrier diffusion length of semiconductors. Since the transport of minority carriers determines the behavior of the p-n junctions that are ubiquitous in semiconductor devices, surface photovoltage data can be very helpful in understanding their performance. As a contactless method, SPV is a popular technique for characterizing poorly understood compound semiconductors where the fabrication of ohmic contacts or special device structures may be difficult. Theory As the name suggests, SPV measurements involve monitoring the potential of a semiconductor surface while generating electron-hole pairs with a light source. The surfaces of semiconductors are often depletion regions (or space charge regions) where a built-in electric field due to defects has swept out mobile charge carriers. A reduced carrier density means that the electronic energy band of the majority carriers is bent away from the Fermi level. This band-bending gives rise to a surface potential. When a light source creates electron-hole pairs deep within the semiconductor, they must diffuse through the bulk before reaching the surface depletion region. The photogenerated minority carriers have a shorter diffusion length than the much more numerous majority carriers, with which they can radiatively recombine. The change in surface potential upon illumination is therefore a measure of the ability of minority carriers to reach the surface, namely the minority carrier diffusion length. As always in diffusive processes, the diffusion length L is approximately related to the lifetime τ by the expression L = √(Dτ), where D is the diffusion coefficient. The diffusion length is independent of any built-in fields in contrast to the drift behavior of the carriers. Note that the photogenerated majority carriers will also diffuse towards the surface but their number as a fraction of the thermally generated majority carrier d
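The relation L = √(Dτ) can be checked with a quick numerical example. The material values below are illustrative assumptions (typical textbook figures for electrons in p-type silicon), not data from SPV measurements:

```python
import math

# Minority-carrier diffusion length L = sqrt(D * tau).
# Assumed illustrative values: D ~ 36 cm^2/s, tau ~ 10 microseconds.
D = 36.0        # diffusion coefficient, cm^2/s
tau = 10e-6     # minority carrier lifetime, s

L = math.sqrt(D * tau)          # cm
print(f"L = {L * 1e4:.0f} um")  # L = 190 um
```

A diffusion length of this order means photogenerated carriers created within roughly 190 µm of the surface have a good chance of reaching the depletion region, which is what the SPV signal probes.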
https://en.wikipedia.org/wiki/Registered%20user
A registered user is a user of a website, program, or other system who has previously registered. Registered users normally provide some sort of credentials (such as a username or e-mail address, and a password) to the system in order to prove their identity: this is known as logging in. Systems intended for use by the general public often allow any user to register simply by selecting a register or sign up function and providing these credentials for the first time. Registered users may be granted privileges beyond those granted to unregistered users. Rationale User registration and login enables a system to personalize itself. For example, a website might display a welcome banner with the user's name and change its appearance or behavior according to preferences indicated by the user. The system may also allow a logged-in user to send and receive messages, and to view and modify personal files or other information. Criticism Privacy concerns Registration necessarily provides more personal information to a system than it would otherwise have. Even if the credentials used are otherwise meaningless, the system can distinguish a logged-in user from other users and might use this property to store a history of users' actions or activity, possibly without their knowledge or consent. While many systems have privacy policies, depending on the nature of the system, a user might not have any way of knowing for certain exactly what information is stored, how it is used, and with whom, if anyone, it is shared. A system could even sell information it has gathered on its users to third parties for advertising or other purposes. The subject of systems' transparency in this regard is one of ongoing debate. User inconvenience Registration may be seen as an annoyance or hindrance, especially if it is not inherently necessary or important (for example, in the context of a search engine) or if the system repeatedly prompts users to register. A system's registration process mig
https://en.wikipedia.org/wiki/Reproductive%20suppression
Reproductive suppression is the prevention or inhibition of reproduction in otherwise healthy adult individuals. It includes delayed sexual maturation (puberty), inhibition of sexual receptivity, facultatively increased interbirth intervals through delayed or inhibited ovulation or through spontaneous or induced abortion, abandonment of immature and dependent offspring, mate guarding, selective destruction and worker policing of eggs in some eusocial insects or cooperatively breeding birds, and infanticide (see also infanticide (zoology)) of the offspring of subordinate females, carried out either directly, by dominant females or males in mammals, or indirectly, through the withholding of assistance with infant care in marmosets and some carnivores. The Reproductive Suppression Model argues that "females can optimize their lifetime reproductive success by suppressing reproduction when future (physical or social) conditions for the survival of offspring are likely to be greatly improved over present ones". When intragroup competition (competition between individuals belonging to the same group) is high, it may be beneficial to suppress the reproduction of others, and for subordinate females to suppress their own reproduction until a later time when social competition is reduced. This leads to reproductive skew within a social group, with some individuals having more offspring than others. The cost of reproductive suppression to the individual is lowest at the earliest stages of a reproductive event and greatest after a birth; reproductive suppression is therefore often easiest to induce at the pre-ovulatory or earliest stages of pregnancy in mammals. Accordingly, neuroendocrine cues for assessing reproductive success should evolve to be reliable at early stages in the ovulatory cycle. Reproductive suppression occurs in its most extreme form in eusocial insects such as termites, hornets and bees and in the mammalian naked mole rat, which depend on a complex d
https://en.wikipedia.org/wiki/Salt%20equivalent
Salt equivalent is usually quoted on food nutrition information tables on food labels, and is a different way of expressing sodium content, noting that salt is chemically sodium chloride. To convert from sodium to the approximate salt equivalent, multiply the sodium content by 2.5. This factor is the ratio of the molar mass of sodium chloride (about 58.44 g/mol) to that of sodium (about 22.99 g/mol), approximately 2.54 (see: atomic mass and molecular mass). Sources British Nutrition Foundation article on salt Further reading Nutrition Equivalent units
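The conversion can be expressed in a couple of lines; the function name below is an assumption for illustration, and the exact molar-mass ratio is shown alongside the rounded labelling factor of 2.5:

```python
# Salt (NaCl) equivalent from sodium content.
# Factor = M(NaCl) / M(Na) = 58.44 / 22.99, approximately 2.54,
# commonly rounded to 2.5 on food labels.
M_NACL, M_NA = 58.44, 22.99

def salt_equivalent(sodium_g, factor=M_NACL / M_NA):
    return sodium_g * factor

print(round(salt_equivalent(0.4), 2))   # 0.4 g sodium -> 1.02 g salt
print(salt_equivalent(0.4, 2.5))        # label convention -> 1.0 g
```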
https://en.wikipedia.org/wiki/Solidarity%20logo
The Solidarity logo designed by Jerzy Janiszewski in 1980 is considered as an important example of Polish Poster School creations. The logo was awarded the Grand Prix of the Biennale of Posters, Katowice 1981. By that time it was already well known in Poland and had become an internationally recognized icon. According to the artist, the letters were designed to represent united individuals. This characteristic font, colloquially known as solidaryca ("Solidaric"), was implemented many times in posters and other pieces of art in different contexts. Notable examples include a film poster for Man of Iron by Andrzej Wajda and, in 1989, a poster by Tomasz Sarnecki designed for the first (semi-)free elections in Poland.
https://en.wikipedia.org/wiki/Canberra%20Ornithologists%20Group
The Canberra Ornithologists Group (COG) was founded on 15 April 1970 when the ACT branch of the Royal Australasian Ornithologists Union (RAOU) became defunct following drastic reform within the RAOU in the late 1960s which abolished all its branches. It publishes a quarterly journal, Canberra Bird Notes, as well as a monthly newsletter, Gang-gang. Its aims are to: encourage interest in, and develop knowledge of, the birds of the Canberra region promote and co-ordinate the study of birds promote the conservation of native birds and their habitat COG holds monthly meetings in Canberra as well as regular field excursions. The logo of COG is the gang-gang cockatoo.
https://en.wikipedia.org/wiki/S-knot
In loop quantum gravity, an s-knot is an equivalence class of spin networks under diffeomorphisms. In this formalism, s-knots represent the quantum states of the gravitational field. External links Living Reviews in Relativity: Loop Quantum Gravity: Diffeomorphism invariance Loop quantum gravity
https://en.wikipedia.org/wiki/Thomas%20Kirkman
Thomas Penyngton Kirkman FRS (31 March 1806 – 3 February 1895) was a British mathematician and ordained minister of the Church of England. Despite being primarily a churchman, he maintained an active interest in research-level mathematics, and was listed by Alexander Macfarlane as one of ten leading 19th-century British mathematicians. In the 1840s, he obtained an existence theorem for Steiner triple systems that founded the field of combinatorial design theory, while the related Kirkman's schoolgirl problem is named after him. Early life and education Kirkman was born 31 March 1806 in Bolton, in the north west of England, the son of a local cotton dealer. In his schooling at the Bolton Grammar School, he studied classics, but no mathematics was taught in the school. He was recognised as the best scholar at the school, and the local vicar guaranteed him a scholarship at Cambridge, but his father would not allow him to go. Instead, he left school at age 14 to work in his father's office. Nine years later, defying his father, he went to Trinity College Dublin, working as a private tutor to support himself during his studies. There, among other subjects, he first began learning mathematics. He earned a B.A. in 1833 and returned to England in 1835. Ordination and ministry On his return to England, Kirkman was ordained into the ministry of the Church of England and became the curate in Bury and then in Lymm. In 1839 he was invited to become rector of Croft with Southworth, a newly founded parish in Lancashire, where he would stay for 52 years until his retirement in 1892. Theologically, Kirkman supported the anti-literalist position of John William Colenso, and was also strongly opposed to materialism. He published many tracts and pamphlets on theology, as well as a book Philosophy Without Assumptions (1876). Kirkman married Eliza Wright in 1841; they had seven children. To support them, Kirkman supplemented his income with tutoring, until Eliza inherited enough prop
https://en.wikipedia.org/wiki/Vascular%20permeability
Vascular permeability, often in the form of capillary permeability or microvascular permeability, characterizes the capacity of a blood vessel wall to allow for the flow of small molecules (drugs, nutrients, water, ions) or even whole cells (lymphocytes on their way to the site of inflammation) in and out of the vessel. Blood vessel walls are lined by a single layer of endothelial cells. The gaps between endothelial cells (cell junctions) are strictly regulated depending on the type and physiological state of the tissue. There are several techniques to measure vascular permeability to certain molecules. For instance, in the cannulation of a single microvessel with a micropipette, the microvessel is perfused at a certain pressure and occluded downstream; the velocity of some tracer cells is then related to the permeability. Another technique uses multiphoton fluorescence intravital microscopy, through which the flow is related to fluorescence intensity and the permeability is estimated from the Patlak transformation. An example of increased vascular permeability is in the initial lesion of periodontal disease, in which the gingival plexus becomes engorged and dilated, allowing large numbers of neutrophils to extravasate and appear within the junctional epithelium and underlying connective tissue.
https://en.wikipedia.org/wiki/Gambling%20mathematics
The mathematics of gambling is a collection of probability applications encountered in games of chance and can be included in game theory. From a mathematical point of view, the games of chance are experiments generating various types of aleatory events, and their probabilities can be calculated using the properties of probability on a finite space of possibilities. Experiments, events, and probability spaces The technical processes of a game stand for experiments that generate aleatory events. Here are a few examples: Events can be defined in many ways; however, when formulating a probability problem, they must be defined extremely carefully. From a mathematical point of view, the events are nothing more than subsets, and the space of events is a Boolean algebra. We find elementary and compound events, exclusive and nonexclusive events, and independent and non-independent events. In the experiment of rolling a die: Event {3, 5} (whose literal definition is the occurrence of 3 or 5) is compound because {3, 5} = {3} U {5}; Events {1}, {2}, {3}, {4}, {5}, {6} are elementary; Events {3, 5} and {4} are incompatible or exclusive because their intersection is empty; that is, they cannot occur simultaneously; Events {1, 2, 5} and {2, 5} are nonexclusive, because their intersection is not empty; In the experiment of rolling two dice one after another, the events obtaining "3" on the first die and obtaining "5" on the second die are independent because the occurrence of the first does not influence the occurrence of the second event, and vice versa. In the experiment of dealing the pocket cards in Texas Hold'em Poker: The event of dealing (3♣, 3♦) to a player is an elementary event; The event of dealing two 3's to a player is compound because it is the union of events (3♣, 3♠), (3♣, 3♥), (3♣, 3♦), (3♠, 3♥), (3♠, 3♦) and (3♥, 3♦); The events "player 1 is dealt a pair of kings" and "player 2 is dealt a pair of kings" are nonexclusive (they can both occur); The events play
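The die-rolling events above can be verified directly by treating events as subsets of a finite sample space; this sketch uses exact rational arithmetic, and the helper name prob is an assumption for illustration:

```python
from fractions import Fraction
from itertools import product

# Sample space for one die roll; events are subsets.
die = frozenset(range(1, 7))

def prob(event, space):
    return Fraction(len(event & space), len(space))

e35 = {3, 5}                       # compound event {3} U {5}
e4 = {4}
assert e35 & e4 == set()           # exclusive: empty intersection
print(prob(e35, die))              # 1/3

# Two successive rolls: P(first is 3 and second is 5).
two = set(product(die, repeat=2))
a = {r for r in two if r[0] == 3}
b = {r for r in two if r[1] == 5}
p_ab = Fraction(len(a & b), len(two))
assert p_ab == prob({3}, die) * prob({5}, die)   # independence
print(p_ab)                        # 1/36
```

The final assertion is exactly the independence property stated in the text: the joint probability factors into the product of the individual probabilities.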
https://en.wikipedia.org/wiki/C.%20Olin%20Ball
Charles Olin Ball (1893–1970) was an American food scientist and inventor who was involved in the thermal death time studies in the food canning industry during the early 1920s. This research was used as standard by the United States Food and Drug Administration for calculating thermal processes in canning. He was also a charter member of the Institute of Food Technologists in 1939 and inducted among the first class of its fellows in 1970 for his work in academia and industry. Biography A native of Abilene, Kansas, Ball earned his BS in mathematics before going to graduate school at George Washington University from 1919 to 1922. While at George Washington University, he worked for the National Canners Association by researching the sterilization of canned foods. Ball's formula method of thermal death time became the standard of the United States Food and Drug Administration for calculating thermal processes. After earning his PhD from George Washington University in 1922, Ball worked with the American Can Company in Illinois and New York where he earned 29 patents. He worked at Owens-Illinois Glass Company from 1944 to 1946 before going to Rutgers University as a professor and later chair of the food science department during 1949–1963. Institute of Food Technologists involvement Ball was a charter member of Institute of Food Technologists when it was founded in 1939, and its president in 1963–64. He won the Nicholas Appert Award in 1947, and was among the first class of 27 fellows inducted in 1970. Ball was the first editor-in-chief of Food Technology from 1947 to 1950. Death and legacy Ball died in 1970. Rutgers' food science department established an undergraduate scholarship in his honor for those students majoring in food science who excel in food engineering courses. Selected work Ball, C.O. and F.C.W. Olson (1957). Sterilization in Food Technology. 1st Edition. New York: McGraw-Hill Book Company.
https://en.wikipedia.org/wiki/Description%20number
Description numbers are numbers that arise in the theory of Turing machines. They are very similar to Gödel numbers, and are also occasionally called "Gödel numbers" in the literature. Given some universal Turing machine, every Turing machine can, given its encoding on that machine, be assigned a number. This is the machine's description number. These numbers play a key role in Alan Turing's proof of the undecidability of the halting problem, and are very useful in reasoning about Turing machines as well. An example of a description number Say we had a Turing machine M with states q1, ... qR, with a tape alphabet with symbols s1, ... sm, with the blank denoted by s0, and transitions giving the current state, current symbol, and actions performed (which might be to overwrite the current tape symbol and move the tape head left or right, or maybe not move it at all), and the next state. Under the original universal machine described by Alan Turing, this machine would be encoded as input to it as follows: The state qi is encoded by the letter 'D' followed by the letter 'A' repeated i times (a unary encoding) The tape symbol sj is encoded by the letter 'D' followed by the letter 'C' repeated j times The transitions are encoded by giving the state, input symbol, symbol to write on the tape, direction to move (expressed by the letters 'L', 'R', or 'N', for left, right, or no movement), and the next state to enter, with states and symbols encoded as above. The UTM's input thus consists of the transitions separated by semicolons, so its input alphabet consists of the seven symbols, 'D', 'A', 'C', 'L', 'R', 'N', and ';'. For example, for a very simple Turing machine that alternates printing 0 and 1 on its tape forever: State: q1, input symbol: blank, action: print 1, move right, next state: q2 State: q2, input symbol: blank, action: print 0, move right, next state: q1 Letting the blank be s0, '0' be s1 and '1' be s2, the machine would be encoded by the UTM as:
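The encoding rules just given can be applied mechanically. The helper names and the trailing-semicolon convention in this sketch are assumptions; transitions are written as (state, read symbol, written symbol, move, next state) with states and symbols given by index:

```python
def enc_state(i):    # q_i -> 'D' followed by i copies of 'A'
    return "D" + "A" * i

def enc_symbol(j):   # s_j -> 'D' followed by j copies of 'C'
    return "D" + "C" * j

def encode(transitions):
    """Encode transitions (q, read, write, move, next) for the UTM,
    with move one of 'L', 'R', 'N'."""
    return ";".join(
        enc_state(q) + enc_symbol(r) + enc_symbol(w) + m + enc_state(n)
        for q, r, w, m, n in transitions) + ";"

# The alternating 0/1 printer from the article: q1 reads blank (s0),
# prints 1 (s2), moves right, goes to q2; q2 reads blank, prints 0
# (s1), moves right, goes back to q1.
machine = [(1, 0, 2, "R", 2), (2, 0, 1, "R", 1)]
print(encode(machine))   # DADDCCRDAA;DAADDCRDA;
```

Replacing each of the seven letters in the resulting string by a fixed digit then yields the machine's description number, which is how a single natural number comes to stand for a Turing machine.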
https://en.wikipedia.org/wiki/Orbicule
Orbicules (syn. Ubisch bodies, con-peito grains) are small acellular structures of sporopollenin that may occur on the inner tangential and radial walls of tapetal cells. The ornamentation of the orbicule surface often resembles that of the pollen sexine. Different hypotheses about their function have been proposed, including that they are merely a by-product of pollen wall sporopollenin synthesis. Discovery In 1865, Rosanoff published observations on anthers of Fabaceae species in which he noticed small granules on the inner locule wall that were resistant to concentrated sulphuric acid. Von Ubisch and Kosmath independently provided the first records of species with and without orbicules and indicated that orbicules are restricted to a secretory tapetum type. Von Ubisch concluded that orbicules are homologous with the pollen exine, as both showed the same reaction to chemicals and stains and they developed synchronously. Both von Ubisch and Kosmath are considered pioneers in orbicule research. Name Rosanoff used the terms Körnchen und Tröpfchen, while von Ubisch used Plättchen and Kosmath used kutikulaähnliche Tapetumzellmembran. The term Ubisch body was introduced by Rowley. This name was however later rejected by Heslop-Harrison because the bodies were not discovered by von Ubisch. In early Japanese literature, they are sometimes called con-peito grains. However, the most commonly used name is orbicule, which was coined by Erdtman and colleagues. Morphology Orbicules are morphologically variable. Their size ranges from < 1 μm to 15 μm, but they are usually smaller than 1 μm. Within a single species, orbicule size may vary. There is also variation in the shape of orbicules; they can be spherical, irregular, doughnut-shaped, etc. The orbicular wall can be smooth or ornamented (e.g. with microgranules or microspines) and this ornamentation often shows a striking similarity with the exine ornamentation of the pollen grain. Orbicules are resistant to acetolysis and
https://en.wikipedia.org/wiki/Ty5%20retrotransposon
Ty5 is a retrotransposon native to the yeast Saccharomyces cerevisiae. The Saccharomyces cerevisiae retrotransposon Ty5 Ty5 is one of five endogenous retrotransposons native to the model organism Saccharomyces cerevisiae, all of which target integration to gene-poor regions. Endogenous retrotransposons are hypothesized to target gene-poor chromosomal regions in order to reduce the chance of inactivating host genes. Ty1-Ty4 integrate upstream of Pol III promoters, while Ty5 targets integration to loci bound in heterochromatin. In the case of Ty5, this likely occurs by means of an interaction between the C-terminus of integrase and a target protein. The tight targeting patterns seen for the Ty elements are thought to be a means to limit damage to their host, which has a very gene-dense genome. Ty5 was discovered in the mid 1990s in the laboratory of Daniel Voytas at Iowa State University. The Ty5 retrotransposon is used as a genetic model to study the architecture and dynamics of telomeres and heterochromatin. Yeast heterochromatin and Ty5 Heterochromatin in S. cerevisiae is composed of a wide array of proteins and plays several roles. The first stage of heterochromatin formation requires DNA binding proteins, which interact with specific cis DNA sequences at the telomeres, rDNA and HM loci. These proteins, including Rap1p and the origin recognition complex (ORC), serve as a platform for other proteins to bind, condense the DNA, and modify neighboring histones. Some of these proteins, notably Rap1p, also play other roles, including initiation of transcription. The first known step in the formation of dedicated heterochromatin is the binding of Sir4p to Rap1p (Luo, Vega-Palas et al. 2002). Sir4p is one of four 'Silent Information Regulator' proteins that also include Sir1p, Sir2p and Sir3p. Of these, Sir2p, Sir3p and Sir4p form the core of heteroc
https://en.wikipedia.org/wiki/PayScale
Payscale is an American compensation software and data company which helps employers manage employee compensation and employees understand their worth in the job market. History The website was launched on January 1, 2002. It was founded by Joe Giordano and John Gaffney. Mike Metzger served as CEO from 2004 to 2019. Scott Torrey, a 20-year veteran of SAP Concur, started as CEO on August 26, 2019 and stepped down on November 16, 2021. The current CEO of PayScale is Alex Hart. On April 24, 2014, Warburg Pincus acquired Payscale in a deal worth up to $100 million. On April 25, 2019, Francisco Partners announced a majority investment in Payscale at an enterprise value of $325 million. Overview Payscale was developed to help people and businesses obtain accurate, real-time information on job market compensation. While Payscale started by crowdsourcing compensation data from employees to power its products for employers, its Software as a Service offerings have evolved to allow businesses to utilize multiple compensation data sources, including Payscale's Crowdsourced and Company Sourced offerings as well as data from other providers. Customers can also manage their employee compensation strategy and structure within the platform and perform robust compensation analytics. For employees, the service works via the Internet by enabling individuals to submit their job profile and salary data, which is then compared to others like them. They receive a free report on their market worth. The company generates revenue by selling SaaS subscriptions, compensation data and services to employers, to aid in determining correct market rates for hiring, benchmarking and budgeting, and by targeted advertising to employees that visit its website. Payscale surveys its users' income and background, and since 2007, it has published an annual ranking of American colleges and universities by their estimated return on investment. The rankings have been popular with the public but cont
https://en.wikipedia.org/wiki/Okta
In meteorology, an okta is a unit of measurement used to describe the amount of cloud cover at any given location such as a weather station. Sky conditions are estimated in terms of how many eighths of the sky are covered in cloud, ranging from 0 oktas (completely clear sky) through to 8 oktas (completely overcast). In addition, in the SYNOP code there is an extra cloud cover indicator '9' indicating that the sky is totally obscured (i.e. hidden from view), usually due to dense fog or heavy snow. When used in weather charts, okta measurements are shown by means of graphic symbols (rather than numerals) contained within weather circles, to which are attached further symbols indicating other measured data such as wind speed and wind direction. Although relatively straightforward to measure (visually, for instance, by using a mirror), oktas only estimate cloud cover in terms of the area of the sky covered by clouds. They do not account for cloud type or thickness, and this limits their use for estimating cloud albedo or surface solar radiation receipt. Cloud oktas can also be measured using satellite imagery from geostationary satellites equipped with high-resolution image sensors such as Himawari-8. As with traditional approaches, satellite images do not account for cloud composition. Oktas are often referenced in aviation weather forecasts and low-level forecasts: SKC = Sky clear (0 oktas); FEW = Few (1 to 2 oktas); SCT = Scattered (3 to 4 oktas); BKN = Broken (5 to 7 oktas); OVC = Overcast (8 oktas); NSC = nil significant cloud; CAVOK = ceiling and visibility okay. Hand-drawn maps In the early 20th century, it was common for weather maps to be hand drawn. The symbols for cloud cover on these maps, like the modern symbols, were drawn inside the circle marking the position of the weather station making the measurements. Unlike the modern symbols, these consisted of straight lines only rather than filled-in blocks, which would have been less practical
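The okta bands listed above translate directly into a lookup. A minimal Python sketch (the function name and the "OBSC" label for the SYNOP '9' case are illustrative choices, not from any aviation standard or library):

```python
def cloud_amount(oktas: int) -> str:
    """Return the aviation cloud-amount abbreviation for a cover in oktas."""
    if oktas == 0:
        return "SKC"   # sky clear
    if 1 <= oktas <= 2:
        return "FEW"   # few
    if 3 <= oktas <= 4:
        return "SCT"   # scattered
    if 5 <= oktas <= 7:
        return "BKN"   # broken
    if oktas == 8:
        return "OVC"   # overcast
    if oktas == 9:
        return "OBSC"  # sky obscured (SYNOP indicator '9'); label chosen here
    raise ValueError("oktas must be an integer from 0 to 9")

print(cloud_amount(4))  # prints "SCT"
```

The NSC and CAVOK codes are omitted because they encode forecast conditions rather than a single okta value.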
https://en.wikipedia.org/wiki/Warburg%20hypothesis
The Warburg hypothesis (), sometimes known as the Warburg theory of cancer, postulates that the driver of tumorigenesis is an insufficient cellular respiration caused by insult to mitochondria. The term Warburg effect in oncology describes the observation that cancer cells, and many cells grown in vitro, exhibit glucose fermentation even when enough oxygen is present to properly respire. In other words, instead of fully respiring in the presence of adequate oxygen, cancer cells ferment. The Warburg hypothesis was that the Warburg effect was the root cause of cancer. The current popular opinion is that cancer cells ferment glucose while keeping up the same level of respiration that was present before the process of carcinogenesis, and thus the Warburg effect would be defined as the observation that cancer cells exhibit glycolysis with lactate production and mitochondrial respiration even in the presence of oxygen. Hypothesis The hypothesis was postulated by the Nobel laureate Otto Heinrich Warburg in 1924. He hypothesized that cancer, malignant growth, and tumor growth are caused by the fact that tumor cells mainly generate energy (as e.g., adenosine triphosphate / ATP) by non-oxidative breakdown of glucose (a process called glycolysis). This is in contrast to healthy cells which mainly generate energy from oxidative breakdown of pyruvate. Pyruvate is an end-product of glycolysis, and is oxidized within the mitochondria. Hence, according to Warburg, carcinogenesis stems from the lowering of mitochondrial respiration. Warburg regarded the fundamental difference between normal and cancerous cells to be the ratio of glycolysis to respiration; this observation is also known as the Warburg effect. In the somatic mutation theory of cancer, malignant proliferation is caused by mutations and altered gene expression, in a process called malignant transformation, resulting in an uncontrolled growth of cells. The metabolic difference observed by Warburg adapts cancer cell
https://en.wikipedia.org/wiki/Basu%27s%20theorem
In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu. It is often used in statistics as a tool to prove independence of two statistics, by first demonstrating one is complete sufficient and the other is ancillary, then appealing to the theorem. An example of this is to show that the sample mean and sample variance of a normal distribution are independent statistics, which is done in the Example section below. This property (independence of sample mean and sample variance) characterizes normal distributions. Statement Let $(P_\theta; \theta \in \Theta)$ be a family of distributions on a measurable space $(X, \mathcal{X})$, and let $T$ be a statistic mapping from $(X, \mathcal{X})$ to some measurable space $(Y, \mathcal{Y})$. If $T$ is a boundedly complete sufficient statistic for $\theta$, and $A$ is ancillary to $\theta$, then conditional on $\theta$, $T$ is independent of $A$. That is, $T \perp A \mid \theta$. Proof Let $P_\theta^T$ and $P_\theta^A$ be the marginal distributions of $T$ and $A$ respectively. Denote by $A^{-1}(B)$ the preimage of a set $B$ under the map $A$. For any measurable set $B$ we have $P_\theta^A(B) = P_\theta(A^{-1}(B)) = \int_Y P_\theta(A^{-1}(B) \mid T = t)\, P_\theta^T(dt)$. The distribution $P_\theta^A(B)$ does not depend on $\theta$ because $A$ is ancillary. Likewise, $P_\theta(A^{-1}(B) \mid T = t)$ does not depend on $\theta$ because $T$ is sufficient. Therefore $\int_Y \big[P(A^{-1}(B) \mid T = t) - P^A(B)\big]\, P_\theta^T(dt) = 0$. Note the integrand (the function inside the integral) is a function of $t$ and not $\theta$. Therefore, since $P_\theta^T$ is boundedly complete, the function $g(t) = P(A^{-1}(B) \mid T = t) - P^A(B)$ is zero for $P_\theta^T$-almost all values of $t$, and thus $P(A^{-1}(B) \mid T = t) = P^A(B)$ for almost all $t$. Therefore, $A$ is independent of $T$. Example Independence of sample mean and sample variance of a normal distribution Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed normal random variables with mean $\mu$ and variance $\sigma^2$. Then with respect to the parameter $\mu$, one can show that the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is a complete and sufficient statistic – it is all the information one can derive to estimate $\mu$, and no more – and the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2$ is an ancillary statistic – its distribution does not depend on $\mu$. Therefore, from Basu's theorem it follows that these statistics are independent conditional on $\mu$ and $\sigma^2$.
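The mean/variance example can be illustrated numerically. The following sketch (an illustration, not a proof) draws many i.i.d. normal samples and checks that the sample mean and the sample variance are empirically uncorrelated, as independence implies:

```python
import numpy as np

# For each of `reps` experiments, draw n i.i.d. N(2, 3^2) observations and
# compute the sample mean and the (unbiased) sample variance. Basu's theorem
# says these two statistics are independent, so their sample correlation
# across experiments should be near zero (up to Monte Carlo noise).
rng = np.random.default_rng(0)
n, reps = 10, 20000
samples = rng.normal(loc=2.0, scale=3.0, size=(reps, n))
means = samples.mean(axis=1)
variances = samples.var(axis=1, ddof=1)  # ddof=1 gives the n-1 denominator
corr = np.corrcoef(means, variances)[0, 1]
print(abs(corr) < 0.05)  # prints True: correlation is statistically zero
```

For a non-normal population (e.g. exponential), the same experiment yields a clearly nonzero correlation, consistent with the characterization of normality mentioned above.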
https://en.wikipedia.org/wiki/Suns%20in%20alchemy
In alchemical and Hermetic traditions, suns are used to symbolize a variety of concepts, much like the Sun in astrology. Suns can correspond to gold, citrinitas, generative masculine principles, imagery of "the king", or Apollo, the fiery spirit or sulfur, the divine spark in man, nobility, or incorruptibility. Recurring images of specific solar motifs can be found in the form of a "dark" or "black sun", or a green lion devouring the Sun. Sol niger Sol niger (black sun) can refer to the first stage of the alchemical magnum opus, the nigredo (blackness). In a text ascribed to Marsilio Ficino three suns are described: black, white, and red, corresponding to the three most used alchemical color stages. The black sun is used to illuminate the dissolution of the body, a blackening of matter, or putrefaction in Splendor Solis, and Johann Daniel Mylius's Philosophia Reformata. See also Alchemical symbol Classical planets in Western alchemy Solar symbol
https://en.wikipedia.org/wiki/Windows%20Error%20Reporting
Windows Error Reporting (WER) (codenamed Watson) is a crash reporting technology introduced by Microsoft with Windows XP and included in later Windows versions and Windows Mobile 5.0 and 6.0. Not to be confused with the Dr. Watson debugging tool which left the memory dump on the user's local machine, Windows Error Reporting collects and offers to send post-error debug information (a memory dump) using the Internet to Microsoft when an application crashes or stops responding on a user's desktop. No data is sent without the user's consent. When a crash dump (or other error signature information) reaches the Microsoft server, it is analyzed, and information about a solution is sent back to the user if available. Solutions are served using Windows Error Reporting Responses. Windows Error Reporting runs as a Windows service. Kinshuman Kinshumann is the original architect of WER. WER was also included in the Association for Computing Machinery (ACM) hall of fame for its impact on the computing industry. History Windows XP Microsoft first introduced Windows Error Reporting with Windows XP. Windows Vista Windows Error Reporting was improved significantly in Windows Vista, when public APIs were introduced for reporting failures other than application crashes and hangs. Using the new APIs, as documented on MSDN, developers can create custom reports and customize the reporting user interface. Windows Error Reporting was also revamped with a focus on reliability and user experience. For example, WER can now report errors even from processes in bad states such as stack exhaustions, PEB/TEB corruptions, and heap corruptions, conditions which in releases prior to Windows Vista would have resulted in silent program termination with no error report. A new Control Panel applet, "Problem Reports and Solutions" was also introduced, keeping a record of system and application errors and issues, as well as presenting probable solutions to problems. Windows 7 The Problem Reports and S
https://en.wikipedia.org/wiki/DeviceNet
DeviceNet is a network protocol used in the automation industry to interconnect control devices for data exchange. It utilizes the Common Industrial Protocol over a Controller Area Network media layer and defines an application layer to cover a range of device profiles. Typical applications include information exchange, safety devices, and large I/O control networks. History DeviceNet was originally developed by American company Allen-Bradley (now owned by Rockwell Automation). It is an application layer protocol on top of the CAN (Controller Area Network) technology, developed by Bosch. DeviceNet adapts the technology from the Common Industrial Protocol and takes advantage of CAN, making it low-cost and robust compared to traditional RS-485-based protocols. In order to promote the use of DeviceNet worldwide, Rockwell Automation adopted an "open" concept and decided to share the technology with third-party vendors. Hence it is now managed by ODVA, an independent organization located in North America. ODVA maintains the DeviceNet specifications and oversees advances to the technology. In addition, ODVA ensures compliance with DeviceNet standards by providing conformance testing and vendor conformity. ODVA later decided to bring DeviceNet back under its predecessor's umbrella and collectively refer to the technology as the Common Industrial Protocol or CIP, which includes the following technologies: EtherNet/IP ControlNet DeviceNet ODVA claims high integrity between the three technologies due to the common protocol adaptation, which makes industrial controls much simpler compared to other technologies. DeviceNet has been standardized as IEC 62026-3. Architecture Technical Overview Of the OSI seven-layer model, DeviceNet defines the physical layer, the data link layer, and the application layer. The network cable carries power in addition to signal, so it can supply power to devices on the network (generally used in small devices, such as photo detectors, limit switches
https://en.wikipedia.org/wiki/Passive%20immunity
In immunology, passive immunity is the transfer of active humoral immunity in the form of ready-made antibodies. Passive immunity can occur naturally, when maternal antibodies are transferred to the fetus through the placenta, and it can also be induced artificially, when high levels of antibodies specific to a pathogen or toxin (obtained from humans, horses, or other animals) are transferred to non-immune persons through blood products that contain antibodies, such as in immunoglobulin therapy or antiserum therapy. Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. Passive immunization can be provided when people cannot synthesize antibodies, and when they have been exposed to a disease that they do not have immunity against. Naturally acquired Maternal passive immunity Maternal passive immunity is a type of naturally acquired passive immunity, and refers to antibody-mediated immunity conveyed to a fetus or infant by its mother. Naturally acquired passive immunity can be provided during pregnancy, and through breastfeeding. In humans, maternal antibodies (MatAb) are passed through the placenta to the fetus by the FcRn receptor on placental cells. This occurs predominantly during the third trimester of pregnancy, and thus is often reduced in babies born prematurely. Immunoglobulin G (IgG) is the only antibody isotype that can pass through the human placenta, and is the most common antibody of the five types of antibodies found in the body. IgG antibodies protect against bacterial and viral infections in the fetus. Immunization is often required shortly following birth to prevent diseases in newborns such as tuberculosis, hepatitis B, polio, and pertussis; however, maternal IgG can inhibit the induction of protective vaccine responses throughout the first year of life. This effect is usually overcome by secondary responses to
https://en.wikipedia.org/wiki/Bianchi%20classification
In mathematics, the Bianchi classification provides a list of all real 3-dimensional Lie algebras (up to isomorphism). The classification contains 11 classes, 9 of which contain a single Lie algebra and two of which contain a continuum-sized family of Lie algebras. (Sometimes two of the groups are included in the infinite families, giving 9 instead of 11 classes.) The classification is important in geometry and physics, because the associated Lie groups serve as symmetry groups of 3-dimensional Riemannian manifolds. It is named for Luigi Bianchi, who worked it out in 1898. The term "Bianchi classification" is also used for similar classifications in other dimensions and for classifications of complex Lie algebras. Classification in dimension less than 3 Dimension 0: The only Lie algebra is the abelian Lie algebra R0. Dimension 1: The only Lie algebra is the abelian Lie algebra R1, with outer automorphism group the multiplicative group of non-zero real numbers. Dimension 2: There are two Lie algebras: (1) The abelian Lie algebra R2, with outer automorphism group GL2(R). (2) The solvable Lie algebra of 2×2 upper triangular matrices of trace 0. It has trivial center and trivial outer automorphism group. The associated simply connected Lie group is the affine group of the line. Classification in dimension 3 All the 3-dimensional Lie algebras other than types VIII and IX can be constructed as a semidirect product of R2 and R, with R acting on R2 by some 2 by 2 matrix M. The different types correspond to different types of matrices M, as described below. Type I: This is the abelian and unimodular Lie algebra R3. The simply connected group has center R3 and outer automorphism group GL3(R). This is the case when M is 0. Type II: The Heisenberg algebra, which is nilpotent and unimodular. The simply connected group has center R and outer automorphism group GL2(R). This is the case when M is nilpotent but not 0 (eigenvalues all 0). Type III: This algebra is a
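The semidirect-product construction described above can be checked numerically. In the sketch below (the representation of elements as triples is an illustrative choice), an element is a pair (v, t) with v in R² and t in R, stored as a 3-vector, and the bracket is [(v1, t1), (v2, t2)] = (t1·Mv2 − t2·Mv1, 0); taking M nilpotent but nonzero yields the type II (Heisenberg) algebra:

```python
import numpy as np

def bracket(x, y, M):
    """Lie bracket on R^2 ⋊ R: [(v1,t1),(v2,t2)] = (t1*M@v2 - t2*M@v1, 0)."""
    v1, t1 = x[:2], x[2]
    v2, t2 = y[:2], y[2]
    return np.append(t1 * M @ v2 - t2 * M @ v1, 0.0)

# Type II (Heisenberg): M nilpotent but not 0.
M = np.array([[0.0, 1.0], [0.0, 0.0]])
e1, e2, e3 = np.eye(3)

print(bracket(e3, e2, M))  # the only nonzero basis bracket: [e3, e2] = e1

# The Jacobi identity holds for any choice of M; spot-check on random elements.
rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 3))
jacobi = (bracket(x, bracket(y, z, M), M)
          + bracket(y, bracket(z, x, M), M)
          + bracket(z, bracket(x, y, M), M))
print(np.allclose(jacobi, 0))  # prints True
```

Substituting other matrices M (zero, trace-free, diagonalizable, etc.) reproduces the other types in the list, since the construction covers every type except VIII and IX.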
https://en.wikipedia.org/wiki/Synuclein
Synucleins are a family of soluble proteins common to vertebrates, primarily expressed in neural tissue and in certain tumors. The name is a blend of the words "synapse" and "nucleus", as it was first found in the synapses in the electromotor nucleus of the electric ray. Family members The synuclein family includes three known proteins: alpha-synuclein, beta-synuclein, and gamma-synuclein. Interest in the synuclein family began when alpha-synuclein was found to be mutated in several families with autosomal dominant Parkinson's disease. All synucleins have in common a highly conserved alpha-helical lipid-binding motif with similarity to the class-A2 lipid-binding domains of the exchangeable apolipoproteins. Synuclein family members are not found outside vertebrates, although they have some conserved structural similarity with plant 'late-embryo-abundant' proteins. Alpha-synuclein Beta-synuclein Gamma-synuclein Function Normal cellular functions have not been determined for any of the synuclein proteins. Some data suggest a role in the regulation of membrane stability and/or turnover. Mutations in alpha-synuclein are associated with early-onset familial Parkinson's disease and the protein aggregates abnormally in Parkinson's disease, Lewy body disease, and other neurodegenerative diseases. The gamma-synuclein protein's expression in breast tumors is a marker for tumor progression. Human proteins containing this domain SNCA; SNCB; SNCG;
https://en.wikipedia.org/wiki/Regular%20solution
In chemistry, a regular solution is a solution whose entropy of mixing is equal to that of an ideal solution with the same composition, but is non-ideal due to a nonzero enthalpy of mixing. Such a solution is formed by random mixing of components of similar molar volume and without strong specific interactions, and its behavior diverges from that of an ideal solution by showing phase separation at intermediate compositions and temperatures (a miscibility gap). Its entropy of mixing is equal to that of an ideal solution with the same composition, due to random mixing without strong specific interactions. For two components $\Delta S_{mix} = -nR(x_1 \ln x_1 + x_2 \ln x_2)$, where $R$ is the gas constant, $n$ the total number of moles, and $x_i$ the mole fraction of each component. Only the enthalpy of mixing is non-zero, unlike for an ideal solution, while the volume of the solution equals the sum of volumes of components. Features A regular solution can also be described by Raoult's law modified with a Margules function with only one parameter $\alpha$: $P_1 = x_1 P_1^{\circ} f_{M,1}$ and $P_2 = x_2 P_2^{\circ} f_{M,2}$, where the Margules functions are $f_{M,1} = e^{\alpha x_2^2}$ and $f_{M,2} = e^{\alpha x_1^2}$. Notice that the Margules function for each component contains the mole fraction of the other component. It can also be shown using the Gibbs–Duhem relation that if the first Margules expression holds, then the other one must have the same shape. A regular solution's internal energy varies during mixing. The value of $\alpha$ can be interpreted as W/RT, where W = 2U12 − U11 − U22 represents the difference in interaction energy between like and unlike neighbors. In contrast to ideal solutions, regular solutions do possess a non-zero enthalpy of mixing, due to the W term. If the unlike interactions are more unfavorable than the like ones, we get competition between an entropy of mixing term that produces a minimum in the Gibbs free energy at x1 = 0.5 and the enthalpy term that has a maximum there. At high temperatures, the entropic term in the free energy of mixing dominates and the system is fully miscible, but at lower temperatures the G
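The competition between the entropic and enthalpic terms can be made concrete. Combining the ideal entropy of mixing with the enthalpic term gives, in one common dimensionless normalization (an assumption of this sketch), $\Delta G_{mix}/(nRT) = x_1 \ln x_1 + x_2 \ln x_2 + \alpha x_1 x_2$; the curve loses convexity at x₁ = 0.5 exactly when α > 2, which is where the miscibility gap opens:

```python
import numpy as np

def g_mix(x1, alpha):
    """Dimensionless Gibbs free energy of mixing, ΔG_mix/(nRT), for a
    two-component regular solution with interaction parameter α = W/RT."""
    x2 = 1.0 - x1
    return x1 * np.log(x1) + x2 * np.log(x2) + alpha * x1 * x2

def convex_at_half(alpha):
    # Second derivative of g_mix at x1 = 0.5: 1/x1 + 1/x2 - 2α = 4 - 2α,
    # so the curve stays convex (single phase) only while α < 2.
    return 4.0 - 2.0 * alpha > 0.0

print(convex_at_half(1.0))  # prints True:  fully miscible at x = 0.5
print(convex_at_half(3.0))  # prints False: miscibility gap appears
```

The critical value α = 2 corresponds to the critical solution temperature T_c = W/2R in this model, since α = W/RT decreases as T rises.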
https://en.wikipedia.org/wiki/Bernd%20Sturmfels
Bernd Sturmfels (born March 28, 1962 in Kassel, West Germany) is a Professor of Mathematics and Computer Science at the University of California, Berkeley and has been a director of the Max Planck Institute for Mathematics in the Sciences in Leipzig since 2017. Education and career He received his PhD in 1987 from the University of Washington and the Technische Universität Darmstadt. After two postdoctoral years at the Institute for Mathematics and its Applications in Minneapolis, Minnesota, and the Research Institute for Symbolic Computation in Linz, Austria, he taught at Cornell University before joining the University of California, Berkeley in 1995. His Ph.D. students include Melody Chan, Jesús A. De Loera, Mike Develin, Diane Maclagan, Rekha R. Thomas, Caroline Uhler, and Cynthia Vinzant. Contributions Bernd Sturmfels has made contributions to a variety of areas of mathematics, including algebraic geometry, commutative algebra, discrete geometry, Gröbner bases, toric varieties, tropical geometry, algebraic statistics, and computational biology. He has written several highly cited papers in algebra with Dave Bayer. He has authored or co-authored multiple books, including Introduction to Tropical Geometry with Diane Maclagan. Awards and honors Sturmfels' honors include a National Young Investigator Fellowship, an Alfred P. Sloan Fellowship, and a David and Lucile Packard Fellowship. In 1999 he received a Lester R. Ford Award for his expository article Polynomial equations and convex polytopes. He was awarded a Miller Research Professorship at the University of California, Berkeley for 2000–2001. In 2012, he became a fellow of the American Mathematical Society. In 2018, he was awarded the George David Birkhoff Prize in Applied Mathematics.
https://en.wikipedia.org/wiki/Paradoxical%20set
In set theory, a paradoxical set is a set that has a paradoxical decomposition. A paradoxical decomposition of a set is two families of disjoint subsets, along with appropriate group actions that act on some universe (of which the set in question is a subset), such that each family can be mapped back onto the entire set using only finitely many distinct functions (or compositions thereof) to accomplish the mapping. A set that admits such a paradoxical decomposition where the actions belong to a group $G$ is called $G$-paradoxical, or paradoxical with respect to $G$. Paradoxical sets exist as a consequence of the Axiom of Infinity. Admitting infinite classes as sets is sufficient to allow paradoxical sets. Definition Suppose a group $G$ acts on a set $A$. Then $A$ is $G$-paradoxical if there exist disjoint subsets $A_1, \ldots, A_n, B_1, \ldots, B_m \subseteq A$ and group elements $g_1, \ldots, g_n, h_1, \ldots, h_m \in G$ such that $A = \bigcup_{i=1}^{n} g_i(A_i)$ and $A = \bigcup_{j=1}^{m} h_j(B_j)$. Examples Free group The free group $F$ on two generators $a, b$ has the decomposition $F = \{e\} \cup W(a) \cup W(a^{-1}) \cup W(b) \cup W(b^{-1})$, where $e$ is the identity word and $W(i)$ is the collection of all (reduced) words that start with the letter $i$. This is a paradoxical decomposition because $F = W(a) \cup aW(a^{-1})$ and $F = W(b) \cup bW(b^{-1})$. Banach–Tarski paradox The most famous example of paradoxical sets is the Banach–Tarski paradox, which divides the sphere into paradoxical sets for the special orthogonal group. This result depends on the axiom of choice. See also Pathological (mathematics)
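The free-group decomposition F = W(a) ∪ a·W(a⁻¹) can be checked mechanically on all reduced words up to a fixed length. In the sketch below (the string encoding, with 'A' and 'B' standing for the inverse generators, is an implementation choice), left-multiplying a word in W(a⁻¹) by a and reducing recovers exactly the words that do not start with a:

```python
from itertools import product

INV = {"a": "A", "A": "a", "b": "B", "B": "b"}  # inverse of each letter

def reduced_words(max_len):
    """All reduced words over {a, a^-1, b, b^-1} up to max_len, incl. e = ''."""
    words = [""]
    for n in range(1, max_len + 1):
        for w in product("aAbB", repeat=n):
            if all(w[i] != INV[w[i + 1]] for i in range(n - 1)):
                words.append("".join(w))
    return words

def left_mult(g, w):
    """Multiply reduced word w on the left by generator g, then reduce."""
    return w[1:] if w and w[0] == INV[g] else g + w

# Every reduced word lies in exactly one of W(a) and a*W(a^-1):
# either it starts with 'a', or it equals a*v for v = a^-1*w, which is a
# reduced word starting with 'A' (i.e. v is in W(a^-1)).
for w in reduced_words(3):
    in_Wa = w.startswith("a")
    v = left_mult("A", w)                        # v = a^-1 * w
    in_aWA = v.startswith("A") and left_mult("a", v) == w
    assert in_Wa != in_aWA                       # exactly one part contains w
print("F = W(a) ∪ a·W(a⁻¹) verified on all reduced words up to length 3")
```

The symmetric check with b and b⁻¹ verifies the second copy, so two disjoint pieces of F each rebuild all of F using a single group element, which is exactly the paradoxical property in the definition above.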