source | text |
|---|---|
https://en.wikipedia.org/wiki/Mayo%E2%80%93Lewis%20equation | The Mayo–Lewis equation or copolymer equation in polymer chemistry describes the distribution of monomers in a copolymer. It was proposed by Frank R. Mayo and Frederick M. Lewis.
The equation considers a monomer mix of two components M1 and M2 and the four different reactions that can take place at the reactive chain end terminating in either monomer (M1* and M2*) with their reaction rate constants k11, k12, k21 and k22:
M1* + M1 → M1M1* (rate constant k11)
M1* + M2 → M1M2* (rate constant k12)
M2* + M2 → M2M2* (rate constant k22)
M2* + M1 → M2M1* (rate constant k21)
The reactivity ratio for each propagating chain end is defined as the ratio of the rate constant for addition of a monomer of the species already at the chain end to the rate constant for addition of the other monomer: r1 = k11/k12 and r2 = k22/k21.
The copolymer equation is then:
d[M1]/d[M2] = [M1] (r1[M1] + [M2]) / ([M2] ([M1] + r2[M2]))
with the concentrations of the components in square brackets. The equation gives the relative instantaneous rates of incorporation of the two monomers.
Equation derivation
Monomer 1 is consumed with reaction rate:
−d[M1]/dt = k11[M1*][M1] + k21[M2*][M1]
with [M1*] the concentration of all the active chains terminating in monomer 1, summed over chain lengths. [M2*] is defined similarly for monomer 2.
Likewise the rate of disappearance for monomer 2 is:
−d[M2]/dt = k12[M1*][M2] + k22[M2*][M2]
Division of both equations by [M2*] followed by division of the first equation by the second yields:
d[M1]/d[M2] = ([M1]/[M2]) · (k11[M1*]/[M2*] + k21) / (k12[M1*]/[M2*] + k22)
The ratio of active center concentrations can be found using the steady state approximation, meaning that the concentration of each type of active center remains constant.
The rate of formation of active centers of monomer 1 (k21[M2*][M1]) is equal to the rate of their destruction (k12[M1*][M2]) so that
k21[M2*][M1] = k12[M1*][M2]
or
[M1*]/[M2*] = k21[M1] / (k12[M2])
Substituting into the ratio of monomer consumption rates yields the Mayo–Lewis equation after rearrangement:
d[M1]/d[M2] = [M1] (r1[M1] + [M2]) / ([M2] ([M1] + r2[M2]))
Mole fraction form
It is often useful to alter the copolymer equation by expressing concentrations in terms of mole fractions. Mole fractions of monomers M1 and M2 in the feed are defined as f1 and f2, where
f1 = 1 − f2 = [M1] / ([M1] + [M2])
Similarly, F1 and F2 represent the mole fractions of each monomer in the copolymer:
F1 = 1 − F2 = d[M1] / (d[M1] + d[M2])
These equations can be combined with the Mayo–Lewis equation to give
F1 = (r1 f1^2 + f1 f2) / (r1 f1^2 + 2 f1 f2 + r2 f2^2)
This equation gives the composition of copolymer formed at each instant. However the feed and copo |
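As a quick illustration of the mole-fraction form quoted above, the sketch below (not part of the original article; the function name and sample feed composition are chosen only for the example) computes the instantaneous copolymer composition F1 from the feed fraction f1 and the reactivity ratios r1 and r2:

```python
def copolymer_composition(f1, r1, r2):
    """Instantaneous mole fraction F1 of monomer 1 in the copolymer, from the
    feed mole fraction f1 and the reactivity ratios r1 and r2
    (Mayo-Lewis equation in mole-fraction form)."""
    f2 = 1.0 - f1
    return (r1 * f1**2 + f1 * f2) / (r1 * f1**2 + 2 * f1 * f2 + r2 * f2**2)

# With r1 = r2 = 1 every addition is equally likely, so the copolymer
# composition simply reproduces the feed composition.
print(copolymer_composition(0.3, 1.0, 1.0))  # -> 0.3
```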
https://en.wikipedia.org/wiki/Chebyshev%E2%80%93Markov%E2%80%93Stieltjes%20inequalities | In mathematical analysis, the Chebyshev–Markov–Stieltjes inequalities are inequalities related to the problem of moments that were formulated in the 1880s by Pafnuty Chebyshev and proved independently by Andrey Markov and (somewhat later) by Thomas Jan Stieltjes. Informally, they provide sharp bounds on a measure from above and from below in terms of its first moments.
Formulation
Given m0,...,m2m-1 ∈ R, consider the collection C of measures μ on R such that
∫ x^k dμ(x) = mk
for k = 0,1,...,2m − 1 (and in particular the integral is defined and finite).
Let P0, P1, ..., Pm be the first m + 1 orthogonal polynomials with respect to μ ∈ C, and let ξ1,...,ξm be the zeros of Pm. It is not hard to see that the polynomials P0, P1, ..., Pm-1 and the numbers ξ1,...,ξm are the same for every μ ∈ C, and therefore are determined uniquely by m0,...,m2m-1.
Denote
.
Theorem For j = 1,2,...,m, and any μ ∈ C, |
https://en.wikipedia.org/wiki/Randomness%20test | A randomness test (or test for randomness), in data evaluation, is a test used to analyze the distribution of a set of data to see whether it can be described as random (patternless). In stochastic modeling, as in some computer simulations, the hoped-for randomness of potential input data can be verified, by a formal test for randomness, to show that the data are valid for use in simulation runs. In some cases, data reveals an obvious non-random pattern, as with so-called "runs in the data" (such as expecting random 0–9 but finding "4 3 2 1 0 4 3 2 1..." and rarely going above 4). If a selected set of data fails the tests, then parameters can be changed or other randomized data can be used which does pass the tests for randomness.
Background
The issue of randomness is an important philosophical and theoretical question. Tests for randomness can be used to determine whether a data set has a recognisable pattern, which would indicate that the process that generated it is significantly non-random. For the most part, statistical analysis has, in practice, been much more concerned with finding regularities in data as opposed to testing for randomness. Many "random number generators" in use today are defined by algorithms, and so are actually pseudo-random number generators. The sequences they produce are called pseudo-random sequences. These generators do not always generate sequences which are sufficiently random, but instead can produce sequences which contain patterns. For example, the infamous RANDU routine fails many randomness tests dramatically, including the spectral test.
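As a concrete illustration of the kind of check such tests perform (a minimal sketch, not a description of the spectral test; the function name and sample size are invented for the example), a Pearson chi-squared frequency test compares observed digit counts with the counts a uniform random source would give:

```python
import random
from collections import Counter

def chi_squared_uniformity(digits, k=10):
    """Pearson chi-squared statistic for the hypothesis that the values 0..k-1
    in `digits` are uniformly distributed.  For truly random input the statistic
    is typically close to k - 1; much larger values suggest a pattern."""
    counts = Counter(digits)
    expected = len(digits) / k
    return sum((counts.get(v, 0) - expected) ** 2 / expected for v in range(k))

sample = [random.randrange(10) for _ in range(10_000)]
print(chi_squared_uniformity(sample))  # usually near 9 for these random digits
```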
Stephen Wolfram used randomness tests on the output of Rule 30 to examine its potential for generating random numbers, though it was shown to have an effective key size far smaller than its actual size and to perform poorly on a chi-squared test. The use of an ill-conceived random number generator can put the validity of an experiment in doubt by violating statistical assumptions. Though |
https://en.wikipedia.org/wiki/Nucleic%20acid%20thermodynamics | Nucleic acid thermodynamics is the study of how temperature affects the nucleic acid structure of double-stranded DNA (dsDNA). The melting temperature (Tm) is defined as the temperature at which half of the DNA strands are in the random coil or single-stranded (ssDNA) state. Tm depends on the length of the DNA molecule and its specific nucleotide sequence. DNA, when in a state where its two strands are dissociated (i.e., the dsDNA molecule exists as two independent strands), is referred to as having been denatured by the high temperature.
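As a rough, hedged illustration of how Tm depends on sequence (the Wallace rule used below is a common rule of thumb for short primers, not a formula taken from this article, and it ignores salt concentration and nearest-neighbour effects):

```python
def melting_temperature_wallace(seq):
    """Crude Tm estimate (in degrees C) for a short oligonucleotide via the
    Wallace rule: Tm ~ 2*(A+T) + 4*(G+C).  Reasonable only as a first guess
    for primers of roughly 14-20 bases."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

print(melting_temperature_wallace("ATGCATGCATGCATGC"))  # 8 A/T + 8 G/C -> 48
```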
Concepts
Hybridization
Hybridization is the process of establishing a non-covalent, sequence-specific interaction between two or more complementary strands of nucleic acids into a single complex, which in the case of two strands is referred to as a duplex. Oligonucleotides, DNA, or RNA will bind to their complement under normal conditions, so two perfectly complementary strands will bind to each other readily. In order to reduce the diversity and obtain the most energetically preferred complexes, a technique called annealing is used in laboratory practice. However, due to the different molecular geometries of the nucleotides, a single inconsistency between the two strands will make binding between them less energetically favorable. Measuring the effects of base incompatibility by quantifying the temperature at which two strands anneal can provide information as to the similarity in base sequence between the two strands being annealed. The complexes may be dissociated by thermal denaturation, also referred to as melting. In the absence of external negative factors, the processes of hybridization and melting may be repeated in succession indefinitely, which lays the ground for polymerase chain reaction. Most commonly, the pairs of nucleic bases A=T and G≡C are formed, of which the latter is more stable.
Denaturation
DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds an |
https://en.wikipedia.org/wiki/Conidiobolomycosis | Conidiobolomycosis is a rare long-term fungal infection that is typically found just under the skin of the nose, sinuses, cheeks and upper lips. It may present with a nose bleed or a blocked or runny nose. Typically there is a firm painless swelling which can slowly extend to the nasal bridge and eyes, sometimes causing facial disfigurement.
Most cases are caused by Conidiobolus coronatus, a fungus found in soil and in the environment in general, which can infect healthy people. It is usually acquired by inhaling the spores of the fungus, but can also occur by direct inoculation through a break in the skin, such as an insect bite.
The extent of disease may be seen using medical imaging such as CT scanning of the nose and sinus. Diagnosis may be confirmed by biopsy, microscopy, culture and histopathology. Treatment is with long courses of antifungals and sometimes cutting out infected tissue. The condition has a good response to antifungal treatment, but can recur. The infection is rarely fatal.
The condition occurs more frequently in adults working or living in the tropical forests of South and Central America, West Africa and Southeast Asia. Males are affected more than females. The first case in a human was described in Jamaica in 1965.
Signs and symptoms
The infection presents with firm lumps just under the skin of the nose, sinuses, upper lips, mouth and cheeks. The swelling is painless and may feel "woody". Sinus pain may occur. Infection may extend to involve the nasal bridge, face and eyes, sometimes resulting in facial disfigurement. The nose may feel blocked or have a discharge, and may bleed.
Cause
Conidiobolomycosis is a type of entomophthoromycosis, the other being basidiobolomycosis, and is caused mainly by Conidiobolus coronatus, but also by Conidiobolus incongruus and Conidiobolus lamprauges.
Mechanism
Conidiobolomycosis chiefly affects the central face, usually beginning in the nose before extending onto paranasal sinuses, cheeks, upper lip and pharynx. The dis |
https://en.wikipedia.org/wiki/Wax%20thermostatic%20element | The wax thermostatic element was invented in 1934 by Sergius Vernet (1899–1968). Its principal application is in automotive thermostats used in the engine cooling system. The first applications in the plumbing and heating industries were in Sweden (1970) and in Switzerland (1971).
Wax thermostatic elements transform heat energy into mechanical energy using the thermal expansion of waxes when they melt. This wax motor principle also finds applications beyond engine cooling systems, including thermostatic radiator valves in heating systems and uses in plumbing, industry, and agriculture.
Automotive thermostats
The internal combustion engine cooling thermostat maintains the temperature of the engine near its optimum operating temperature by regulating the flow of coolant to an air cooled radiator. This regulation is now carried out by an internal thermostat. Conveniently, both the sensing element of the thermostat and its control valve may be placed at the same location, allowing the use of a simple self-contained non-powered thermostat as the primary device for the precise control of engine temperature. Although most vehicles now have a temperature-controlled electric cooling fan, "the unassisted air stream can provide sufficient cooling up to 95% of the time" and so such a fan is not the mechanism for primary control of the internal temperature.
Research in the 1920s showed that cylinder wear was aggravated by condensation of fuel when it contacted a cool cylinder wall which removed the oil film. The development of the automatic thermostat in the 1930s solved this problem by ensuring fast engine warm-up.
The first thermostats used a sealed capsule of an organic liquid with a boiling point just below the desired opening temperature. These capsules were made in the form of a cylindrical bellows. As the liquid boiled inside the capsule, the capsule bellows expanded, opening a sheet brass plug valve within the thermostat. As these thermostats could fail in service, they were des |
https://en.wikipedia.org/wiki/Pointclass | In the mathematical field of descriptive set theory, a pointclass is a collection of sets of points, where a point is ordinarily understood to be an element of some perfect Polish space. In practice, a pointclass is usually characterized by some sort of definability property; for example, the collection of all open sets in some fixed collection of Polish spaces is a pointclass. (An open set may be seen as in some sense definable because it cannot be a purely arbitrary collection of points; for any point in the set, all points sufficiently close to that point must also be in the set.)
Pointclasses find application in formulating many important principles and theorems from set theory and real analysis. Strong set-theoretic principles may be stated in terms of the determinacy of various pointclasses, which in turn implies that sets in those pointclasses (or sometimes larger ones) have regularity properties such as Lebesgue measurability (and indeed universal measurability), the property of Baire, and the perfect set property.
Basic framework
In practice, descriptive set theorists often simplify matters by working in a fixed Polish space such as Baire space or sometimes Cantor space, each of which has the advantage of being zero dimensional, and indeed homeomorphic to its finite or countable powers, so that considerations of dimensionality never arise. Yiannis Moschovakis provides greater generality by fixing once and for all a collection of underlying Polish spaces, including the set of all naturals, the set of all reals, Baire space, and Cantor space, and otherwise allowing the reader to throw in any desired perfect Polish space. Then he defines a product space to be any finite Cartesian product of these underlying spaces. Then, for example, the pointclass of all open sets means the collection of all open subsets of one of these product spaces. This approach prevents the resulting pointclass from being a proper class, while avoiding excessive specificity as to the particular Polish space |
https://en.wikipedia.org/wiki/Mutilated%20chessboard%20problem | The mutilated chessboard problem is a tiling puzzle posed by Max Black in 1946 that asks:
Suppose a standard 8×8 chessboard (or checkerboard) has two diagonally opposite corners removed, leaving 62 squares. Is it possible to place 31 dominoes of size 2×1 so as to cover all of these squares?
It is an impossible puzzle: there is no domino tiling meeting these conditions. One proof of its impossibility uses the fact that, with the corners removed, the chessboard has 32 squares of one color and 30 of the other, but each domino must cover equally many squares of each color. More generally, if any two squares are removed from the chessboard, the rest can be tiled by dominoes if and only if the removed squares are of different colors. This problem has been used as a test case for automated reasoning, creativity, and the philosophy of mathematics.
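The colour-counting argument is easy to check mechanically; the short sketch below (an illustration, not part of the original problem statement) counts the squares of each colour that remain after removing two diagonally opposite corners:

```python
def color_counts_after_removal(removed):
    """Count the light and dark squares of an 8x8 board that remain after
    removing the given set of (row, col) squares; every domino covers exactly
    one square of each colour."""
    light = dark = 0
    for r in range(8):
        for c in range(8):
            if (r, c) in removed:
                continue
            if (r + c) % 2 == 0:
                light += 1
            else:
                dark += 1
    return light, dark

# Opposite corners share a colour, so 30 squares of one colour and 32 of the
# other remain -- hence no tiling by 31 dominoes can exist.
print(color_counts_after_removal({(0, 0), (7, 7)}))  # -> (30, 32)
```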
History
The mutilated chessboard problem is an instance of domino tiling of grids and polyominoes, also known as "dimer models", a general class of problems whose study in statistical mechanics dates to the work of Ralph H. Fowler and George Stanley Rushbrooke in 1937. Domino tilings also have a long history of practical use in pavement design and the arrangement of tatami flooring.
The mutilated chessboard problem itself was proposed by philosopher Max Black in his book Critical Thinking (1946), with a hint at the coloring-based solution to its impossibility. It was popularized in the 1950s through later discussions by Solomon W. Golomb (1954), George Gamow and Marvin Stern (1958), Claude Berge (1958), and Martin Gardner in his Scientific American column "Mathematical Games" (1957).
The use of the mutilated chessboard problem in automated reasoning stems from a proposal for its use by John McCarthy in 1964. It has also been studied in cognitive science as a test case for creative insight, Black's original motivation for the problem. In the philosophy of mathematics, it has been examined in studies of the nature of mathemati |
https://en.wikipedia.org/wiki/Coset%20construction | In mathematics, the coset construction (or GKO construction) is a method of constructing unitary highest weight representations of the Virasoro algebra, introduced by Peter Goddard, Adrian Kent and David Olive (1986). The construction produces the complete discrete series of highest weight representations of the Virasoro algebra and demonstrates their unitarity, thus establishing the classification of unitary highest weight representations. |
https://en.wikipedia.org/wiki/Multilevel%20queue | Multi-level queueing, used at least since the late 1950s/early 1960s, is a queue with a predefined number of levels. Items get assigned to a particular level at insert (using some predefined algorithm), and thus cannot be moved to another level (unlike in the multilevel feedback queue). Items get removed from the queue by removing all items from a level, and then moving to the next. If an item is added to a level above, the "fetching" restarts from there. Each level of the queue is free to use its own scheduling, thus adding greater flexibility than merely having multiple levels in a queue.
Process Scheduling
The multi-level queue scheduling algorithm is used in scenarios where processes can be classified into groups based on properties such as process type, CPU time, IO access, memory size, etc. One general classification divides processes into foreground processes and background processes. In a multi-level queue scheduling algorithm there are 'n' queues, where 'n' is the number of groups the processes are classified into. Each queue is assigned a priority and has its own scheduling algorithm, such as round-robin scheduling or FCFS. For a process in a queue to execute, all queues of higher priority must be empty, meaning the processes in those higher-priority queues must have completed execution. In this scheduling algorithm, once assigned to a queue, a process will not move to any other queue.
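A minimal sketch of the idea (hypothetical process names and burst times; it ignores arrival times and pre-emption, and simply drains each queue in priority order with FCFS inside a level):

```python
from collections import deque

def multilevel_queue_schedule(queues):
    """Non-pre-emptive multilevel queue scheduling: `queues` is a list of FIFO
    queues ordered from highest to lowest priority, each holding
    (name, burst_time) pairs.  A process runs only when every higher-priority
    queue is empty, and processes never migrate between queues."""
    order = []
    for queue in queues:                   # highest-priority queue first
        while queue:
            order.append(queue.popleft())  # run to completion (FCFS in the level)
    return order

foreground = deque([("P1", 4), ("P2", 3)])   # e.g. interactive processes
background = deque([("P3", 6)])              # e.g. batch processes
print(multilevel_queue_schedule([foreground, background]))
# [('P1', 4), ('P2', 3), ('P3', 6)]
```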
Consider the following table, with the arrival time, execution time and type of each process (foreground or background, where foreground processes are given higher priority), to understand non-pre-emptive and pre-emptive multilevel queue scheduling in depth, with the FCFS algorithm used for both queues:
See also
Fair-share scheduling
Lottery scheduling |
https://en.wikipedia.org/wiki/Palatine%20raphe | The palatine raphe (or median raphe or median palatine raphe) is a raphe running across the palate, from the palatine uvula to the incisive papilla.
External links
Diagram at bris.ac.uk
Diagram at ana.bris.ac.uk
Diagram at waybuilder.net
Mouth |
https://en.wikipedia.org/wiki/Closest%20pair%20of%20points%20problem | The closest pair of points problem or closest pair problem is a problem of computational geometry: given points in metric space, find a pair of points with the smallest distance between them. The closest pair problem for points in the Euclidean plane was among the first geometric problems that were treated at the origins of the systematic study of the computational complexity of geometric algorithms.
Time bounds
Randomized algorithms that solve the problem in linear time are known, in Euclidean spaces whose dimension is treated as a constant for the purposes of asymptotic analysis. This is significantly faster than the O(n^2) time (expressed here in big O notation) that would be obtained by a naive algorithm of finding distances between all pairs of points and selecting the smallest.
It is also possible to solve the problem without randomization, in random-access machine models of computation with unlimited memory that allow the use of the floor function, in near-linear time. In even more restricted models of computation, such as the algebraic decision tree, the problem can be solved in the somewhat slower O(n log n) time bound, and this is optimal for this model, by a reduction from the element uniqueness problem. Both sweep line algorithms and divide-and-conquer algorithms with this slower time bound are commonly taught as examples of these algorithm design techniques.
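For contrast with the bounds above, the naive quadratic-time baseline is easy to state (an illustrative sketch; the sample point set is made up):

```python
from itertools import combinations
from math import dist, inf

def closest_pair_naive(points):
    """O(n^2) baseline: compare every pair of points and keep the smallest
    distance.  The sweep-line and divide-and-conquer algorithms mentioned
    above bring this down to O(n log n)."""
    best, best_pair = inf, None
    for p, q in combinations(points, 2):
        d = dist(p, q)
        if d < best:
            best, best_pair = d, (p, q)
    return best, best_pair

print(closest_pair_naive([(0, 0), (5, 1), (1, 1), (4, 4)]))
# (1.414..., ((0, 0), (1, 1)))
```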
Linear-time randomized algorithms
A linear expected time randomized algorithm, modified slightly by Richard Lipton to make its analysis easier, proceeds as follows, on an input set consisting of n points in a k-dimensional Euclidean space:
Select n pairs of points uniformly at random, with replacement, and let d be the minimum distance of the selected pairs.
Round the input points to a square grid of points whose size (the separation between adjacent grid points) is d, and use a hash table to collect together pairs of input points that round to the same grid point.
For each input point, compute the distance |
https://en.wikipedia.org/wiki/Gabriel%20graph | In mathematics and computational geometry, the Gabriel graph of a set S of points in the Euclidean plane expresses one notion of proximity or nearness of those points. Formally, it is the graph with vertex set S in which any two distinct points p and q are adjacent precisely when the closed disc having the segment pq as a diameter contains no other points of S. Another way of expressing the same adjacency criterion is that p and q should be the two closest given points to their midpoint, with no other given point being as close. Gabriel graphs naturally generalize to higher dimensions, with the empty disks replaced by empty closed balls. Gabriel graphs are named after K. Ruben Gabriel, who introduced them in a paper with Robert R. Sokal in 1969.
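A direct, unoptimised reading of the definition can be written in a few lines (an illustrative sketch, not an efficient construction; it uses the equivalent test that a point r lies in the closed disc with diameter pq exactly when |pr|^2 + |qr|^2 <= |pq|^2):

```python
from itertools import combinations
from math import dist

def gabriel_edges(points):
    """Naive O(n^3) Gabriel graph: p and q are adjacent when no other point r
    lies in the closed disc with diameter pq, i.e. when
    |pr|^2 + |qr|^2 > |pq|^2 holds for every other point r."""
    edges = []
    for p, q in combinations(points, 2):
        pq2 = dist(p, q) ** 2
        if all(dist(p, r) ** 2 + dist(q, r) ** 2 > pq2
               for r in points if r not in (p, q)):
            edges.append((p, q))
    return edges

print(gabriel_edges([(0, 0), (2, 0), (1, 0.1), (0, 2)]))
```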
Percolation
For Gabriel graphs of infinite random point sets, the finite site percolation threshold gives the fraction of points needed to support connectivity: if a random subset of fewer vertices than the threshold is given, the remaining graph will almost surely have only finite connected components, while if the size of the random subset is more than the threshold, then the remaining graph will almost surely have an infinite component (as well as finite components). This threshold has been proved to exist, and more precise values of both site and bond thresholds have been given by Norrenbrock.
Related geometric graphs
The Gabriel graph is a subgraph of the Delaunay triangulation. It can be found in linear time if the Delaunay triangulation is given.
The Gabriel graph contains, as subgraphs, the Euclidean minimum spanning tree, the relative neighborhood graph, and the nearest neighbor graph.
It is an instance of a beta-skeleton. Like beta-skeletons, and unlike Delaunay triangulations, it is not a geometric spanner: for some point sets, distances within the Gabriel graph can be much larger than the Euclidean distances between points. |
https://en.wikipedia.org/wiki/Element%20distinctness%20problem | In computational complexity theory, the element distinctness problem or element uniqueness problem is the problem of determining whether all the elements of a list are distinct.
It is a well studied problem in many different models of computation. The problem may be solved by sorting the list and then checking if there are any consecutive equal elements; it may also be solved in linear expected time by a randomized algorithm that inserts each item into a hash table and compares only those elements that are placed in the same hash table cell.
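The hash-table solution mentioned above amounts to a few lines (a sketch; Python's built-in set plays the role of the hash table):

```python
def all_distinct(items):
    """Element distinctness in linear expected time: insert each item into a
    hash set and report a repeat as soon as an item is already present."""
    seen = set()
    for x in items:
        if x in seen:
            return False
        seen.add(x)
    return True

print(all_distinct([3, 1, 4, 1, 5]))  # False -- 1 occurs twice
print(all_distinct([3, 1, 4, 5, 9]))  # True
```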
Several lower bounds in computational complexity are proved by reducing the element distinctness problem to the problem in question, i.e., by demonstrating that the solution of the element uniqueness problem may be quickly found after solving the problem in question.
Decision tree complexity
The number of comparisons needed to solve the problem of size n, in a comparison-based model of computation such as a decision tree or algebraic decision tree, is Θ(n log n). Here, Θ invokes big theta notation, meaning that the problem can be solved in a number of comparisons proportional to n log n (a linearithmic function) and that all solutions require this many comparisons. In these models of computation, the input numbers may not be used to index the computer's memory (as in the hash table solution) but may only be accessed by computing and comparing simple algebraic functions of their values. For these models, an algorithm based on comparison sort solves the problem within a constant factor of the best possible number of comparisons. The same lower bound applies as well to the expected number of comparisons in the randomized algebraic decision tree model.
Real RAM Complexity
If the elements in the problem are real numbers, the decision-tree lower bound extends to the real random-access machine model with an instruction set that includes addition, subtraction and multiplication of real numbers, as well as comparison and either division or remaindering |
https://en.wikipedia.org/wiki/Rhinosinusitis | Rhinosinusitis is a simultaneous infection of the nasal mucosa (rhinitis) and an infection of the mucosa of the paranasal sinuses (sinusitis). A distinction is made between acute rhinosinusitis and chronic rhinosinusitis.
Background
Because sinusitis typically is preceded by an infection of the nasal mucosa, some authors suggest generally replacing the term “sinusitis” with “rhinosinusitis”. The functional unity of the two mucosae speaks in favor of this replacement. A distinction is made between acute and chronic rhinosinusitis. Acute rhinosinusitis lasts a maximum of 12 weeks. The clinical symptoms of acute rhinosinusitis are purulent nasal secretion, nasal obstruction and/or tension headache or a feeling of fullness in the facial area. Acute rhinosinusitis can be caused by a viral or bacterial infection – a distinction is not possible during the first days. If the clinical picture follows a two-stage course, it indicates a bacterial rhinosinusitis. Chronic rhinosinusitis lasts more than 12 weeks with no complete recovery. The symptoms of chronic rhinosinusitis are less pronounced than those of acute rhinosinusitis. Chronic rhinosinusitis is characterized by impaired nasal inspiration, feelings of pressure and swelling in the facial area, as well as a higher susceptibility to infection.
Severe complications are rare, although orbital and intracranial inflammations can occur.
Therapy
Inhalation therapy mechanically removes deposits and relieves the symptoms of allergic or inflammatory diseases like acute or chronic rhinosinusitis (CRS). In essence, inhalation therapy resolves the obstruction found to be bothersome, alleviates the irritation of the nasal mucosa and supports the self-cleaning mechanisms. Inhalation therapy is commonly mentioned in North American and international guidelines for treatment of CRS (Bachmann et al., 2000). There are different therapeutic approaches for acute rhinosinusitis. Among other things, pain killers, decongestant n |
https://en.wikipedia.org/wiki/NASBA%20%28molecular%20biology%29 | Nucleic acid sequence-based amplification, commonly referred to as NASBA, is a method in molecular biology which is used to produce multiple copies of single stranded RNA. NASBA is a two-step process that takes RNA and anneals specially designed primers, then utilizes an enzyme cocktail to amplify it.
Background
Nucleic acid amplification is a technique used to produce several copies of a specific segment of RNA/DNA. Amplified RNA and DNA can be used for a variety of applications, such as genotyping, sequencing, and detection of bacteria or viruses. There are two different types of amplification, non-isothermal and isothermal. Non-isothermal amplification produces multiple copies of RNA/DNA through reiterative cycling between different temperatures. Isothermal amplification produces multiple copies of RNA/DNA at a constant reaction temperature. NASBA takes single stranded RNA, anneals primers to it at 65°C, and then amplifies it at 41°C to produce multiple copies of single stranded RNA. In order for successful amplification to occur, an enzyme cocktail containing Avian Myeloblastosis Reverse Transcriptase (AMV-RT), RNase H, and RNA polymerase is used. AMV-RT synthesizes a complementary DNA strand (cDNA) from the RNA template once the primer is annealed. RNase H then degrades the RNA template and the other primer binds to the cDNA to form double stranded DNA, which RNA polymerase uses to synthesize copies of RNA. One key aspect of NASBA is that the starting material and end product are always single stranded RNA. That being said, it can be used to amplify DNA, but the DNA must first be transcribed into RNA in order for successful amplification to occur.
Loop-mediated isothermal amplification (LAMP) is another isothermal amplification technique.
History
NASBA was developed by J Compton in 1991, who defined it as "a primer-dependent technology that can be used for the continuous amplification of nucleic acids in a single mixture at one temperature". Immediately after the |
https://en.wikipedia.org/wiki/Paul%20Tholey | Paul Tholey (14 March 1937 – 7 December 1998) was a German Gestalt psychologist, and a professor of psychology and sports science at the University of Frankfurt and the Technical University of Braunschweig.
Tholey started the study of oneirology in an attempt to prove that dreams occur in color. Given the unreliability of dream memories and following the critical realism approach, he used lucid dreaming as an epistemological tool for investigating dreams, in a similar fashion to Stephen LaBerge. He devised the reflection technique for inducing lucid dreams, consisting in continuously suspecting waking life to be a dream, in the hope that such a habit would manifest itself during dreams.
Tholey's research included the examination of the cognitive abilities of dreamers, as well as the cognitive abilities of dream figures. In the latter study, nine trained lucid dreamers were directed to set other dream figures arithmetic and verbal tasks during lucid dreaming (Cognitive abilities of dream figures in lucid dreams, 1983). Dream figures who agreed to perform the tasks proved more successful in verbal than in arithmetic tasks.
Bibliography
Techniques for inducing and manipulating lucid dreams. Perceptual and Motor Skills, 57, 1983, pp 79–90.
Relation between dream content and eye movements tested by lucid dreams. Perceptual and Motor Skills, 56, 1983, pp 875–878.
Cognitive abilities of dream figures in lucid dreams. Lucidity Letter, 71, 1983.
Overview of the Development of Lucid Dream Research in Germany. Lecture at the VI. International Conference of the Association for the Study of Dreams in London 1989. First published in: Lucidity Letter, 8(2) (1989), pp 1–30.
Conversation Between Stephen LaBerge and Tholey in July 1989. B. Holzinger (ed.). Lucidity, 10(1&2), 1991, pp 62–71.
A complete bibliography of articles in German, some of which have been translated into French, English, and Hungarian.
Gestalttheorie von Sport, Klartraum und Bewusstsein. Ausgewählte Arbeiten |
https://en.wikipedia.org/wiki/Tinbergen%27s%20four%20questions | Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny).
This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function |
https://en.wikipedia.org/wiki/Amanita%20velosa | Amanita velosa, commonly known as the springtime amanita, or bittersweet orange ringless amanita is an edible species of agaric found in California, as well as southern Oregon and Baja California.
Description and classification
It is part of Amanita section Vaginatae, and like other species in this group, it is characterized by its lack of an annulus, striate pileus margin, thick universal veil remnants comprising the veil, volva, and pileus patches, inamyloid spores, and lack of characteristic Amanita toxins such as amatoxins and ibotenic acid. It is distinguished from other species in section Vaginatae by its lack of any kind of umbo on its pileus, its short pileus striae, and its distinct pale orange to pale salmon coloration when young. Its coloration can become more brownish with age and entirely white specimens are occasionally seen as well. Like many other Amanita, the gills are white, but occasionally have a distinct pinkish or orangish tint. In older specimens, the odor can become pungent and fishy.
The cap is 5–15 cm wide, convex then plane, with an orange-pink or salmon-like color; it usually has a white universal veil patch. The gills are adnexed to free, close and white (or pinkish with age). The stalk is 5–15 cm long, and 1–3 cm wide. The volva is white, saclike and sheathes the stalk base. The spores are white, smooth, elliptical, and inamyloid.
Habitat and range
Amanita velosa is a late-season mushroom in its range of occurrence, being found from midwinter into spring, up until the end of the California rainy season. Its favored habitat is the ecotone between oak (particularly coast live oak) woodlands and open grassland, living in an ectomycorrhizal relationship with young oak trees.
Although this species is primarily known from the coastal regions of California, Oregon, and Baja California, it is also reported to have been found in association with aspen and conifers in the Sierra Nevada and there is also one report of this species being found |
https://en.wikipedia.org/wiki/Fractal%20sequence | In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is
1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ...
If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so that actually, the original sequence contains not only one copy of itself, but rather, infinitely many.
Definition
The precise definition of fractal sequence depends on a preliminary definition: a sequence x = (xn) is an infinitive sequence if for every i,
(F1) xn = i for infinitely many n.
Let a(i,j) be the jth index n for which xn = i. An infinitive sequence x is a fractal sequence if two additional conditions hold:
(F2) if i+1 = xn, then there exists m < n such that xm = i;
(F3) if h < i then for every j there is exactly one k such that a(i,j) < a(h,k) < a(i,j+1).
According to (F2), the first occurrence of each i > 1 in x must be preceded at least once by each of the numbers 1, 2, ..., i-1, and according to (F3), between consecutive occurrences of i in x, each h less than i occurs exactly once.
Example
Suppose θ is a positive irrational number. Let
S(θ) = the set of numbers c + dθ, where c and d are positive integers
and let
cn(θ) + θdn(θ)
be the sequence obtained by arranging the numbers in S(θ) in increasing order. The sequence cn(θ) is the signature of θ, and it is a fractal sequence.
For example, the signature of the golden ratio (i.e., θ = (1 + sqrt(5))/2) begins with
1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5, ...
and the signature of 1/θ = θ - 1 begins with
1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 5, ...
These sequences appear in the On-Line Encyclopedia of Integer Sequences, where further examples from a variety of number-theoretic and combinatorial settings are given.
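The signature sequence is straightforward to generate numerically; the sketch below (illustrative only, with a crude enumeration cutoff chosen for short prefixes) reproduces the golden-ratio prefix listed above:

```python
def signature(theta, count):
    """First `count` terms of the signature of an irrational theta > 0:
    sort the numbers c + d*theta (c, d positive integers) and read off c."""
    limit = 2 * count + 3              # crude value cutoff, ample for a short prefix
    d_max = int(limit / theta) + 1
    values = sorted(
        (c + d * theta, c)
        for c in range(1, limit)
        for d in range(1, d_max + 1)
        if c + d * theta <= limit      # keep the enumeration complete below the cutoff
    )
    return [c for _, c in values[:count]]

phi = (1 + 5 ** 0.5) / 2
print(signature(phi, 23))
# [1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5]
```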
See also
Thue-Morse Sequence
External links
On-Line Encyclopedia of Integer Sequences: |
https://en.wikipedia.org/wiki/Amanita%20ocreata | Amanita ocreata, commonly known as the death angel, destroying angel, angel of death or more precisely western North American destroying angel, is a deadly poisonous basidiomycete fungus, one of many in the genus Amanita. Occurring in the Pacific Northwest and California floristic provinces of North America, A. ocreata associates with oak trees. The large fruiting bodies (the mushrooms) generally appear in spring; the cap may be white or ochre and often develops a brownish centre, while the stipe, ring, gill and volva are all white.
Amanita ocreata resembles several edible species commonly consumed by humans, increasing the risk of accidental poisoning. Mature fruiting bodies can be confused with the edible A. velosa (springtime amanita), A. lanei or Volvopluteus gloiocephalus, while immature specimens may be difficult to distinguish from edible Agaricus mushrooms or puffballs. Similar in toxicity to the death cap (A. phalloides) and destroying angels of Europe (A. virosa) and eastern North America (A. bisporigera), it is a potentially deadly fungus responsible for several poisonings in California. Its principal toxic constituent, α-amanitin, damages the liver and kidneys, often fatally, and has no known antidote, though silybin and N-acetylcysteine show promise. The initial symptoms are gastrointestinal and include abdominal pain, diarrhea and vomiting. These subside temporarily after 2–3 days, though ongoing damage to internal organs during this time is common; symptoms of jaundice, diarrhea, delirium, seizures, and coma may follow, with death from liver failure 6–16 days post ingestion.
Taxonomy and naming
Amanita ocreata was first described by American mycologist Charles Horton Peck in 1909 from material collected by Charles Fuller Baker in Claremont, California. The specific epithet is derived from the Latin ocrěātus 'wearing greaves' from ocrea 'greave', referring to its loose, baggy volva. Amanita bivolvata is a botanical synonym. The mushroom belongs to the |
https://en.wikipedia.org/wiki/Parametricity | In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way.
Idea
Consider this example, based on a set X and the type T(X) = [X → X] of functions from X to itself. The higher-order function twiceX : T(X) → T(X) given by twiceX(f) = f ∘ f, is intuitively independent of the set X. The family of all such functions twiceX, parametrized by sets X, is called a "parametrically polymorphic function". We simply write twice for the entire family of these functions and write its type as ∀X. T(X) → T(X). The individual functions twiceX are called the components or instances of the polymorphic function. Notice that all the component functions twiceX act "the same way" because they are given by the same rule. Other families of functions obtained by picking one arbitrary function from each T(X) → T(X) would not have such uniformity. They are called "ad hoc polymorphic functions". Parametricity is the abstract property enjoyed by the uniformly acting families such as twice, which distinguishes them from ad hoc families. With an adequate formalization of parametricity, it is possible to prove that the parametrically polymorphic functions of type ∀X. T(X) → T(X) are one-to-one with natural numbers. The function corresponding to the natural number n is given by the rule f ↦ f^n, i.e., the polymorphic Church numeral for n. In contrast, the collection of all ad hoc families would be too large to be a set.
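The twice example can be written down in any language with type variables; the Python sketch below (illustrative only — Python does not enforce parametricity, since a function could inspect its argument's type at runtime) shows the single uniform rule being instantiated at two different types:

```python
from typing import Callable, TypeVar

X = TypeVar("X")

def twice(f: Callable[[X], X]) -> Callable[[X], X]:
    """One rule for every type X: compose f with itself (the Church numeral 2)."""
    return lambda x: f(f(x))

# The same definition serves as the instance at X = int and at X = str;
# that uniformity across instances is what parametricity makes precise.
print(twice(lambda n: n + 3)(10))      # 16
print(twice(lambda s: s + "!")("hi"))  # 'hi!!'
```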
History
The parametricity theorem was originally stated by John C. Reynolds, who called it the abstraction theorem. In his paper "Theorems for free!", Philip Wadler described an application of parametricity to derive theorems about parametrically polymorphic functions based on their types.
Programming language implementation
Parametricity is the basis for many program transformations implemented in compilers fo |
https://en.wikipedia.org/wiki/Markov%20brothers%27%20inequality | In mathematics, the Markov brothers' inequality is an inequality proved in the 1890s by brothers Andrey Markov and Vladimir Markov, two Russian mathematicians. This inequality bounds the maximum of the derivatives of a polynomial on an interval in terms of the maximum of the polynomial. For k = 1 it was proved by Andrey Markov, and for k = 2,3,... by his brother Vladimir Markov.
The statement
Let P be a polynomial of degree ≤ n. Then for all nonnegative integers k,
max_{−1 ≤ x ≤ 1} |P^(k)(x)| ≤ (n^2 (n^2 − 1^2) (n^2 − 2^2) ⋯ (n^2 − (k−1)^2)) / (1 · 3 · 5 ⋯ (2k − 1)) · max_{−1 ≤ x ≤ 1} |P(x)|
Equality is attained for Chebyshev polynomials of the first kind.
Related inequalities
Bernstein's inequality (mathematical analysis)
Remez inequality
Applications
Markov's inequality is used to obtain lower bounds in computational complexity theory via the so-called "Polynomial Method". |
https://en.wikipedia.org/wiki/Elementary%20proof | In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. Historically, it was once thought that certain theorems, like the prime number theorem, could only be proved by invoking "higher" mathematical theorems or techniques. However, as time progresses, many of these results have also been subsequently reproven using only elementary techniques.
While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon. An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated — and this is especially true when a statement of notable importance is involved.
Prime number theorem
The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential "depth" of the result ruled out elementary proofs:
However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem.
A possible formalization of the notion of "elementary" in connection to a proof of a number-theoretical result is the restriction that the proof can be carried out in Peano arithmetic. Also in that sense, these proofs are elementary.
Friedman's conjecture
Harvey Friedman conjectured, "Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementar |
https://en.wikipedia.org/wiki/FloraBase | FloraBase is a public access web-based database of the flora of Western Australia. It provides authoritative scientific information on 12,978 taxa, including descriptions, maps, images, conservation status and nomenclatural details. 1,272 alien taxa (naturalised weeds) are also recorded.
The system takes data from datasets including the Census of Western Australian Plants and the Western Australian Herbarium specimen database of more than 803,000 vouchered plant collections. It is operated by the Western Australian Herbarium within the Department of Parks and Wildlife. It was established in November 1998.
In its distribution guide it uses a combination of IBRA version 5.1 and John Stanley Beard's botanical provinces.
See also
Declared Rare and Priority Flora List
For other online flora databases see List of electronic Floras. |
https://en.wikipedia.org/wiki/Carleman%27s%20inequality | Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy–Carleman theorem on quasi-analytic classes.
Statement
Let a1, a2, a3, ... be a sequence of non-negative real numbers, then
Σ_{n=1}^∞ (a1 a2 ⋯ an)^{1/n} ≤ e Σ_{n=1}^∞ an.
The constant e (Euler's number) in the inequality is optimal, that is, the inequality does not always hold if e is replaced by a smaller number. The inequality is strict (it holds with "<" instead of "≤") if some element in the sequence is non-zero.
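A quick numerical check of the statement (an illustrative sketch; the sample sequence a_n = 1/n^2 is chosen arbitrarily, and geometric means are accumulated through logarithms to avoid underflow):

```python
from math import log, exp, e

def carleman_sides(a):
    """Partial sums of both sides of Carleman's inequality for a finite prefix:
    sum over n of (a_1 * ... * a_n)^(1/n) versus e times the sum of the a_n."""
    log_product = 0.0
    lhs = 0.0
    for n, a_n in enumerate(a, start=1):
        log_product += log(a_n)        # log of the running product a_1 ... a_n
        lhs += exp(log_product / n)    # geometric mean of the first n terms
    return lhs, e * sum(a)

a = [1.0 / n**2 for n in range(1, 200)]   # a convergent series, so "<" is strict
lhs, rhs = carleman_sides(a)
print(lhs < rhs, lhs, rhs)                # True, left side well below e * sum
```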
Integral version
Carleman's inequality has an integral version, which states that
∫_0^∞ exp( (1/x) ∫_0^x ln f(t) dt ) dx ≤ e ∫_0^∞ f(x) dx
for any f ≥ 0.
Carleson's inequality
A generalisation, due to Lennart Carleson, states the following:
for any convex function g with g(0) = 0, and for any -1 < p < ∞,
∫_0^∞ x^p e^{−g(x)/x} dx ≤ e^{p+1} ∫_0^∞ x^p e^{−g′(x)} dx.
Carleman's inequality follows from the case p = 0.
Proof
An elementary proof is sketched below. From the inequality of arithmetic and geometric means applied to the numbers 1·a1, 2·a2, ..., n·an,
MG(a1, a2, ..., an) = MG(1·a1, 2·a2, ..., n·an) · (n!)^{−1/n} ≤ MA(1·a1, 2·a2, ..., n·an) · (n!)^{−1/n}
where MG stands for geometric mean, and MA — for arithmetic mean. The Stirling-type inequality e^n n! ≥ (n + 1)^n applied to n! implies
(n!)^{−1/n} ≤ e / (n + 1) for all n ≥ 1.
Therefore,
MG(a1, ..., an) ≤ (e / (n (n + 1))) Σ_{k=1}^{n} k·ak,
whence
Σ_{n≥1} (a1 a2 ⋯ an)^{1/n} ≤ e Σ_{k≥1} k·ak Σ_{n≥k} 1/(n (n + 1)) = e Σ_{k≥1} ak,
proving the inequality. Moreover, the inequality of arithmetic and geometric means of non-negative numbers is known to be an equality if and only if all the numbers coincide, that is, in the present case, if and only if ak = C/k for some constant C ≥ 0 and all k. As a consequence, Carleman's inequality is never an equality for a convergent series, unless all an vanish, just because the harmonic series is divergent.
One can also prove Carleman's inequality by starting with Hardy's inequality
Σ_{n=1}^∞ ((a1 + a2 + ⋯ + an)/n)^p ≤ (p/(p − 1))^p Σ_{n=1}^∞ an^p
for the non-negative numbers a1,a2,... and p > 1, replacing each an with an^{1/p}, and letting p → ∞.
Versions for specific sequences
Christian Axler and Mehdi Hassani investigated Carleman's inequality for the specific cases of where is the th prime number. They also investigated the case where . They found that if one can replace with in Carleman's inequality, but that if then remained the best possible constant.
Notes |
https://en.wikipedia.org/wiki/Conditional%20quantifier | In logic, a conditional quantifier is a kind of Lindström quantifier (or generalized quantifier) QA that, relative to a classical model A, satisfies some or all of the following conditions ("X" and "Y" range over arbitrary formulas in one free variable):
(The implication arrow denotes material implication in the metalanguage.) The minimal conditional logic M is characterized by the first six properties, and stronger conditional logics include some of the other ones. For example, the quantifier ∀A, which can be viewed as set-theoretic inclusion, satisfies all of the above except [symmetry]. Clearly [symmetry] holds for ∃A while e.g. [contraposition] fails.
A semantic interpretation of conditional quantifiers involves a relation between sets of subsets of a given structure—i.e. a relation between properties defined on the structure. Some of the details can be found in the article Lindström quantifier.
Conditional quantifiers are meant to capture certain properties concerning conditional reasoning at an abstract level. Generally, it is intended to clarify the role of conditionals in a first-order language as they relate to other connectives, such as conjunction or disjunction. While they can cover nested conditionals, the greater complexity of the formula, specifically the greater the number of conditional nesting, the less helpful they are as a methodological tool for understanding conditionals, at least in some sense. Compare this methodological strategy for conditionals with that of first-degree entailment logics. |
https://en.wikipedia.org/wiki/BeeBase | BeeBase was an online bioinformatics database that hosted data related to Apis mellifera, the European honey bee along with some pathogens and other species. It was developed in collaboration with the Honey Bee Genome Sequencing Consortium. In 2020 it was archived and replaced by the Hymenoptera Genome Database.
Data and services
Biological data and services available on BeeBase included:
DNA and protein sequence data
official bee gene set (developed by and hosted at Beebase)
genome browser
linkage maps
server to search the honey bee genome using BLAST
Services
In Feb 2007, BeeBase consisted of a GBrowser-based genome viewer and a Cmap-based comparative map viewer, both modules of the Generic Model Organism Database (GMOD) project. The genome viewer included tracks for known honey bee genes, predicted gene sets (Ensembl, NCBI, EMBL-Heidelberg), STS markers (Solignac and Hunt linkage maps), honey bee expressed sequence tags (ESTs), homologs in fruit fly, mosquito and other insects and transposable elements. The honey bee comparative map viewer displayed linkage maps and the physical map (genome assembly), highlighting markers that are common among maps.
Additionally, a QTL viewer and a gene expression database were planned. The genome sequence was to serve as a reference to link these diverse data types.
Beebase organized the community annotation of the bee genome in collaboration with Baylor College of Medicine Human Genome Sequencing Center.
Data
The now archived site hosts the genome sequence for Apis mellifera, along with those of several pathogens and of the following species:
Bombus terrestris
Bombus impatiens
Two additional species were under analysis:
Apis dorsata
Apis florea
See also
Wormbase
Flybase
Xenbase |
https://en.wikipedia.org/wiki/CRAL-TRIO%20domain | CRAL-TRIO domain is a protein structural domain that binds small lipophilic molecules. This domain is named after cellular retinaldehyde-binding protein (CRALBP) and TRIO guanine exchange factor.
CRALBP carries 11-cis-retinol or 11-cis-retinaldehyde. It modulates interaction of retinoids with visual cycle enzymes. TRIO is involved in coordinating actin remodeling, which is necessary for cell migration and growth.
Other members of the family are alpha-tocopherol transfer protein and phosphatidylinositol-transfer protein (Sec14). They transport their substrates (alpha-tocopherol and phosphatidylinositol or phosphatidylcholine, respectively) between different intracellular membranes. The family also includes a guanine nucleotide exchange factor that may function as an effector of the RAC1 small G-protein.
The N-terminal domain of yeast ECM25 protein has been identified as containing a lipid binding CRAL-TRIO domain.
Structure
The Sec14 protein was the first CRAL-TRIO domain for which the structure was determined. The structure contains several alpha helices as well as a beta sheet composed of 6 strands. Strands 2,3,4 and 5 form a parallel beta sheet with strands 1 and 6 being anti-parallel. The structure also identified a hydrophobic binding pocket for lipid binding.
Human proteins containing this domain
C20orf121; MOSPD2; PTPN9; RLBP1; RLBP1L1; RLBP1L2; SEC14L1; SEC14L2;
SEC14L3; SEC14L4; TTPA; |
https://en.wikipedia.org/wiki/Sterol%20carrier%20protein | Sterol carrier proteins (also known as nonspecific lipid transfer proteins) are a family of proteins that transfer steroids and probably also phospholipids and gangliosides between cellular membranes.
These proteins are different from plant nonspecific lipid transfer proteins but structurally similar to small proteins of unknown function from Thermus thermophilus.
This domain is involved in binding sterols. The human sterol carrier protein 2 (SCP2) is a basic protein that is believed to participate in the intracellular transport of cholesterol and various other lipids.
Human proteins containing this domain
HSD17B4; HSDL2; SCP2; STOML1;
See also
Steroidogenic acute regulatory protein and START domain |
https://en.wikipedia.org/wiki/Gametic%20phase | In genetics, a gametic phase represents the original allelic combinations that a diploid individual inherits from both parents. It is therefore a particular association of alleles at different loci on the same chromosome. Gametic phase is influenced by genetic linkage. |
https://en.wikipedia.org/wiki/Forest%20steppe | A forest steppe is a temperate-climate ecotone and habitat type composed of grassland interspersed with areas of woodland or forest.
Locations
Forest steppe primarily occurs in a belt across northern Eurasia, from the eastern lowlands of Europe to eastern Siberia in northeast Asia. It forms transition ecoregions between the temperate grasslands and the temperate broadleaf and mixed forests biomes. Much of Russia belongs to the forest steppe zone, which stretches from Central Russia across the Volga, Ural and Siberian regions to the Russian Far East.
In northern North America another example of the forest steppe ecotone is the aspen parkland, in the central Prairie Provinces, northeastern British Columbia, North Dakota, and Minnesota. It is the transition ecoregion from the Great Plains prairie and steppe temperate grasslands to the Taiga biome forests in the north.
In central Asia the forest steppe ecotone is found in ecoregions in the mountains of the Iranian Plateau, in Iran, Afghanistan, and Balochistan.
Forest steppe ecoregions
East European forest steppe forms a transition between the Central European and Sarmatic mixed forests to the north and the Pontic–Caspian steppe to the south. It extends from Romania in the west to the Ural Mountains in the east.
The Kazakh forest steppe lies east of the Urals, between the West Siberian broadleaf and mixed forests and the Kazakh steppe.
Altai montane forest and forest steppe
The Southern Siberian rainforest includes forest-steppe areas.
Selenge-Orkhon forest steppe
The Daurian forest steppe lies between the Trans-Baikal conifer forests and East Siberian Taiga to the north and the Mongolian-Manchurian grassland to the south.
Zagros Mountains forest steppe
Elburz Range forest steppe
Kopet Dag woodlands and forest steppe
Kuhrud-Kohbanan Mountains forest steppe
Canadian Aspen forests and parklands—North Dakota, Minnesota, and Canada
External links |
https://en.wikipedia.org/wiki/Whitney%20conditions | In differential topology, a branch of mathematics, the Whitney conditions are conditions on a pair of submanifolds of a manifold introduced by Hassler Whitney in 1965.
A stratification of a topological space is a finite filtration by closed subsets Fi , such that the difference between successive members Fi and F(i − 1) of the filtration is either empty or a smooth submanifold of dimension i. The connected components of the difference Fi − F(i − 1) are the strata of dimension i. A stratification is called a Whitney stratification if all pairs of strata satisfy the Whitney conditions A and B, as defined below.
The Whitney conditions in Rn
Let X and Y be two disjoint (locally closed) submanifolds of Rn, of dimensions i and j.
X and Y satisfy Whitney's condition A if whenever a sequence of points x1, x2, … in X converges to a point y in Y, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then T contains the tangent j-plane to Y at y.
X and Y satisfy Whitney's condition B if for each sequence x1, x2, … of points in X and each sequence y1, y2, … of points in Y, both converging to the same point y in Y, such that the sequence of secant lines Lm between xm and ym converges to a line L as m tends to infinity, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then L is contained in T.
John Mather first pointed out that Whitney's condition B implies Whitney's condition A in the notes of his lectures at Harvard in 1970, which have been widely distributed. He also defined the notion of Thom–Mather stratified space, and proved that every Whitney stratification is a Thom–Mather stratified space and hence is a topologically stratified space. Another approach to this fundamental result was given earlier by René Thom in 1969.
David Trotman showed in his 1977 Warwick thesis that a stratification of a closed subset in a smooth manifold M satisfies Whi |
https://en.wikipedia.org/wiki/Indicators%20of%20spatial%20association | Indicators of spatial association are statistics that evaluate the existence of clusters in the spatial arrangement of a given variable. For instance, if we are studying cancer rates among census tracts in a given city, local clusters in the rates mean that there are areas that have higher or lower rates than would be expected by chance alone; that is, the values occurring are above or below those of a random distribution in space.
Global indicators
Notable global indicators of spatial association include:
Global Moran's I: The most commonly used measure of global spatial autocorrelation or the overall clustering of the spatial data, developed by Patrick Alfred Pierce Moran; a minimal computation sketch follows this list.
Geary's C (Geary's Contiguity Ratio): A measure of global spatial autocorrelation developed by Geary in 1954. It is inversely related to Moran's I, but more sensitive to local autocorrelation than Moran's I.
Getis–Ord G (Getis–Ord global G, General G-Statistic): Introduced by Getis and Ord in 1992 to supplement Moran's I.
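As an illustration of how a global indicator of this kind is computed, the following is a minimal sketch of global Moran's I for a vector of observations and a spatial weights matrix. The function and variable names (morans_i, values, weights) are illustrative and do not refer to any particular package.

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of values and an n x n spatial weights matrix."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                     # deviations from the mean
    s0 = w.sum()                         # sum of all spatial weights
    num = (w * np.outer(z, z)).sum()     # sum over i, j of w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (n / s0) * (num / den)

# Toy example: four locations along a line with rook-style contiguity weights.
vals = [1.0, 2.0, 2.5, 4.0]
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(vals, W), 3))   # positive value: similar values cluster in space
```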
Local indicators
Notable local indicators of spatial association (LISA) include:
Local Moran's I: Derived from Global Moran's I, it was introduced by Luc Anselin in 1995 and can be computed using GeoDa.
Getis–Ord Gi (local Gi): Developed by Getis and Ord based on their global G.
INDICATE's IN: Originally developed to assess the spatial behaviour of stars, it can be computed for any discrete 2+D dataset using the Python-based INDICATE tool available from GitHub.
See also
Spatial analysis
Tobler's first law of geography |
https://en.wikipedia.org/wiki/Imagination | Imagination is the production of sensations, feelings and thoughts informing oneself. These experiences can be re-creations of past experiences, such as vivid memories with imagined changes, or completely invented and possibly fantastic scenes. Imagination helps apply knowledge to solve problems and is fundamental to integrating experience and the learning process. As a way of building theory, it is called "disciplined imagination". A way of training imagination is by listening to storytelling (narrative), in which the exactness of the chosen words is how it can "evoke worlds".
One view of imagination links it with cognition, seeing imagination as a cognitive process used in mental functioning. It is used — in the form of visual imagery — by clinicians in psychological treatment.
Imaginative thought may become associated with rational thought on the assumption that both activities involve cognitive processes that "underpin thinking about possibilities".
The cognate term, "mental imagery" may be used in psychology to denote the process of reviving in the mind recollections of objects formerly given in sense perception. Since this use of the term conflicts with that of ordinary language, some psychologists have preferred to describe this process as "imaging" or "imagery" or to speak of it as "reproductive" as opposed to "productive" or "constructive" imagination. Constructive imagination is further divided into voluntary imagination driven by the lateral prefrontal cortex (LPFC) and involuntary imagination (LPFC-independent), such as REM sleep dreaming, daydreaming, hallucinations, and spontaneous insight. The voluntary types of imagination include integration of modifiers, and mental rotation. Imagined images, both novel and recalled, are seen with the "mind's eye".
Imagination, however, is not considered to be exclusively a cognitive activity because it is also linked to the body and place, particularly in that it also involves setting up relationships with mater |
https://en.wikipedia.org/wiki/Medial%20eminence%20of%20floor%20of%20fourth%20ventricle | In the human brain, the rhomboid fossa is divided into symmetrical halves by a median sulcus which reaches from the upper to the lower angles of the fossa and is deeper below than above. On either side of this sulcus is an elevation, the medial eminence, bounded laterally by a sulcus, the sulcus limitans.
In the superior part of the fossa the medial eminence has a width equal to that of the corresponding half of the fossa, but opposite the superior fovea it forms an elongated swelling, the colliculus facialis, which overlies the nucleus of the abducent nerve, and is, in part at least, produced by the internal genu of the facial nerve. |
https://en.wikipedia.org/wiki/Boltzmann%27s%20entropy%20formula | In statistical mechanics, Boltzmann's equation (also known as the Boltzmann–Planck equation) is a probability equation relating the entropy S, also written as S_B, of an ideal gas to the multiplicity (commonly denoted as W or Ω), the number of real microstates corresponding to the gas's macrostate:
S = k_B ln W
where k_B is the Boltzmann constant (also written as simply k), equal to 1.380649 × 10−23 J/K, and ln is the natural logarithm function (also written as log_e).
In short, the Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a certain kind of thermodynamic system can be arranged.
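A small numerical illustration of the formula, assuming the standard form S = k_B ln W; the two-level toy system and the particle counts are made up for the example.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def boltzmann_entropy(multiplicity):
    """Entropy S = k_B * ln(W) of a macrostate with W microstates."""
    return K_B * math.log(multiplicity)

# Toy macrostate: 100 two-state particles, exactly 40 of them excited.
# Its multiplicity is the binomial coefficient C(100, 40).
W = math.comb(100, 40)
print(f"W = {W:.3e}, S = {boltzmann_entropy(W):.3e} J/K")
```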
History
The equation was originally formulated by Ludwig Boltzmann between 1872 and 1875, but later put into its current form by Max Planck in about 1900. To quote Planck, "the logarithmic connection between entropy and probability was first stated by L. Boltzmann in his kinetic theory of gases".
A 'microstate' is a state specified in terms of the constituent particles of a body of matter or radiation that has been specified as a macrostate in terms of such variables as internal energy and pressure. A macrostate is experimentally observable, with at least a finite extent in spacetime. A microstate can be instantaneous, or can be a trajectory composed of a temporal progression of instantaneous microstates. In experimental practice, such are scarcely observable. The present account concerns instantaneous microstates.
The value of was originally intended to be proportional to the Wahrscheinlichkeit (the German word for probability) of a macroscopic state for some probability distribution of possible microstates—the collection of (unobservable microscopic single particle) "ways" in which the (observable macroscopic) thermodynamic state of a system can be realized by assigning different positions and momenta to the respective molecules.
There are many instantaneous microstates that apply to a given macrostate. Boltzmann consid |
https://en.wikipedia.org/wiki/Cryogenic%20grinding | Cryogenic grinding, also known as freezer milling, freezer grinding, and cryomilling, is the act of cooling or chilling a material and then reducing it into a small particle size. For example, thermoplastics are difficult to grind to small particle sizes at ambient temperatures because they soften, adhere in lumpy masses and clog screens. When chilled by dry ice, liquid carbon dioxide or liquid nitrogen, the thermoplastics can be finely ground to powders suitable for electrostatic spraying and other powder processes. Cryogenic grinding of plant and animal tissue is a technique used by microbiologists. Samples that require extraction of nucleic acids must be kept at −80 °C or lower during the entire extraction process. For samples that are soft or flexible at room temperature, cryogenic grinding may be the only viable technique for processing samples. A number of recent studies report on the processing and behavior of nanostructured materials via cryomilling.
Freezer milling
Freezer milling is a type of cryogenic milling that uses a solenoid to mill samples. The solenoid moves the grinding media back and forth inside the vial, grinding the sample down to analytical fineness. This type of milling is especially useful in milling temperature sensitive samples, as samples are milled at liquid nitrogen temperatures. The idea behind using a solenoid is that the only "moving part" in the system is the grinding media inside the vial. The reason for this is that at liquid nitrogen temperatures (–196°C) any moving part will come under huge stress leading to potentially poor reliability. Cryogenic milling using a solenoid has been used for over 50 years and has been proved to be a very reliable method of processing temperature sensitive samples in the laboratory.
Cryomilling
Cryomilling is a variation of mechanical milling, in which metallic powders or other samples (e.g. temperature sensitive samples and samples with volatile components) are milled in a cryogen (usually liqu |
https://en.wikipedia.org/wiki/CBS%20Laboratories | CBS Laboratories or CBS Labs (later known as the CBS Technology Center or CTC) was the technology research and development organization of the CBS television network. Innovations developed at the labs included many groundbreaking broadcast, industrial, military, and consumer technologies.
History
CBS Laboratories was established in 1936 in New York City to conduct technological research for CBS and outside clients.
The CBS Laboratories Division (CLD) moved from Madison Avenue in New York to a new facility in Stamford, Connecticut in 1958.
Dr. Peter Goldmark joined CBS Laboratories in 1936. On September 4, 1940, while working at the lab, he demonstrated the Field-Sequential Color TV system. It utilized a mechanical color wheel on both the camera and on the television home receiver, but was not compatible with the existing post-war NTSC, 525-line, 60-field/second black and white TV sets as it was a 405-line, 144-field scanning system. It was the first color broadcasting system that received FCC approval in 1950, and the CBS Television Network began broadcasting in color on November 20, 1950. However, no other TV set manufacturers made the sets, and CBS stopped broadcasting in field-sequential color on October 21, 1951.
Goldmark’s interest in recorded music led to the development of the long-playing (LP) 33-1/3 rpm vinyl record, which became the standard for incorporating multiple or lengthy recorded works on a single audio disc for two generations. The LP was introduced to the marketplace by Columbia Records in 1948.
In 1959 the CBS Audimax I Audio Gain Controller was introduced. It was the first of its kind in the broadcasting industry.
In the 1960s the CBS Volumax Audio FM Peak Limiter was introduced, also the first of its kind in the broadcasting industry.
Electronic Video Recording was announced in 1967.
In 1966, the CBS Vidifont was invented. It was the first electronic graphics generator used in television production. Brought to the marketplace at the NA |
https://en.wikipedia.org/wiki/Buccopharyngeal%20fascia | The buccopharyngeal fascia is a fascia of the pharynx. It represents the posterior portion of the pretracheal fascia (visceral fascia). It covers the superior pharyngeal constrictor muscles, and buccinator muscle.
Structure
The buccopharyngeal fascia is a thin lamina given off from the pretracheal fascia. It is the portion of the pretracheal fascia situated posterior and lateral to the pharynx. It encloses the entire superior part of the alimentary canal.
The buccopharyngeal fascia envelops the superior pharyngeal constrictor muscles. It extends anteriorly from the constrictor pharyngis superior over the pterygomandibular raphe to cover the buccinator muscle (though another source describes it as continuous with the fascia covering the buccinator muscle).
Attachments
It is attached to the prevertebral fascia by loose connective tissue, with the retropharyngeal space found between them. It may also be attached to the alar fascia posteriorly at C3 and C6 levels.
Relations
The thyroid gland wraps around the trachea and oesophagus anterior to the buccopharyngeal fascia, so that the lateral parts of the thyroid gland border it.
The buccopharyngeal fascia runs parallel to the medial aspect of the carotid sheath.
Additional images
See also
Pharyngobasilar fascia |
https://en.wikipedia.org/wiki/Train%20horn | A train horn is an air horn used as an audible warning device on diesel and electric-powered trains. Its primary purpose is to alert persons and animals to an oncoming train, especially when approaching a level crossing. They are often extremely loud, allowing them to be heard from great distances. They are also used for acknowledging signals given by railroad employees, such as during switching operations. For steam locomotives, the equivalent device is a train whistle.
History and background
Since trains move on fixed rails, they are uniquely susceptible to collision. This is exacerbated by the train's enormous weight and inertia, which make it difficult to quickly stop when encountering an obstacle. Also, trains generally do not stop at level crossings, instead relying on pedestrians and vehicles to clear the tracks when they pass. Therefore, from their beginnings, locomotives have been equipped with loud horns or bells to warn vehicles and pedestrians that they are coming. Steam locomotives had steam whistles, operated from steam produced by their boilers.
As diesel locomotives began to replace steam on most railroads during the mid-20th century, it was realized that the new locomotives were unable to utilize the steam whistles then in use. Early internal combustion locomotives were initially fitted with small truck horns or exhaust-powered whistles, but these were found to be unsuitable and hence the air horn design was scaled up and modified for railroad use. Early train horns often were tonally similar to the air horns still heard on road-going trucks today. It was found that this caused some confusion among people who were accustomed to steam locomotives and the sound of their whistles; when approaching a grade crossing, when some people heard an air horn they expected to see a truck, not a locomotive, and accidents happened. So, locomotive air horns were created that had a much higher, more musical note, tonally much more like a steam whistle. This is |
https://en.wikipedia.org/wiki/Induction%20generator | An induction generator or asynchronous generator is a type of alternating current (AC) electrical generator that uses the principles of induction motors to produce electric power. Induction generators operate by mechanically turning their rotors faster than synchronous speed. A regular AC induction motor usually can be used as a generator, without any internal modifications. Because they can recover energy with relatively simple controls, induction generators are useful in applications such as mini hydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure.
An induction generator draws reactive excitation current from an external source. Induction generators have an AC rotor and cannot bootstrap using residual magnetization to black start a de-energized distribution system as synchronous machines do. Power factor correcting capacitors can be added externally to neutralize a constant amount of the variable reactive excitation current. After starting, an induction generator can use a capacitor bank to produce reactive excitation current, but the isolated power system’s voltage and frequency are not self-regulating and destabilize readily.
Principle of operation
An induction generator produces electrical power when its rotor is turned faster than the synchronous speed. For a four-pole motor (two pairs of poles on the stator) powered by a 60 Hz source, the synchronous speed is 1800 revolutions per minute (rpm); powered at 50 Hz, it is 1500 rpm. In motor operation the machine always turns slightly slower than the synchronous speed. The difference between synchronous and operating speed is called "slip" and is often expressed as a percentage of the synchronous speed. For example, a motor operating at 1450 rpm with a synchronous speed of 1500 rpm is running at a slip of +3.3%.
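The relationships used in this paragraph can be written as a short calculation; the helper names below are illustrative.

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed in rpm of an AC machine with the given supply frequency and pole count."""
    return 120.0 * frequency_hz / poles

def slip(sync_rpm, rotor_rpm):
    """Slip as a fraction of synchronous speed: positive when motoring, negative when generating."""
    return (sync_rpm - rotor_rpm) / sync_rpm

n_sync = synchronous_speed_rpm(50, 4)        # four-pole machine on a 50 Hz supply
print(n_sync)                                 # 1500.0 rpm
print(f"{slip(n_sync, 1450):+.1%}")           # +3.3%: motoring, as in the example above
print(f"{slip(n_sync, 1550):+.1%}")           # -3.3%: rotor driven above synchronous speed, i.e. generating
```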
In operation as a motor, the stator flux rotation is at the synchronous speed, which is faster than the rotor speed. This causes the stator flux to cycle at the slip frequency inducing rotor |
https://en.wikipedia.org/wiki/Molecular%20Systems%20Biology | Molecular Systems Biology is a peer-reviewed open-access scientific journal covering systems biology at the molecular level (examples include: genomics, proteomics, metabolomics, microbial systems, the integration of cell signaling and regulatory networks), synthetic biology, and systems medicine. It was established in 2005 and published by the Nature Publishing Group on behalf of the European Molecular Biology Organization. As of December 2013, it is published by EMBO Press. |
https://en.wikipedia.org/wiki/Visibility%20%28geometry%29 | In geometry, visibility is a mathematical abstraction of the real-life notion of visibility.
Given a set of obstacles in the Euclidean space, two points in the space are said to be visible to each other, if the line segment that joins them does not intersect any obstacles. (In the Earth's atmosphere light follows a slightly curved path that is not perfectly predictable, complicating the calculation of actual visibility.)
Computation of visibility is among the basic problems in computational geometry and has applications in computer graphics, motion planning, and other areas.
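As a minimal illustration of the definition above, the sketch below decides whether two points are mutually visible among line-segment obstacles, using a standard orientation-based segment-intersection test; the function names and the toy obstacle are illustrative.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a): >0 left turn, <0 right turn, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _on_segment(a, b, p):
    """True if a collinear point p lies within the bounding box of segment ab."""
    return (min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1p2 intersects segment q1q2 (touching counts as intersecting)."""
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    for a, b, p, d in ((q1, q2, p1, d1), (q1, q2, p2, d2), (p1, p2, q1, d3), (p1, p2, q2, d4)):
        if d == 0 and _on_segment(a, b, p):
            return True
    return False

def visible(p, q, obstacles):
    """Two points are visible to each other if the segment pq meets no obstacle segment."""
    return not any(segments_intersect(p, q, a, b) for a, b in obstacles)

walls = [((0.0, -1.0), (0.0, 1.0))]             # one vertical wall on the y-axis
print(visible((-1.0, 0.0), (1.0, 0.0), walls))  # False: the wall blocks the segment
print(visible((-1.0, 2.0), (1.0, 2.0), walls))  # True: the segment passes above the wall
```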
Concepts and problems
Point visibility
Edge visibility
Visibility polygon
Weak visibility
Art gallery problem or museum problem
Visibility graph
Visibility graph of vertical line segments
Watchman route problem
Computer graphics applications:
Hidden surface determination
Hidden line removal
z-buffering
portal engine
Star-shaped polygon
Kernel of a polygon
Isovist
Viewshed
Zone of Visual Influence
Painter's algorithm |
https://en.wikipedia.org/wiki/A/B%20testing | A/B testing (also known as bucket testing, split-run testing, or split testing) is a user experience research methodology. A/B tests consist of a randomized experiment that usually involves two variants (A and B), although the concept can also be extended to multiple variants of the same variable. It includes application of statistical hypothesis testing or "two-sample hypothesis testing" as used in the field of statistics. A/B testing is a way to compare multiple versions of a single variable, for example by testing a subject's response to variant A against variant B, and determining which of the variants is more effective.
Overview
"A/B testing" is a shorthand for a simple randomized controlled experiment, in which a number of samples (e.g. A and B) of a single vector-variable are compared. These values are similar except for one variation which might affect a user's behavior. A/B tests are widely considered the simplest form of controlled experiment, especially when they only involve two variants. However, by adding more variants to the test, its complexity grows.
A/B tests are useful for understanding user engagement and satisfaction of online features like a new feature or product. Large social media sites like LinkedIn, Facebook, and Instagram use A/B testing to make user experiences more successful and as a way to streamline their services.
Today, A/B tests are being used also for conducting complex experiments on subjects such as network effects when users are offline, how online services affect user actions, and how users influence one another. A/B testing is used by data engineers, marketers, designers, software engineers, and entrepreneurs, among others. Many positions rely on the data from A/B tests, as they allow companies to understand growth, increase revenue, and optimize customer satisfaction.
Version A might be used at present (thus forming the control group), while version B is modified in some respect vs. A (thus forming the treatment group) |
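A minimal sketch of the kind of two-sample hypothesis test mentioned above, here a two-proportion z-test on conversion counts for the control (A) and treatment (B) buckets; the sample sizes and conversion counts are invented.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test comparing the conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under H0: both rates are equal
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value from the normal CDF
    return z, p_value

# Invented experiment: 10,000 users per bucket, 520 conversions for A and 585 for B.
z, p = two_proportion_z(520, 10_000, 585, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests B's lift is unlikely to be chance alone
```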
https://en.wikipedia.org/wiki/Leibniz%E2%80%93Clarke%20correspondence | The Leibniz–Clarke correspondence was a scientific, theological and philosophical debate conducted in an exchange of letters between the German thinker Gottfried Wilhelm Leibniz and Samuel Clarke, an English supporter of Isaac Newton during the years 1715 and 1716. The exchange began because of a letter Leibniz wrote to Caroline of Ansbach, in which he remarked that Newtonian physics was detrimental to natural theology. Eager to defend the Newtonian view, Clarke responded, and the correspondence continued until the death of Leibniz in 1716.
Although a variety of subjects are touched on in the letters, the main interest for modern readers is in the dispute between the absolute theory of space favoured by Newton and Clarke, and Leibniz's relational approach. Also important is the conflict between Clarke's and Leibniz's opinions on free will and whether God must create the best of all possible worlds.
Leibniz had published only one book on moral matters, the Theodicée (1710), and his more metaphysical views had never been exposed to a sufficient extent, so the collected letters were met with interest by their contemporaries. The primary dispute between Leibniz and Newton about calculus was still fresh in the public's mind and it was taken as a matter of course that it was Newton himself who stood behind Clarke's replies.
Editions
The Leibniz-Clarke letters were first published under Clarke's name in the year following Leibniz's death. Clarke wrote a preface, took care of the translation from French, added notes and some of his own writing. In 1720 Pierre Desmaizeaux published a similar volume in a French translation, including quotes from Newton's work. It is quite certain that for both editions the opinion of Newton himself was sought, leaving Leibniz at a disadvantage. However, the German translation of the correspondence published by Kohler, also in 1720, contained a reply to Clarke's last letter which Leibniz had not been able to answer, due to his death |
https://en.wikipedia.org/wiki/Fauces%20%28throat%29 | The fauces, isthmus of fauces, or the oropharyngeal isthmus, is the opening at the back of the mouth into the throat. It is a narrow passage between the velum and the base of the tongue.
The fauces is a part of the oropharynx directly behind the oral cavity as a subdivision, bounded superiorly by the soft palate, laterally by the palatoglossal and palatopharyngeal arches, and inferiorly by the tongue. The arches form the pillars of the fauces. The anterior pillar is the palatoglossal arch formed of the palatoglossus muscle. The posterior pillar is the palatopharyngeal arch formed of the palatopharyngeus muscle. Between these two arches on the lateral walls of the oropharynx is the tonsillar fossa which is the location of the palatine tonsil. The arches are also known together as the palatine arches.
Each arch runs downwards, laterally and forwards, from the soft palate to the side of the tongue. The approximation of the arches due to the contraction of the palatoglossal muscles constricts the fauces, and is essential to swallowing.
Faucitis
Inflammation of the fauces, known as faucitis, is seen in animals. In cats, faucitis is usually a secondary disease to gingivitis but can be a primary disease. In this species, faucitis is usually caused by bacterial and viral infections, although food allergies need to be excluded in any diagnosis. Treatment is symptomatic and includes broad-spectrum antibiotics and, in severe cases where cats are inappetent, corticosteroids (often given as depot forms, e.g. depomedrol) or chemotherapy (e.g. chlorambucil).
See also
List of anatomical isthmi |
https://en.wikipedia.org/wiki/Interferon%20gamma%20release%20assay | Interferon-γ release assays (IGRA) are medical tests used in the diagnosis of some infectious diseases, especially tuberculosis. Interferon-γ (IFN-γ) release assays rely on the fact that T-lymphocytes will release IFN-γ when exposed to specific antigens. These tests are mostly developed for the field of tuberculosis diagnosis, but in theory may be used in the diagnosis of other diseases which rely on cell-mediated immunity, e.g. cytomegalovirus, leishmaniasis, and COVID-19. For example, in patients with cutaneous adverse drug reactions, challenge of peripheral blood lymphocytes with the drug causing the reaction produced a positive test result for half of the drugs tested.
There are currently two IFN-γ release assays available for the diagnosis of tuberculosis:
QuantiFERON-TB Gold (licensed in US, Europe and Japan); and
T-SPOT.TB, a form of ELISpot, the variant of ELISA (licensed in Europe, US, Japan and China).
The former test quantitates the amount of IFN-γ produced in response to the ESAT-6 and CFP-10 antigens from Mycobacterium tuberculosis, which are distinguishable from those present in BCG and most other non-tuberculous mycobacteria. The latter test determines the total number of individual effector T cells expressing IFN-γ.
The indications for the test are still disputed. It has been evaluated for the diagnosis of latent tuberculosis in HIV patients (who frequently have a negative Mantoux test). |
https://en.wikipedia.org/wiki/Koinophilia | Koinophilia is an evolutionary hypothesis proposing that during sexual selection, animals preferentially seek mates with a minimum of unusual or mutant features, including functionality, appearance and behavior. Koinophilia intends to explain the clustering of sexual organisms into species and other issues described by Darwin's Dilemma. The term derives from the Greek word koinos meaning "common" or "that which is shared", and philia, meaning "fondness".
Natural selection causes beneficial inherited features to become more common at the expense of their disadvantageous counterparts. The koinophilia hypothesis proposes that a sexually-reproducing animal would therefore be expected to avoid individuals with rare or unusual features, and to prefer to mate with individuals displaying a predominance of common or average features. Mutants with strange, odd or peculiar features would be avoided because most mutations that manifest themselves as changes in appearance, functionality or behavior are disadvantageous. Because it is impossible to judge whether a new mutation is beneficial (or might be advantageous in the unforeseeable future) or not, koinophilic animals avoid them all, at the cost of avoiding the very occasional potentially beneficial mutation. Thus, koinophilia, although not infallible in its ability to distinguish fit from unfit mates, is a good strategy when choosing a mate. A koinophilic choice ensures that offspring are likely to inherit a suite of features and attributes that have served all the members of the species well in the past.
Koinophilia differs from the "like prefers like" mating pattern of assortative mating. If like preferred like, leucistic animals (such as white peacocks) would be sexually attracted to one another, and a leucistic subspecies would come into being. Koinophilia predicts that this is unlikely because leucistic animals are attracted to the average in the same way as are all the other members of its species. Since non-leucistic |
https://en.wikipedia.org/wiki/Nuclear%20matter | Nuclear matter is an idealized system of interacting nucleons (protons and neutrons) that exists in several phases of exotic matter that, as of yet, are not fully established. It is not matter in an atomic nucleus, but a hypothetical substance consisting of a huge number of protons and neutrons held together by only nuclear forces and no Coulomb forces. Volume and the number of particles are infinite, but the ratio is finite. Infinite volume implies no surface effects and translational invariance (only differences in position matter, not absolute positions).
A common idealization is symmetric nuclear matter, which consists of equal numbers of protons and neutrons, with no electrons.
When nuclear matter is compressed to sufficiently high density, it is expected, on the basis of the asymptotic freedom of quantum chromodynamics, that it will become quark matter, which is a degenerate Fermi gas of quarks.
Some authors use "nuclear matter" in a broader sense, and refer to the model described above as "infinite nuclear matter", and consider it as a "toy model", a testing ground for analytical techniques. However, the matter in a neutron star, whose composition requires more than neutrons and protons, is not necessarily locally charge-neutral and does not exhibit translational invariance; it is often referred to differently, for example as neutron star matter or stellar matter, and is considered distinct from nuclear matter. In a neutron star, pressure rises from zero (at the surface) to an unknown large value in the center.
Methods capable of treating finite regions have been applied to stars and to atomic nuclei. One such model for finite nuclei is the liquid drop model, which includes surface effects and Coulomb interactions.
See also
QCD vacuum
Quark–gluon plasma
Degenerate matter
Neutron-degenerate matter
Strange matter
Nuclear structure
Neutronium
Nuclear physics
Nuclear spectroscopy |
https://en.wikipedia.org/wiki/Polyclonal%20B%20cell%20response | Polyclonal B cell response is a natural mode of immune response exhibited by the adaptive immune system of mammals. It ensures that a single antigen is recognized and attacked through its overlapping parts, called epitopes, by multiple clones of B cell.
In the course of normal immune response, parts of pathogens (e.g. bacteria) are recognized by the immune system as foreign (non-self), and eliminated or effectively neutralized to reduce their potential damage. Such a recognizable substance is called an antigen. The immune system may respond in multiple ways to an antigen; a key feature of this response is the production of antibodies by B cells (or B lymphocytes) involving an arm of the immune system known as humoral immunity. The antibodies are soluble and do not require direct cell-to-cell contact between the pathogen and the B-cell to function.
Antigens can be large and complex substances, and any single antibody can only bind to a small, specific area on the antigen. Consequently, an effective immune response often involves the production of many different antibodies by many different B cells against the same antigen. Hence the term "polyclonal", which derives from the words poly, meaning many, and clones from Greek klōn, meaning sprout or twig; a clone is a group of cells arising from a common "mother" cell. The antibodies thus produced in a polyclonal response are known as polyclonal antibodies. The heterogeneous polyclonal antibodies are distinct from monoclonal antibody molecules, which are identical and react against a single epitope only, i.e., are more specific.
Although the polyclonal response confers advantages on the immune system, in particular, greater probability of reacting against pathogens, it also increases chances of developing certain autoimmune diseases resulting from the reaction of the immune system against native molecules produced within the host.
Humoral response to infection
Diseases which can be transmitted from one organism to |
https://en.wikipedia.org/wiki/Multidelay%20block%20frequency%20domain%20adaptive%20filter | The multidelay block frequency domain adaptive filter (MDF) algorithm is a block-based frequency domain implementation of the (normalised) Least mean squares filter (LMS) algorithm.
Introduction
The MDF algorithm is based on the fact that convolutions may be efficiently computed in the frequency domain (thanks to the fast Fourier transform). However, the algorithm differs from the fast LMS algorithm in that the block size it uses may be smaller than the filter length. If both are equal, then MDF reduces to the FLMS algorithm.
The advantages of MDF over the (N)LMS algorithm are:
Lower algorithmic complexity
Partial de-correlation of the input (which 'may' lead to faster convergence)
Variable definitions
Let N be the length of the processing blocks, K be the number of blocks and F denote the 2N×2N Fourier transform matrix. The variables are defined as:
With normalisation matrices:
In practice, when multiplying a column vector by a normalisation matrix, we take the inverse FFT of the vector, set the first N values in the result to zero and then take the FFT. This is meant to remove the effects of the circular convolution.
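A minimal NumPy sketch of the operation just described: go to the time domain, zero the first N samples, and transform back. The block size N and the function name are illustrative.

```python
import numpy as np

def constrain(X_freq, N):
    """Apply the constraint from the text: IFFT, zero the first N samples, FFT back.

    X_freq is one length-2N frequency-domain block.
    """
    x_time = np.fft.ifft(X_freq)
    x_time[:N] = 0.0              # remove the part that would wrap around in circular convolution
    return np.fft.fft(x_time)

# Toy check on a random 2N-point block.
N = 8
X = np.fft.fft(np.random.randn(2 * N))
X_constrained = constrain(X, N)
print(np.allclose(np.fft.ifft(X_constrained)[:N], 0.0))   # True: first N time-domain samples are zero
```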
Algorithm description
For each block, the MDF algorithm is computed as:
It is worth noting that, while the algorithm is more easily expressed in matrix form, the actual implementation requires no matrix multiplications. For instance the normalisation matrix computation reduces to an element-wise vector multiplication because is block-diagonal. The same goes for other multiplications. |
https://en.wikipedia.org/wiki/Lipman%20Bers | Lipman Bers (Latvian: Lipmans Berss; May 22, 1914 – October 29, 1993) was a Latvian-American mathematician, born in Riga, who created the theory of pseudoanalytic functions and worked on Riemann surfaces and Kleinian groups. He was also known for his work in human rights activism.
Biography
Bers was born in Riga, then under the rule of the Russian Czars, and spent several years as a child in Saint Petersburg; his family returned to Riga in approximately 1919, by which time it was part of independent Latvia. In Riga, his mother was the principal of a Jewish elementary school, and his father became the principal of a Jewish high school, both of which Bers attended, with an interlude in Berlin while his mother, by then separated from his father, attended the Berlin Psychoanalytic Institute. After high school, Bers studied at the University of Zurich for a year, but had to return to Riga again because of the difficulty of transferring money from Latvia in the international financial crisis of the time. He continued his studies at the University of Riga, where he became active in socialist politics, including giving political speeches and working for an underground newspaper. In the aftermath of the Latvian coup in 1934 by right-wing leader Kārlis Ulmanis, Bers was targeted for arrest but fled the country, first to Estonia and then to Czechoslovakia.
Bers received his Ph.D. in 1938 from the University of Prague. He had begun his studies in Prague with Rudolf Carnap, but when Carnap moved to the US he switched to Charles Loewner, who would eventually become his thesis advisor. In Prague, he lived with an aunt, and married his wife Mary (née Kagan) whom he had met in elementary school and who had followed him from Riga. Having applied for postdoctoral studies in Paris, he was given a visa to go to France soon after the Munich Agreement, by which Nazi Germany annexed part of Czechoslovakia. He and his wife Mary had a daughter in Paris. They were unable to obtain a visa th |
https://en.wikipedia.org/wiki/Happy%20path | In the context of software or information modeling, a happy path (sometimes called happy flow) is a default scenario featuring no exceptional or error conditions. For example, the happy path for a function validating credit card numbers would be where none of the validation rules raise an error, thus letting execution continue successfully to the end, generating a positive response.
Process steps for a happy path are also used in the context of a use case. In contrast to the happy path, process steps for alternate flow and exception flow may also be documented.
Happy path test is a well-defined test case using known input, which executes without exception and produces an expected output. Happy path testing can show that a system meets its functional requirements but it doesn't guarantee a graceful handling of error conditions or aid in finding hidden bugs.
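A small illustration of a happy-path test for the credit-card validation example mentioned at the start of this entry; the Luhn-check function and the well-known test card number are illustrative, not taken from any particular system.

```python
def is_valid_card_number(number: str) -> bool:
    """Luhn check, a common validation rule for credit-card numbers."""
    if not number.isdigit():
        return False
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def test_happy_path():
    """Known-good input, no exceptions expected, one expected (positive) result."""
    assert is_valid_card_number("4111111111111111")

test_happy_path()
print("happy path passed")
```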
Happy day (or sunny day) scenario and golden path are slang synonyms for happy path.
In use case analysis, there is only one happy path, but there may be any number of additional alternate path scenarios which are all valid optional outcomes. If valid alternatives exist, the happy path is then identified as the default or most likely positive alternative. The analysis may also show one or more exception paths. An exception path is taken as the result of a fault condition. Use cases and the resulting interactions are commonly modeled in graphical languages such as the Unified Modeling Language (UML) or SysML.
Unhappy path
There is no agreed name for the opposite of happy paths: they may be known as sad paths, bad paths, or exception paths. The term 'unhappy path' is gaining popularity as it suggests a complete opposite to 'happy path' and retains the same context. Usually there is no single extra 'unhappy path', which makes the term somewhat imprecise: the happy path runs through to the very end, whereas an unhappy path is shorter, ends prematurely, and does not reach the desired end, i.e. not even the last page of a wizard. |
https://en.wikipedia.org/wiki/Dictionary%20of%20the%20Irish%20Language | Dictionary of the Irish Language: Based Mainly on Old and Middle Irish Materials (also called "the DIL"), published by the Royal Irish Academy, is the definitive dictionary of the origins of the Irish language, specifically the Old Irish, Middle Irish, and Early Modern Irish stages up to c. 1700; the modern language is not included. The original idea for a comprehensive dictionary of early Irish was conceived in 1852 by the two pre-eminent Irish linguists of the time, John O'Donovan and Eugene O'Curry; however, it was more than sixty years until the first fascicle (the letter D as far as the word , compiled by Carl J. S. Marstrander) was published in 1913. It was more than sixty years again until the final fascicle (only one page long and consisting of words beginning with H) was published in 1976 under the editorship of E. G. Quin.
The full dictionary comprises about 2500 pages, but a compact edition (four original pages photoreduced onto one page) was published in 1983, and the decision was made to discontinue printing the full-size edition.
eDIL
A web site has been established to permit scholars to submit annotations for the DIL.
As a result of a project started in 2003, the online edition, known as the electronic Dictionary of the Irish Language (or eDIL), was launched in the Royal Irish Academy on the 27 June 2007. The launch was organised by the Foclóir na Nua-Ghaeilge team in the academy. |
https://en.wikipedia.org/wiki/Charles%20Loewner | Charles Loewner (29 May 1893 – 8 January 1968) was an American mathematician. His name was Karel Löwner in Czech and Karl Löwner in German.
Karl Loewner was born into a Jewish family in Lany, about 30 km from Prague, where his father Sigmund Löwner was a store owner.
Loewner received his Ph.D. from the University of Prague in 1917 under supervision of Georg Pick.
One of his central mathematical contributions is the proof of the Bieberbach conjecture in the first highly nontrivial case of the third coefficient. The technique he introduced, the Loewner differential equation, has had far-reaching implications in geometric function theory; it was used in the final solution of the Bieberbach conjecture by Louis de Branges in 1985. Loewner worked at the University of Berlin, University of Prague, University of Louisville, Brown University, Syracuse University and eventually at Stanford University. His students include Lipman Bers, Roger Horn, Adriano Garsia, and P. M. Pu.
Loewner's torus inequality
In 1949 Loewner proved his torus inequality, to the effect that every metric on the 2-torus satisfies the optimal inequality
where sys is its systole. The boundary case of equality is attained if and only if the metric is flat and homothetic to the so-called equilateral torus, i.e. the torus whose group of deck transformations is precisely the hexagonal lattice spanned by the cube roots of unity in the complex plane.
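In symbols, the inequality referred to above is usually stated as follows, with sys denoting the systole and area the total area of the metric g; this is a restatement rather than a quotation of the original.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Loewner's torus inequality for a Riemannian metric g on the 2-torus
\[
  \operatorname{sys}(g)^{2} \;\le\; \frac{2}{\sqrt{3}}\,\operatorname{area}(g),
\]
with equality precisely for flat metrics homothetic to the equilateral (hexagonal) torus.
\end{document}
```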
Loewner matrix theorem
The Loewner matrix (in linear algebra) is a square matrix or, more specifically, a linear operator (of real functions) associated with 2 input parameters consisting of (1) a real continuously differentiable function on a subinterval of the real numbers and (2) an n-dimensional vector with elements chosen from the subinterval; the 2 input parameters are assigned an output parameter consisting of an n × n matrix.
Let f be a real-valued function that is continuously differentiable on the open interval (a, b).
For any s, t in (a, b) define the divided difference of f at s and t as
f^[1](s, t) = (f(s) − f(t)) / (s − t) for s ≠ t, and f^[1](s, s) = f′(s).
Given , th |
https://en.wikipedia.org/wiki/Henryk%20Iwaniec | Henryk Iwaniec (born October 9, 1947) is a Polish-American mathematician, and since 1987 a professor at Rutgers University.
Background and education
Iwaniec studied at the University of Warsaw, where he got his PhD in 1972 under Andrzej Schinzel. He then held positions at the Institute of Mathematics of the Polish Academy of Sciences until 1983 when he left Poland. He held visiting positions at the Institute for Advanced Study, University of Michigan, and University of Colorado Boulder before being appointed Professor of Mathematics at Rutgers University. He is a citizen of both Poland and the United States.
He and mathematician Tadeusz Iwaniec are twin brothers.
Work
Iwaniec studies both sieve methods and deep complex-analytic techniques, with an emphasis on the theory of automorphic forms and harmonic analysis.
In 1997, Iwaniec and John Friedlander proved that there are infinitely many prime numbers of the form a² + b⁴. Results of this strength had previously been seen as completely out of reach: sieve theory—used by Iwaniec and Friedlander in combination with other techniques—cannot usually distinguish between primes and products of two primes, say.
In 2001, Iwaniec was awarded the seventh Ostrowski Prize. The prize citation read, in part, "Iwaniec's work is characterized by depth, profound understanding of the difficulties of a problem, and unsurpassed technique. He has made deep contributions to the field of analytic number theory, mainly in modular forms on GL(2) and sieve methods."
Awards and honors
He became a fellow of the American Academy of Arts and Sciences in 1995. He was awarded the fourteenth Frank Nelson Cole Prize in Number Theory in 2002. In 2006, he became a member of the National Academy of Science. He received the Leroy P. Steele Prize for Mathematical Exposition in 2011. In 2012, he became a fellow of the American Mathematical Society. In 2015 he was awarded the Shaw Prize in Mathematics. In 2017, he was awarded the AMS Doob Prize (jointly with |
https://en.wikipedia.org/wiki/Magnesium%20citrate | Magnesium citrate is a magnesium preparation in salt form with citric acid in a 1:1 ratio (1 magnesium atom per citrate molecule). It contains 11.23% magnesium by weight.
The name "magnesium citrate" is ambiguous and sometimes may refer to other salts such as trimagnesium dicitrate which has a magnesium:citrate ratio of 3:2, or monomagnesium dicitrate with a ratio of 1:2, or a mix of two or three of the salts of magnesium and citric acid.
Magnesium citrate (sensu lato) is used medicinally as a saline laxative and to completely empty the bowel prior to a major surgery or colonoscopy. It is available without a prescription, both as a generic and under various brand names. It is also used in the pill form as a magnesium dietary supplement.
As a food additive, magnesium citrate is used to regulate acidity and is known as E number E345.
Mechanism of action
Magnesium citrate works by attracting water through the tissues by a process known as osmosis. Once in the intestine, it can attract enough water into the intestine to induce defecation. The additional water stimulates bowel motility. This means it can also be used to treat rectal and colon problems. Magnesium citrate functions best on an empty stomach, and should always be followed with a full (eight ounce or 250 ml) glass of water or juice to help counteract water loss and aid in absorption. Magnesium citrate solutions generally produce bowel movement in one-half to three hours.
There is an exothermic heat generation when water is added, which is "most disagreeable when taken orally."
Use and dosage
The maximum upper tolerance limit (UTL) for magnesium in supplement form for adults is 350 mg of elemental magnesium per day, according to the National Institutes of Health (NIH). In addition, according to the NIH, total dietary requirements for magnesium from all sources (in other words, food and supplements) are 320–420 mg of elemental magnesium per day, though there is no UTL for dietary magnesium.
Laxative
Mag |
https://en.wikipedia.org/wiki/TOMLAB | The TOMLAB Optimization Environment is a modeling platform for solving applied optimization problems in MATLAB.
Description
TOMLAB is a general purpose development and modeling environment in MATLAB for research, teaching and practical solution of optimization problems. It enables a wider range of problems to be solved in MATLAB and provides many additional solvers.
Optimization problems supported
TOMLAB handles a wide range of problem types, among them:
Linear programming
Quadratic programming
Nonlinear programming
Mixed-integer programming
Mixed-integer quadratic programming with or without convex quadratic constraints
Mixed-integer nonlinear programming
Linear and nonlinear least squares with L1, L2 and infinity norm
Exponential data fitting
Global optimization
Semi-definite programming problem with bilinear matrix inequalities
Constrained goal attainment
Geometric programming
Genetic programming
Costly or expensive black-box global optimization
Nonlinear complementarity problems
Additional features
TOMLAB supports more areas than general optimization, for example:
Optimal control with PROPT using Gauss and Chebyshev collocation.
Automatic differentiation with MAD
Interface to AMPL
Further details
TOMLAB supports solvers like CPLEX, SNOPT, KNITRO and MIDACO. Each such solver can be called to solve one single model formulation. The supported solvers are appropriate for many problems, including linear programming, integer programming, and global optimization.
An interface to AMPL makes it possible to formulate the problem in an algebraic format. The MATLAB Compiler enables the user to build stand-alone solutions. Sister products are available for LabVIEW and Microsoft .NET.
Modeling is mainly facilitated by the TomSym class. |
https://en.wikipedia.org/wiki/Phantom%20reference | A phantom reference is a kind of reference in Java that does not prevent its referent's memory from being reclaimed. The phantom reference is one of the strengths or levels of 'non-strong' reference defined in the Java programming language; the others are weak and soft. Phantom references are the weakest level of reference in Java; in order from strongest to weakest, they are: strong, soft, weak, phantom.
An object is phantomly referenced after it has been finalized.
In Java 8 and earlier versions, the reference needs to be cleared before the memory for a finalized referent can be reclaimed. A change in Java 9 will allow memory from a finalized referent to be reclaimable immediately.
Use
Phantom references are of limited use, primarily narrow technical ones. First, they can be used instead of a finalize method, guaranteeing that the object is not resurrected during finalization. This allows the object to be garbage collected in a single cycle, rather than needing to wait for a second GC cycle to ensure that it has not been resurrected. A second use is to detect exactly when an object has been removed from memory (by using a phantom reference in combination with a ReferenceQueue object), ensuring that its memory is available, for example deferring allocation of a large amount of memory (e.g., a large image) until previous memory is freed.
See also
Ephemeron
Weak reference
Soft reference
Circular reference |
https://en.wikipedia.org/wiki/Fujiki%20class%20C | In algebraic geometry, a complex manifold is called Fujiki class C if it is bimeromorphic to a compact Kähler manifold. This notion was defined by Akira Fujiki.
Properties
Let M be a compact manifold of Fujiki class C, and X ⊂ M its complex subvariety. Then X is also in Fujiki class C (Lemma 4.6). Moreover, the Douady space of X (that is, the moduli of deformations of a subvariety X ⊂ M, M fixed) is compact and in Fujiki class C.
Fujiki class C manifolds are examples of compact complex manifolds which are not necessarily Kähler, but for which the ∂∂̄-lemma holds.
Conjectures
J.-P. Demailly and M. Pǎun have shown that a manifold is in Fujiki class C if and only if it supports a Kähler current.
They also conjectured that a manifold M is in Fujiki class C if it admits a nef current ω which is big, that is, satisfies ∫_M ω^n > 0, where n is the complex dimension of M.
For a cohomology class which is rational, this statement is known: by the Grauert–Riemenschneider conjecture, a holomorphic line bundle L with nef and big first Chern class has maximal Kodaira dimension, hence the corresponding rational map to projective space is generically finite onto its image, which is algebraic, and therefore Kähler.
Fujiki and Ueno asked whether the property is stable under deformations. This conjecture was disproven in 1992 by Y.-S. Poon and Claude LeBrun |
https://en.wikipedia.org/wiki/Atransferrinemia | Atransferrinemia is an autosomal recessive metabolic disorder in which there is an absence of transferrin, a plasma protein that transports iron through the blood.
Atransferrinemia is characterized by anemia and hemosiderosis in the heart and liver. The iron damage to the heart can lead to heart failure. The anemia is typically microcytic and hypochromic (the red blood cells are abnormally small and pale). Atransferrinemia was first described in 1961 and is extremely rare, with only ten documented cases worldwide.
Symptoms and signs
Anemia, arthritis, hepatic anomalies, and recurrent infections are clinical signs of the disease. Iron overload occurs mainly in the liver, heart, pancreas, thyroid, and kidney.
Genetics
Researchers have identified mutations in the TF gene as a probable cause of atransferrinemia in affected people.
Transferrin is a serum transport protein that carries iron to the reticuloendothelial system for utilization and erythropoiesis. Since there is no transferrin in atransferrinemia, serum free iron cannot reach reticuloendothelial cells, resulting in microcytic anemia. The excess iron also deposits itself in the heart, liver and joints, causing damage. Ferritin, the storage form of iron, is secreted into the bloodstream in greater amounts to bind the excess free iron, and hence serum ferritin levels rise in this condition.
Diagnosis
The diagnosis of atransferrinemia is done via the following means to ascertain if an individual has the condition:
Blood test(for anemia)
TF level
Physical exam
Genetic test
Types
There are two forms of this condition that causes an absence of transferrin in the affected individual:
Acquired atransferrinemia
Congenital atransferrinemia
Treatment
The treatment of atransferrinemia is apotransferrin, the missing protein without bound iron. Iron treatment is detrimental, as it does not correct the anemia and is a cause o |
https://en.wikipedia.org/wiki/Clitocybe%20dealbata | Clitocybe dealbata, also known as the ivory funnel, is a small white funnel-shaped basidiomycete fungus widely found in lawns, meadows and other grassy areas in Europe and North America. Also known as the sweating mushroom, or sweat producing clitocybe, it derives these names from the symptoms of poisoning. It contains potentially deadly levels of muscarine.
Taxonomy and naming
Clitocybe dealbata was initially described by British naturalist James Sowerby in 1799 as Agaricus dealbatus, its specific epithet derived from the Late Latin verb dealbare 'to whitewash', inexorably calling to mind the Biblical "whited sepulchre", that is outwardly pleasing but inwardly toxic. It gained its current genus name in 1874 when reclassified by French naturalist Claude Casimir Gillet. However, this species is often considered a synonym of Clitocybe rivulosa and according to Bon the name C. dealbata may be invalid (a nomen dubium) as James Sowerby's definition conflicts with Elias Magnus Fries's.
Description
A small white or white dusted with buff-coloured mushroom, the 2–4 cm diameter cap is flattened to depressed with adnate to decurrent crowded white gills. The stipe is 2–4 cm tall and 0.5–1 cm wide. The spore print is white. There is no distinctive taste or smell.
It is one of a number of similar poisonous species such as the false champignon (Clitocybe rivulosa) which can be confused with the edible fairy ring champignon (Marasmius oreades), or miller (Clitopilus prunulus).
Distribution and habitat
The ivory funnel is found in grassy habitats in summer and autumn. Often gregarious, it can form fairy rings. Unfortunately, they often occur in grassy areas where they may be encountered by children or pets. This may increase risk of accidental consumption.
Toxicity
The main toxic component of Clitocybe dealbata is muscarine, and thus the symptoms are like those of nerve agent poisoning, namely greatly increased salivation, sweating (perspiration), and the flow of tears (lacr |
https://en.wikipedia.org/wiki/PHD%20finger | The PHD finger was discovered in 1993 as a Cys4-His-Cys3 motif in the plant homeodomain (hence PHD) proteins HAT3.1 in Arabidopsis and maize ZmHox1a.
The PHD zinc finger motif resembles the metal binding RING domain (Cys3-His-Cys4) and FYVE domain. It occurs as a single finger, but often in clusters of two or three, and it also occurs together with other domains, such as the chromodomain and the bromodomain.
Role in epigenetics
The PHD finger, approximately 50-80 amino acids in length, is found in more than 100 human proteins. Several of the proteins it occurs in are found in the nucleus, and are involved in chromatin-mediated gene regulation. The PHD finger occurs in proteins such as the transcriptional co-activators p300 and CBP, Polycomb-like protein (Pcl), Trithorax-group proteins like ASH1L, ASH2L and MLL, the autoimmune regulator (AIRE), Mi-2 complex (part of histone deacetylase complex), the co-repressor TIF1, the JARID1-family of demethylases and many more.
Structure
The NMR structure of the PHD finger from human WSTF (Williams Syndrome Transcription Factor) shows that the conserved cysteines and histidine coordinate two Zn2+ ions. In general, the PHD finger adopts a globular fold, consisting of a two-stranded beta-sheet and an alpha-helix. The region consisting of these secondary structures and the residues involved in coordinating the zinc-ions are very conserved among species. The loop regions I and II are variable and could contribute functional specificity to the different PHD fingers.
Function
The PHD fingers of some proteins, including ING2, YNG1 and NURF, have been reported to bind to histone H3 tri-methylated on lysine 4 (H3K4me3), while other PHD fingers have tested negative in such assays. A protein called KDM5C has a PHD finger, which has been reported to bind histone H3 tri-methylated lysine 9 (H3K9me3). Based on these publications, binding to tri-methylated lysines on histones may therefore be a property widespread among PHD fingers. Do |
https://en.wikipedia.org/wiki/Expression%20language | An expression language is a language for creating a computer-interpretable representation of specific knowledge and may refer to:
Advanced Boolean Expression Language, an obsolete hardware description language for hardware descriptions
Data Analysis Expressions (DAX), an expression language developed by Microsoft and used in Power Pivot, among other places
Jakarta Expression Language, a domain-specific language used in Jakarta EE web applications; formerly known as "Unified Expression Language", "Expression Language", or just "the Expression Language".
Rights Expression Languages, machine processable language used for representing immaterial rights such as copyright and license information |
https://en.wikipedia.org/wiki/Multiplex%20ligation-dependent%20probe%20amplification | Multiplex ligation-dependent probe amplification (MLPA) is a variation of the multiplex polymerase chain reaction that permits amplification of multiple targets with only a single primer pair. It detects copy number changes at the molecular level, and software programs are used for analysis. Identification of deletions or duplications can indicate pathogenic mutations, thus MLPA is an important diagnostic tool used in clinical pathology laboratories worldwide.
History
Multiplex ligation-dependent probe amplification was invented by Jan Schouten, a Dutch scientist. The method was first described in 2002 in the scientific journal Nucleic Acids Research. The first applications included the detection of exon deletions in the human genes BRCA1, MSH2 and MLH1, which are linked to hereditary breast and colon cancer. Now MLPA is used to detect hundreds of hereditary disorders, as well as for tumour profiling.
Description
MLPA quantifies the presence of particular sequences in a sample of DNA, using a specially designed probe pair for each target sequence of interest. The process consists of multiple steps:
The sample DNA is denatured, resulting in single-stranded sample DNA.
Pairs of probes are hybridized to the sample DNA, with each probe pair designed to query for the presence of a particular DNA sequence.
Ligase is applied to the hybridized DNA, combining probe pairs that are hybridized immediately next to each other into a single strand of DNA that can be amplified by PCR.
PCR amplifies all probe pairs that have been successfully ligated, using fluorescently labeled PCR primers.
The PCR products are quantified, typically by (capillary) electrophoresis.
Each probe pair consists of two oligonucleotides, with sequence that recognizes adjacent sites of the target DNA, a PCR priming site, and optionally a "stuffer" to give the PCR product a unique length when compared to other probe pairs in the MLPA assay. Each complete probe pair must have a unique length, so t |
https://en.wikipedia.org/wiki/Bethlem%20myopathy | Bethlem myopathy is predominantly an autosomal dominant myopathy, classified as a congenital form of muscular dystrophy. There are two types of Bethlem myopathy, based on which type of collagen is affected.
Bethlem myopathy 1 (BTHLM1), formerly known as Limb-girdle muscular dystrophy 5 (LGMDD5), is caused by a mutation in one of the three genes coding for type VI collagen. These include COL6A1, COL6A2, and COL6A3. It is autosomal dominant, though uncommonly can be autosomal recessive.
Bethlem myopathy 2 (BTHLM2), formerly known as myopathic-type Ehlers-Danlos syndrome, is caused by a mutation on the COL12A1 gene coding for type XII collagen. It is autosomal dominant.
Gowers's sign, toe walking, multiple contractures of the joints (especially the fingers: 'Bethlem sign'), skin abnormalities, and muscle weakness (proximal more than distal) are typical signs and symptoms of the disease. Initially, in early childhood, there may also be joint laxity. There is no cardiac involvement in either Bethlem myopathy 1 or 2, which helps to differentiate it from Emery-Dreifuss muscular dystrophy. Currently there is no cure for the disease and symptomatic treatment is used to relieve symptoms and improve quality of life.
Bethlem myopathy may be diagnosed based on clinical examinations, and laboratory tests may be recommended. Genetic testing for known pathological variants is preferred. In the case of a variant of uncertain significance (VUS), testing of dermal fibroblast culture is used for an accurate diagnosis.
Bethlem myopathy 1 is a rare disease, affecting about 1 in 200,000 people. Bethlem myopathy 2 is an ultra-rare disease, affecting less than 1 in 1,000,000 people.
The condition was described by J. Bethlem and G. K. van Wijngaarden in 1976.
Signs and symptoms
Bethlem myopathy is a slowly progressive muscle disease characterized predominantly by contractures, rigidity of the spine, skin abnormalities and proximal muscle weakness. Symptoms may present as early as infancy, with typical contractures and h |
https://en.wikipedia.org/wiki/ADAM17 | A disintegrin and metalloprotease 17 (ADAM17), also called TACE (tumor necrosis factor-α-converting enzyme), is a 70-kDa enzyme that belongs to the ADAM protein family of disintegrins and metalloproteases.
Chemical characteristics
ADAM17 is an 824-amino acid polypeptide.
Function
ADAM17 is understood to be involved in the processing of tumor necrosis factor alpha (TNF-α) at the surface of the cell, and from within the intracellular membranes of the trans-Golgi network. This process, which is also known as 'shedding', involves the cleavage and release of a soluble ectodomain from membrane-bound pro-proteins (such as pro-TNF-α), and is of known physiological importance. ADAM17 was the first 'sheddase' to be identified, and is also understood to play a role in the release of a diverse variety of membrane-anchored cytokines, cell adhesion molecules, receptors, ligands, and enzymes.
Cloning of the TNF-α gene revealed it to encode a 26 kDa type II transmembrane pro-polypeptide that becomes inserted into the cell membrane during its maturation. At the cell surface, pro-TNF-α is biologically active, and is able to induce immune responses via juxtacrine intercellular signaling. However, pro-TNF-α can undergo a proteolytic cleavage at its Ala76-Val77 amide bond, which releases a soluble 17kDa extracellular domain (ectodomain) from the pro-TNF-α molecule. This soluble ectodomain is the cytokine commonly known as TNF-α, which is of pivotal importance in paracrine signaling. This proteolytic liberation of soluble TNF-α is catalyzed by ADAM17.
Recently, ADAM17 was discovered as a crucial mediator of resistance to radiotherapy. Radiotherapy can induce a dose-dependent increase of furin-mediated cleavage of the ADAM17 proform to active ADAM17, which results in enhanced ADAM17 activity in vitro and in vivo. It was also shown that radiotherapy activates ADAM17 in non-small cell lung cancer, which results in shedding of multiple survival factors, growth factor pathway activati |
https://en.wikipedia.org/wiki/Molar%20conductivity | The molar conductivity of an electrolyte solution is defined as its conductivity divided by its molar concentration:
Λm = κ / c
where:
κ is the measured conductivity (formerly known as specific conductance),
c is the molar concentration of the electrolyte.
The SI unit of molar conductivity is siemens metres squared per mole (S m2 mol−1). However, values are often quoted in S cm2 mol−1. In these last units, the value of Λm may be understood as the conductance of a volume of solution between parallel plate electrodes one centimeter apart and of sufficient area so that the solution contains exactly one mole of electrolyte.
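As a worked example of the unit handling described above, the following sketch converts a measured conductivity to a molar conductivity in both unit systems; the numbers are approximate textbook values for a 0.1 mol/L KCl solution at 25 °C and are used here only for illustration.

```python
# Illustrative unit handling for molar conductivity (approximate KCl values assumed).
kappa = 1.29          # measured conductivity in S/m (~0.1 mol/L KCl at 25 degC)
c_mol_per_L = 0.1     # molar concentration in mol/L

c_mol_per_m3 = c_mol_per_L * 1000          # 1 mol/L = 1000 mol/m^3
lambda_si = kappa / c_mol_per_m3           # molar conductivity in S m^2 mol^-1
lambda_cgs = lambda_si * 1e4               # 1 S m^2 mol^-1 = 10^4 S cm^2 mol^-1

print(f"{lambda_si:.4f} S m^2/mol  =  {lambda_cgs:.0f} S cm^2/mol")
# -> 0.0129 S m^2/mol = 129 S cm^2/mol
```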
Variation of molar conductivity with dilution
There are two types of electrolytes: strong and weak. Strong electrolytes usually undergo complete ionization, and therefore they have higher conductivity than weak electrolytes, which undergo only partial ionization. For strong electrolytes, such as salts, strong acids and strong bases, the molar conductivity depends only weakly on concentration. On dilution there is a regular increase in the molar conductivity of a strong electrolyte, due to the decrease in solute–solute interaction. Based on experimental data Friedrich Kohlrausch (around the year 1900) proposed the non-linear law for strong electrolytes:
Λm = Λ°m − K√c = α fλ Λ°m
where
Λ°m is the molar conductivity at infinite dilution (or limiting molar conductivity), which can be determined by extrapolation of Λm as a function of √c,
K is the Kohlrausch coefficient, which depends mainly on the stoichiometry of the specific salt in solution,
α is the dissociation degree even for strong concentrated electrolytes,
fλ is the lambda factor for concentrated solutions.
This law is valid for low electrolyte concentrations only; it fits into the Debye–Hückel–Onsager equation.
For weak electrolytes (i.e. incompletely dissociated electrolytes), however, the molar conductivity strongly depends on concentration: The more dilute a solution, the greater its molar conductivity, due to incr |
https://en.wikipedia.org/wiki/Philo%20line | In geometry, the Philo line is a line segment defined from an angle and a point inside the angle as the shortest line segment through the point that has its endpoints on the two sides of the angle. Also known as the Philon line, it is named after Philo of Byzantium, a Greek writer on mechanical devices, who lived probably during the 1st or 2nd century BC. Philo used the line to double the cube; because doubling the cube cannot be done by a straightedge and compass construction, neither can constructing the Philo line.
Geometric characterization
The defining point of a Philo line, and the base of a perpendicular from the apex of the angle to the line, are equidistant from the endpoints of the line.
That is, suppose that segment is the Philo line for point and angle , and let be the base of a perpendicular line to . Then and .
Conversely, if and are any two points equidistant from the ends of a line segment , and if is any point on the line through that is perpendicular to , then is the Philo line for angle and point .
Algebraic Construction
A suitable fixation of the line given the directions from to and from to and the location of in that infinite triangle is obtained by the following algebra:
The point is put into the center of the coordinate system, the direction from to defines the horizontal -coordinate, and the direction from to defines the line with the equation in the rectilinear coordinate system. is the tangent of the angle in the triangle . Then has the Cartesian Coordinates and the task is to find on the horizontal axis and on the other side of the triangle.
The equation of a bundle of lines with inclinations that
run through the point is
These lines intersect the horizontal axis at
which has the solution
These lines intersect the opposite side at
which has the solution
The squared Euclidean distance between the intersections of the horizontal line
and the diagonal is
The Philo Line is defined by the |
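Although the Philo line cannot be constructed with straightedge and compass, it is easy to approximate numerically. The sketch below assumes a 60° wedge with its vertex at the origin and an arbitrary interior point, scans the directions of chords through the point, and keeps the shortest one; all names and values are illustrative.

```python
# Numerical sketch of the Philo line: scan directions of a line through the
# interior point and keep the shortest chord cut off by the two sides of the
# angle. The wedge geometry and the point are assumed for illustration.
import math

phi = math.radians(60)        # opening angle of the wedge, vertex at the origin
P = (1.0, 0.4)                # interior point, assumed to lie inside the wedge

def chord_length(theta):
    """Length of the segment through P in direction theta with one endpoint on
    the x-axis and one on the ray y = x tan(phi); inf if no such chord."""
    dx, dy = math.cos(theta), math.sin(theta)
    m = math.tan(phi)
    if abs(dy) < 1e-12 or abs(dy - m * dx) < 1e-12:
        return float("inf")
    t1 = -P[1] / dy                         # hits y = 0 at P + t1*(dx, dy)
    t2 = (m * P[0] - P[1]) / (dy - m * dx)  # hits y = m*x at P + t2*(dx, dy)
    x1, x2 = P[0] + t1 * dx, P[0] + t2 * dx
    if t1 * t2 > 0 or x1 < 0 or x2 < 0:     # P must lie between endpoints on the rays
        return float("inf")
    return abs(t1 - t2)

best_len, best_theta = min(
    (chord_length(math.pi * k / 20000), math.pi * k / 20000) for k in range(1, 20000)
)
print(f"shortest chord ~ {best_len:.6f} at direction {math.degrees(best_theta):.3f} deg")
```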
https://en.wikipedia.org/wiki/Collaborative%20Study%20on%20the%20Genetics%20of%20Alcoholism | The Collaborative Studies on the Genetics of Alcoholism (COGA) is an eleven-center research project in the United States designed to understand the genetic basis of alcoholism. Research is conducted at University of Connecticut, Indiana University, University of Iowa, SUNY Downstate Medical Center, Washington University in St. Louis, University of California at San Diego, Rutgers University, University of Texas Health Science Center at San Antonio, Virginia Commonwealth University, Icahn School of Medicine at Mount Sinai, and Howard University.
Henri Begleiter and Theodore Reich were the founding PI and Co-PI of COGA. Since 1991, COGA has interviewed more than 17,000 members of more than 2,200 families from around the United States, many of whom have been longitudinally assessed. Family members, including adults, children, and adolescents, have been carefully characterized across a variety of domains, including alcohol- and other substance-related phenotypes, co-occurring disorders (e.g., depression), electrophysiology, key precursor behavioral phenotypes (e.g., conduct disorder), and environmental risk factors (e.g., stress). This has provided a very rich phenotypic dataset to complement the large repository of cell lines and DNA for current and future studies. The dataset has been made widely available to advance the field: hundreds of researchers have worked with data generated as part of COGA through a variety of mechanisms, including data sharing through the Genetic Analysis Workshops, as COGA collaborators, through meta-analysis consortia including the Psychiatric Genomics Consortium, and as independent requestors for COGA samples and data.
In studying alcoholism, COGA hopes to find better ways of treating the disorder and improving the lives of the millions of people who suffer from it. The COGA project has achieved national and international acclaim for its accomplishments, and numerous articles about the study have been publi |
https://en.wikipedia.org/wiki/Muenke%20syndrome | Muenke syndrome, also known as FGFR3-related craniosynostosis, is a human specific condition characterized by the premature closure of certain bones of the skull during development, which affects the shape of the head and face. First described by Maximilian Muenke, the syndrome occurs in about 1 in 30,000 newborns. This condition accounts for an estimated 8 percent of all cases of craniosynostosis.
Signs and symptoms
Many people with this disorder have a premature fusion of skull bones along the coronal suture; however, not every case involves craniosynostosis. Other parts of the skull may be malformed as well. This usually causes an abnormally shaped head, wide-set eyes, low-set ears and flattened cheekbones in these patients. About 5 percent of affected individuals have an enlarged head (macrocephaly). There may also be associated hearing loss in 10–33% of cases, and it is important for affected individuals to have hearing tests to check for a problem. The hearing loss can range from partial (about 33%) to complete (100%).
Most people with this condition have normal intellect, but developmental delay and learning disabilities are possible. The signs and symptoms of Muenke syndrome vary among affected people, and some findings overlap with those seen in other craniosynostosis syndromes. Between 6 percent and 7 percent of people with the gene mutation associated with Muenke syndrome do not have any of the characteristic features of the disorder.
Other implications of Muenke syndrome
Apart from craniosynostosis, it has been suggested that hearing loss and learning difficulties are common in Muenke syndrome. According to the Ulster Medical Journal, most individuals with Muenke syndrome may have limb findings. The most common ocular finding in Muenke syndrome is strabismus, as studied by Dr. Agochukwu-Nwubah and her research team.
Approximately 20% of people with Muenke syndrome will also be affected by epilepsy.
Causes
Muenke syndrome is caused by a specific gene mutation in |
https://en.wikipedia.org/wiki/BPIFA1 | BPI fold containing family A, member 1 (BPIFA1), also known as Palate, lung, and nasal epithelium clone (PLUNC), is a protein that in humans is encoded by the BPIFA1 gene. It was also formerly known as "Secretory protein in upper respiratory tracts" (SPURT). The BPIFA1 gene sequence predicts 4 transcripts (splice variants); 3 mRNA variants have been well characterized. The resulting BPIFA1 is a secreted protein, expressed at very high levels in the mucosa of the airways (olfactory and respiratory epithelium) and salivary glands; at high levels in oropharyngeal epithelium, including tongue and tonsils; and at moderate levels in many other tissue types and glands, including pituitary, testis, lung, bladder, blood, prostate, the digestive tract (tongue, stomach, intestinal epithelium) and pancreas. The protein can be detected on the apical side of epithelial cells and in airway surface liquid, nasal mucus, and sputum.
Superfamily
BPIFA1 is a member of a BPI fold protein superfamily defined by the presence of the bactericidal/permeability-increasing protein fold (BPI fold) which is formed by two similar domains in a "boomerang" shape. This superfamily is also known as the BPI/LBP/PLUNC family or the BPI/LPB/CETP family. The BPI fold creates apolar binding pockets that can interact with hydrophobic and amphipathic molecules, such as the acyl carbon chains of lipopolysaccharide found on Gram-negative bacteria, but members of this family may have many other functions.
Genes for the BPI/LBP/PLUNC superfamily are found in all vertebrate species, including distant homologs in non-vertebrate species such as insects, mollusks, and roundworms. Within that broad grouping is the BPIF gene family whose members encode the BPI fold structural motif and are found clustered on a single chromosome, e.g., Chromosome 20 in humans, Chromosome 2 in mouse, Chromosome 3 in rat, Chromosome 17 in pig, Chromosome 13 in cow. The BPIF gene family is split into two groupings, B |
https://en.wikipedia.org/wiki/Friedlander%E2%80%93Iwaniec%20theorem | In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form a² + b⁴. The first few such primes are
2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … .
The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form a² + b⁴ less than X is roughly of the order X^(3/4).
History
The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work.
Refinements
The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial a² + b⁴ represents infinitely many primes when the variable b is also required to be prime, and they gave an asymptotic formula for the number of such primes below a given bound.
Special case
When b = 1, the Friedlander–Iwaniec primes have the form a² + 1, forming the set
2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … .
It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem. |
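For illustration, a brute-force search reproduces the opening terms of the sequence; the bound and the trial-division primality test are arbitrary choices for a small demonstration.

```python
# Enumerate Friedlander-Iwaniec primes (primes of the form a^2 + b^4) below a bound.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def fi_primes(limit):
    found = set()
    a = 1
    while a * a + 1 <= limit:
        b = 1
        while a * a + b ** 4 <= limit:
            n = a * a + b ** 4
            if is_prime(n):
                found.add(n)
            b += 1
        a += 1
    return sorted(found)

print(fi_primes(1000))
# -> 2, 5, 17, 37, 41, 97, 101, ..., 977 (the terms listed above)
```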
https://en.wikipedia.org/wiki/Calf-intestinal%20alkaline%20phosphatase | Calf-intestinal alkaline phosphatase (CIAP/CIP) is a type of alkaline phosphatase that catalyzes the removal of phosphate groups from the 5' end of DNA strands and phosphomonoesters from RNA. This enzyme is frequently used in DNA sub-cloning, as DNA fragments that lack the 5' phosphate groups cannot ligate. This prevents recircularization of the linearized DNA vector and improves the yield of the vector containing the appropriate insert.
Applications
Calf-intestinal alkaline phosphatase can serve as an effective tool for removing uranium from groundwater and soil that can pose major health risks. Furthermore, the toxicity of lipopolysaccharide (LPS) was mitigated by calf-intestinal alkaline phosphatase in mice and piglets, which indicates that it could be a promising new therapeutic agent for treating diseases associated with LPS. |
https://en.wikipedia.org/wiki/Flag%20%28geometry%29 | In (polyhedral) geometry, a flag is a sequence of faces of a polytope, each contained in the next, with exactly one face from each dimension.
More formally, a flag ψ of an n-polytope is a set of its faces {F−1, F0, ..., Fn} such that Fi−1 ⊆ Fi and there is precisely one Fi in ψ for each i (−1 ≤ i ≤ n). Since, however, the minimal face F−1 and the maximal face Fn must be in every flag, they are often omitted from the list of faces, as a shorthand. These latter two are called improper faces.
For example, a flag of a polyhedron comprises one vertex, one edge incident to that vertex, and one polygonal face incident to both, plus the two improper faces.
A polytope may be regarded as regular if, and only if, its symmetry group is transitive on its flags. This definition excludes chiral polytopes.
Incidence geometry
In the more abstract setting of incidence geometry, which is a set having a symmetric and reflexive relation called incidence defined on its elements, a flag is a set of elements that are mutually incident. This level of abstraction generalizes both the polyhedral concept given above as well as the related flag concept from linear algebra.
A flag is maximal if it is not contained in a larger flag. An incidence geometry (Ω, I) has rank r if Ω can be partitioned into sets Ω1, Ω2, ..., Ωr, such that each maximal flag of the geometry intersects each of these sets in exactly one element. In this case, the elements of set Ωi are called elements of type i.
Consequently, in a geometry of rank r, each maximal flag has exactly r elements.
An incidence geometry of rank 2 is commonly called an incidence structure with elements of type 1 called points and elements of type 2 called blocks (or lines in some situations). More formally,
An incidence structure is a triple D = (V, B, I) where V and B are any two disjoint sets and I is a binary relation between V and B, that is, I ⊆ V × B. The elements of V will be called points, those of B blocks, and those of I flags.
Notes |
https://en.wikipedia.org/wiki/Commercial%20Processing%20Workload | The Commercial Processing Workload (CPW) is a simplified variant of the industry-wide TPC-C benchmarking standard originally developed by IBM to compare the performance of their various AS/400 (now IBM i) server offerings.
The related, but less commonly used Computational Intensive Workload (CIW) measures performance in a situation where there is a high ratio of computation to input/output communication. The reverse situation is simulated by the CPW. |
https://en.wikipedia.org/wiki/Silver%20Y | The silver Y (Autographa gamma) is a migratory moth of the family Noctuidae which is named for the silvery Y-shaped mark on each of its forewings.
Description
The silver Y is a medium-sized moth with a wingspan of 30 to 45 mm. The wings are intricately patterned with various shades of brown and grey providing excellent camouflage. In the centre of each forewing there is a silver-coloured mark shaped like a letter Y or a (lower case) Greek letter Gamma. There are several different forms with varying colours depending on the climate in which the larvae grow.
Technical description and variation
P. gamma Forewing purplish grey, with darker suffusion in places; the lines pale silvery edged on both sides with dark fuscous, the outer line indented on vein 2 and submedian fold, as in circumflexa; the oblique orbicular and the reniform conversely oblique and constricted in middle, both edged with silvery: the median area below middle blackish, containing a silvery gamma; the subterminal dentate and indented, preceded by a darker shade; hindwing brownish grey with darker veins and a broad blackish terminal border: aberrations due to difference in ground colour are ab. pallida Tutt, in which the ground colour is whitish grey, with the markings appearing darker and more sharply defined; ab. rufescens Tutt where it is yellowish red, with the gamma mark pale golden, also the lines and edges of stigmata, and the whole underside reddish: and ab. nigricans Spul., in which the whole forewing up to the pale terminal space is violet black brown; in the ab. purpurissa ab. nov. (65a) the ground colour is deep olive brown; the inner and outer lines violet, the latter double; subterminal line lustrous violet, irregularly waved and below the middle forming a strong W-shaped mark; the gamma mark is pale golden, and the edges of the dark stigmata are, like the inner line, finely lustrous; a pale violet terminal stripe before termen; hindwing bronzy brownish, with broad dark terminal border |
https://en.wikipedia.org/wiki/Sophomore%27s%20dream | In mathematics, the sophomore's dream is the pair of identities (especially the first)
∫₀¹ x^(−x) dx = Σ_{n≥1} n^(−n)   and   ∫₀¹ x^x dx = Σ_{n≥1} (−1)^(n+1) n^(−n),
discovered in 1697 by Johann Bernoulli.
The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively.
The name "sophomore's dream" is in contrast to the name "freshman's dream" which is given to the incorrect identity The sophomore's dream has a similar too-good-to-be-true feel, but is true.
Proof
The proofs of the two identities are completely analogous, so only the proof of the second is presented here.
The key ingredients of the proof are:
to write x^x = exp(x ln x) (using the notation ln for the natural logarithm and exp for the exponential function);
to expand exp(x ln x) using the power series for exp; and
to integrate termwise, using integration by substitution.
In details, can be expanded as
Therefore,
By uniform convergence of the power series, one may interchange summation and integration to yield
To evaluate the above integrals, one may change the variable in the integral via the substitution With this substitution, the bounds of integration are transformed to giving the identity
By Euler's integral identity for the Gamma function, one has
so that
Summing these (and changing indexing so it starts at n = 1 instead of n = 0) yields the formula.
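The second identity is easy to check numerically. The sketch below compares a midpoint-rule estimate of the integral with a partial sum of the series; the step count and truncation point are arbitrary.

```python
# Numerical check of the second identity: the integral of x^x on (0, 1) versus
# the alternating series sum of (-1)^(n+1) / n^n.
from math import exp, log

def integral_x_to_x(steps=200000):
    # Midpoint rule; the integrand x^x = exp(x ln x) extends continuously to 1 at x = 0.
    total = 0.0
    h = 1.0 / steps
    for k in range(steps):
        x = (k + 0.5) * h
        total += exp(x * log(x)) * h
    return total

series = sum((-1) ** (n + 1) / n ** n for n in range(1, 20))
print(integral_x_to_x(), series)   # both ~ 0.7834305107...
```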
Historical proof
The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms.
The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing |
https://en.wikipedia.org/wiki/Scherpenberg%20mill | The Scherpenberg mill, located in Westmalle, Belgium, is a tower mill that was built in 1843 to grind grain into flour. It is currently owned and operated by the municipal authorities of Malle, the only hours of operation being Sunday from 1:30 P.M. until 5:00 P.M.
History
The mill was built in 1843 by Joannes and Petrus Mullenbrück (alternately found as Meulenbroeck). The Mullenbrücks were the sons of Christianus Mullenbrück, who had come from Westphalia to Westmalle in 1808. Joannes became a miller in Westmalle, and his brother Petrus moved to Ossendrecht in the Netherlands where he also worked as a miller.
The mill was in use until 1961, when it became obsolete. The municipality of Westmalle subsequently purchased the mill in 1962, when Jozef Caers began the first restoration work. After several years of restoration, the mill again became operational in 1985. In 2003, major maintenance work was carried out.
Owners and millers
The owners of the mill are as follows:
Christianus Mullenbrück-Aerts (1843–1853)
Joannes Mullenbrück-Nicolay (1853–1881)
Franciscus Stevens-Mullenbrück (1881–1891)
Franciscus Janssens-Geerts (1891–1930)
Gustaaf Janssens-Van de Mierop, and his widow (1930–1954)
Magdalena and Laura Huyskens-Janssens (1956–1962)
Municipal authorities of Westmalle (1962–1977)
Municipal authorities of Malle (as of 1977)
The millers are as follows; note that some owners also acted as millers:
Joannes Mullenbrück-Nicolay (1843–1881)
Franciscus Stevens-Mullenbrück (1881–1891)
Jan Verschueren (1891–1927)
Frans Boekx (1927–1954)
Jozef Boekx (1954–1962)
Municipal authorities of Westmalle (1962–1977)
Municipal authorities of Malle (as of 1977)
See also
Molinology
Windmill
Tower mills
Grinding mills
Tourist attractions in Antwerp Province
Buildings and structures in Antwerp Province
Windmills in Belgium
Windmills completed in 1843
1843 establishments in Belgium
Malle |
https://en.wikipedia.org/wiki/Pasteur%20effect | The Pasteur effect describes how available oxygen inhibits ethanol fermentation, driving yeast to switch toward aerobic respiration for increased generation of the energy carrier adenosine triphosphate (ATP). More generally, in the medical literature, the Pasteur effect refers to how the cellular presence of oxygen causes in cells a decrease in the rate of glycolysis and also a suppression of lactate accumulation. The effect occurs in animal tissues, as well as in microorganisms belonging to the fungal kingdom.
Discovery
The effect was described by Louis Pasteur in 1857 in experiments showing that aeration of yeasted broth causes cell growth to increase while the fermentation rate decreases, based on lowered ethanol production.
Explanation
Yeast fungi, being facultative anaerobes, can either produce energy through ethanol fermentation or aerobic respiration. When the O2 concentration is low, the two pyruvate molecules formed through glycolysis are each fermented into ethanol and carbon dioxide. While only 2 ATP are produced per glucose, this method is utilized under anaerobic conditions because it oxidizes the electron shuttle NADH into NAD+ for another round of glycolysis and ethanol fermentation.
If the concentration of oxygen increases, pyruvate is instead converted to acetyl CoA, used in the citric acid cycle, and undergoes oxidative phosphorylation. Per glucose, 10 NADH and 2 FADH2 are produced in cellular respiration for a significant amount of proton pumping to produce a proton gradient utilized by ATP Synthase. While the exact ATP output ranges based on considerations like the overall electrochemical gradient, aerobic respiration produces far more ATP than the anaerobic process of ethanol fermentation. The increased ATP and citrate from aerobic respiration allosterically inhibit the glycolysis enzyme phosphofructokinase 1 because less pyruvate is needed to produce the same amount of ATP.
Despite this energetic incentive, Rosario Lagunas has shown that ye |
https://en.wikipedia.org/wiki/Zalman%20Usiskin | Zalman Usiskin is an educator best known as the Director of the University of Chicago School Mathematics Project.
He was born to Nathan and Esther Usiskin.
A faculty member since 1969, he also has taught junior and senior high-school mathematics and has authored and co-authored many textbooks, including a six-volume series used as part of the University School Mathematics Project secondary curriculum. In recognition of his work, he has received a Lifetime Achievement Award from the National Council of Teachers of Mathematics.
Usiskin's doctoral dissertation in mathematical education at the University of Michigan involved the field testing of his book, Geometry: A Transformation Approach, which was written with Arthur Coxford. This book has greatly influenced the way geometry is taught in many American high schools, according to the NCTM citation. With the founding of UCSMP in 1983, he became Director of the secondary component and has been the project's overall Director since 1987. The University School Mathematics Project has grown to become the nation's largest university-based curriculum project for kindergarten through 12th-grade mathematics, with several million students using its elementary and secondary textbooks and other materials. |
https://en.wikipedia.org/wiki/Jackson%27s%20inequality | In approximation theory, Jackson's inequality is an inequality bounding the value of function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of its derivatives. Informally speaking, the smoother the function is, the better it can be approximated by polynomials.
Statement: trigonometric polynomials
For trigonometric polynomials, the following was proved by Dunham Jackson:
Theorem 1: If is an times differentiable periodic function such that
then, for every positive integer , there exists a trigonometric polynomial of degree at most such that
where depends only on .
The Akhiezer–Krein–Favard theorem gives the sharp value of (called the Akhiezer–Krein–Favard constant):
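The formulas themselves were stripped from this extract. For orientation, a commonly cited form of Theorem 1 and of the sharp constant is sketched below; this is a reconstruction from standard references under the usual normalization, not a quotation of the article.

```latex
% Common textbook form of Jackson's Theorem 1 (assumed normalization): if f is
% 2\pi-periodic, r times differentiable, and |f^{(r)}(x)| \le 1, then for every
% positive integer n there is a trigonometric polynomial T_{n-1} of degree at
% most n-1 with
\[ |f(x) - T_{n-1}(x)| \le \frac{C(r)}{n^{r}} , \]
% and the sharp constant (Akhiezer--Krein--Favard) is usually written as
\[ C(r) = K_r = \frac{4}{\pi} \sum_{k=0}^{\infty}
      \frac{(-1)^{k(r+1)}}{(2k+1)^{r+1}},
   \qquad K_1 = \tfrac{\pi}{2}, \quad K_2 = \tfrac{\pi^2}{8}. \]
```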
Jackson also proved the following generalisation of Theorem 1:
Theorem 2: One can find a trigonometric polynomial of degree such that
where denotes the modulus of continuity of function with the step
An even more general result of four authors can be formulated as the following Jackson theorem.
Theorem 3: For every natural number , if is -periodic continuous function, there exists a trigonometric polynomial of degree such that
where constant depends on and is the -th order modulus of smoothness.
For this result was proved by Dunham Jackson. Antoni Zygmund proved the inequality in the case when in 1945. Naum Akhiezer proved the theorem in the case in 1956. For this result was established by Sergey Stechkin in 1967.
Further remarks
Generalisations and extensions are called Jackson-type theorems. A converse to Jackson's inequality is given by Bernstein's theorem. See also constructive function theory. |
https://en.wikipedia.org/wiki/Polymeric%20liquid%20crystal | Polymeric liquid crystals are similar to monomeric liquid crystals used in displays. Both have dielectric anisotropy, or the ability to change directions and absorb or transmit light depending on electric fields. Polymeric liquid crystals form long head-to-tail or side-chain polymers, which are woven in thick mats and therefore have high viscosities. The high viscosities allow the polymeric liquid crystals to be used in complex structures, but they are harder to align, limiting their usefulness. The polymerics align in microdomains facing all different directions, which ruins the optical effect. One solution to this is to mix in a small amount of photo-curing polymer, which when spin-coated onto a surface can be hardened. In essence, the polymeric liquid crystal and photocurer are aligned in one direction, and then the photocurer is cured, "freezing" the polymeric in one direction. |
https://en.wikipedia.org/wiki/Wake%20Forest%20Institute%20for%20Regenerative%20Medicine | The Wake Forest Institute for Regenerative Medicine (WFIRM) is a research institute affiliated with Wake Forest School of Medicine and located in Winston-Salem, North Carolina, United States.
WFIRM's goal is to apply the principles of regenerative medicine to repair or replace diseased tissues and organs. Among other goals, WFIRM scientists are looking for ways to create insulin-producing cells in the laboratory, engineer blood vessels for heart bypass surgery, and treat knee injuries through regenerated meniscus tissue. WFIRM has also led two federal initiatives to regenerate tissues from battlefield injuries (AFIRM I and AFIRM II), with a combined funding of $160 million from the U.S. Department of Defense. WFIRM is working to develop more than 40 different organs and tissues in the laboratory.
Anthony Atala, M.D., is the director of the institute, which is located in Wake Forest Innovation Quarter in downtown Winston-Salem. Atala was recruited by Wake Forest Baptist Medical Center in 2004, and brought many of his team members from the Laboratory for Tissue Engineering and Cellular Therapeutics at the Children's Hospital Boston and Harvard Medical School. Notable achievements announced at WFIRM include the first lab-grown organ, a urinary bladder. The artificial urinary bladder was the first to be implanted into a human. WFIRM research also discovered stem cells harvested from the amniotic fluid of pregnant women. These stem cells are pluripotent, meaning that they can be manipulated to differentiate into various types of mature cells that make up nerve, muscle, bone, and other tissues while avoiding the problems of tumor formation and ethical concerns that are associated with embryonic stem cells. Research at WFIRM was also essential towards developing the field of bioprinting. This was first accomplished by converting a Hewlett Packard paper and ink printer to deposit cells, which is now on display at the National Museum of Health and Medicine. Later, the |
https://en.wikipedia.org/wiki/Constraint%20%28computational%20chemistry%29 | In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The general steps involved are: (i) choose novel unconstrained coordinates (internal coordinates), (ii) introduce explicit constraint forces, (iii) minimize constraint forces implicitly by the technique of Lagrange multipliers or projection methods.
Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred.
Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics, typically the length of covalent bonds to hydrogen are constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied.
Mathematical background
The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form
where M is a mass matrix and q is the vector of generalized coordinates that describe the particles' positions. For example, the vector q may be a 3N Cartesian coordinates of the particle positions rk, where k runs from 1 to N; in the absence of constraints, M would be the 3Nx3N diagonal square matrix of the particle masses. The vector f represents the generaliz |
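As an illustration of the implicit constraint-force idea, the sketch below applies a SHAKE-style iterative correction to a single bond-length constraint between two mass points. The masses, positions, and tolerance are arbitrary, and production constraint solvers (SHAKE, RATTLE, LINCS) handle many coupled constraints per time step.

```python
# Minimal sketch of a SHAKE-style iterative correction for one distance
# constraint |r1 - r2| = d between two mass points (illustrative values).
import numpy as np

def shake_pair(r1, r2, m1, m2, d, tol=1e-10, max_iter=100):
    """Move the two positions along their connecting vector, weighted by
    inverse mass, until the bond length equals d."""
    r1, r2 = r1.astype(float), r2.astype(float)
    for _ in range(max_iter):
        delta = r1 - r2
        diff = delta @ delta - d * d
        if abs(diff) < tol:
            break
        # Lagrange-multiplier-like scalar for this single constraint
        g = diff / (2.0 * (delta @ delta) * (1.0 / m1 + 1.0 / m2))
        r1 -= g * delta / m1
        r2 += g * delta / m2
    return r1, r2

# An unconstrained update left the bond slightly stretched; project it back.
r1, r2 = np.array([0.0, 0.0, 0.0]), np.array([1.05, 0.0, 0.0])
r1c, r2c = shake_pair(r1, r2, m1=12.0, m2=1.0, d=1.0)
print(np.linalg.norm(r1c - r2c))   # ~1.0
```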
https://en.wikipedia.org/wiki/Core%20router | A core router is a router designed to operate in the Internet backbone, or core. To fulfill this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols being used in the core. A core router is distinct from an edge router: edge routers sit at the edge of a backbone network and connect to core routers.
History
Like the term "supercomputer", the term "core router" refers to the largest and most capable routers of the then-current generation. A router that was a core router when introduced would likely not be a core router ten years later. Although the local area NPL network was using line speeds of 768 kbit/s from 1967, at the inception of the ARPANET (the Internet's predecessor) in 1969, the fastest links were 56 kbit/s. A given routing node had at most six links. The "core router" was a dedicated minicomputer called an IMP Interface Message Processor. Link speeds increased steadily, requiring progressively more powerful routers until the mid-1990s, when the typical core link speed reached 155 Mbit/s. At that time, several breakthroughs in fiber optic telecommunications (notably DWDM and EDFA) technologies combined to lower bandwidth costs that in turn drove a sudden dramatic increase in core link speeds: by 2000, a core link operated at 2.5 Gbit/s and core Internet companies were planning for 10 Gbit/s speeds.
The largest provider of core routers in the 1990s was Cisco Systems, who provided core routers as part of a broad product line. Juniper Networks entered the business in 1996, focusing primarily on core routers and addressing the need for a radical increase in routing capability that was driven by the increased link speed. In addition, several new companies attempted to develop new core routers in the late 1990s. It was during this period that the term "core router" came into |
https://en.wikipedia.org/wiki/Proxy%20list | A proxy list is a list of open HTTP/HTTPS/SOCKS proxy servers all on one website. Proxies allow users to make indirect network connections to other computer network services. Proxy lists include the IP addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. Proxy lists are often organized by the various proxy protocols the servers use. Many proxy lists index web proxies, which can be used without changing browser settings.
Proxy Anonymity Levels
Elite proxies - Such proxies do not change request fields and look like a real browser, and the client's real IP address is hidden. Server administrators will commonly be fooled into believing that no proxy is being used.
Anonymous proxies - These proxies do not show a real IP address; however, they do change the request fields, so it is very easy to detect by log analysis that a proxy is being used. The client is still anonymous, but some server administrators may restrict proxy requests.
Transparent proxies - (not anonymous, simply HTTP) - These change the request fields and they transfer the real IP address. Such proxies are not applicable for security or privacy uses while surfing the web, and should only be used for network speed improvement.
SOCKS is a protocol that relays TCP sessions through a firewall host to allow application users transparent access across the firewall. Because the protocol is independent of application protocols, it can be (and has been) used for many different services, such as telnet, FTP, finger, whois, gopher, WWW, etc. Access control can be applied at the beginning of each TCP session; thereafter the server simply relays the data between the client and the application server, incurring minimum processing overhead. Since SOCKS never has to know anything about the application protocol, it should also be easy for it to accommodate applications that use encryption to protect their traffic from nosy snoopers. No information about the client is se |
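For illustration, a client could use one entry taken from a proxy list with Python's requests library as follows; the proxy address is a placeholder from the documentation IP range, not a working server.

```python
# Using one entry from a proxy list with the requests library.
# The proxy address below is a placeholder, not a working server.
import requests

proxies = {
    "http": "http://203.0.113.10:8080",   # hypothetical HTTP proxy from a list
    "https": "http://203.0.113.10:8080",
}

try:
    resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
    print(resp.json())          # shows the IP address the target server sees
except requests.RequestException as exc:
    print("proxy failed:", exc)
```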
https://en.wikipedia.org/wiki/Chromodomain | A chromodomain (chromatin organization modifier) is a protein structural domain of about 40–50 amino acid residues commonly found in proteins associated with the remodeling and manipulation of chromatin. The domain is highly conserved among both plants and animals, and is represented in a large number of different proteins in many genomes, such as that of the mouse. Some chromodomain-containing genes have multiple alternative splicing isoforms that omit the chromodomain entirely. In mammals, chromodomain-containing proteins are responsible for aspects of gene regulation related to chromatin remodeling and formation of heterochromatin regions. Chromodomain-containing proteins also bind methylated histones and appear in the RNA-induced transcriptional silencing complex. In histone modifications, chromodomains are very conserved. They function by identifying and binding to methylated lysine residues that exist on the surface of chromatin proteins and thereby regulate gene transcription.
See also
Bromodomain
Chromo shadow domain |
https://en.wikipedia.org/wiki/Lipofectamine | Lipofectamine or Lipofectamine 2000 is a common transfection reagent, produced and sold by Invitrogen, used in molecular and cellular biology. It is used to increase the transfection efficiency of RNA (including mRNA and siRNA) or plasmid DNA into in vitro cell cultures by lipofection. Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids.
Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE, which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression.
In order for a cell to express a transgene, the nucleic acid must reach the nucleus of the cell to begin transcription. However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope.
This method of transfection was invented by Dr. Yongliang Chu.
See also
Lipofection
Transfection
Vectors in gene therapy
Cationic liposome |
https://en.wikipedia.org/wiki/Mobile%20communications%20over%20IP | MoIP, or mobile communications over Internet Protocol, is the mobilization of peer-to-peer communications including chat and talk using Internet Protocol via standard mobile communications applications including 3G, GPRS, Wi-Fi as well as WiMax. Unlike mobile VoIP, MoIP is not a VoIP program made accessible from mobile phones or a switchboard application using VoIP in the background. It is rather a native mobile application on users’ handsets and used to conduct talk and chat over the internet connection as its primary channel.
How MoIP (mobile) works
MoIP applications typically work without any proprietary hardware, are enhanced with real-time contact availability (presence) and save the users money by utilizing free Wi-Fi internet access or fixed internet data plans instead of GSM (talk) minutes. They are completely mobile-centric, designed and optimized specifically for mobile-handsets environment rather than the PC. |
https://en.wikipedia.org/wiki/Equivalent%20dumping%20coefficient | An equivalent dumping coefficient is a mathematical coefficient used in the calculation of the energy dissipated when a structure moves. As a civil engineering term, it defines the percentage of a cycle of oscillation that is absorbed (converted to heat by friction) by the structure or sub-structure under analysis. Usually it is assumed that the equivalent dumping coefficient is linear, which is to say invariant with respect to oscillatory amplitude. Modern seismic studies have shown this not to be a satisfactory assumption for larger civic structures, and have developed sophisticated amplitude- and frequency-based functions for the equivalent dumping coefficient.
When a building moves, the materials it is made from absorb a fraction of the kinetic energy (this is especially true of concrete) due primarily to friction and to viscous or elastomeric resistance which convert motion or kinetic energy to heat. |
https://en.wikipedia.org/wiki/Technora | Technora is an aramid that is useful for a variety of applications that require high strength or chemical resistance. It is a brand name of the company Teijin Aramid.
Technora was used on January 25, 2004 to suspend the NASA Mars rover Opportunity from its parachute during descent.
It was also later used by NASA as one of the materials, combined with nylon and Kevlar, making up the parachute that was used to perform a braking manoeuvre during atmospheric entry of the Perseverance rover that landed on Mars on February 18, 2021.
Production
Technora is produced by condensation polymerization of terephthaloyl chloride (TCl) with a mixture of p-phenylenediamine (PPD) and 3,4'-diaminodiphenylether (3,4'-ODA). The polymer is closely related to Teijin Aramid's Twaron or DuPont's Kevlar. Technora is derived from two different diamines, 3,4'-ODA and PPD, whereas Twaron is derived from PPD alone. Because only one amide solvent is used in this very straightforward procedure, spinning can be completed immediately after polymer synthesis.
Physical properties
Technora has a better strength-to-weight ratio than steel. Technora also has fire-resistant properties, which can be beneficial.
Major industrial uses
Automotive and other industries:
Turbo hoses
High-pressure hoses
Timing and V-belts
Mechanical rubber goods reinforcement
Linear tension
Optical fiber cables (OFC)
Ram air parachute suspension lines
Ropes, wire ropes and cables
Umbilical cables
Electrical mechanical cable (EMC)
Windsurfing sails
Hang glider sails
Drumheads
Personal protective equipment
Poi (performance art)
See also
Vectran |
https://en.wikipedia.org/wiki/Crabtree%20effect | The Crabtree effect, named after the English biochemist Herbert Grace Crabtree, describes the phenomenon whereby the yeast, Saccharomyces cerevisiae, produces ethanol (alcohol) in aerobic conditions at high external glucose concentrations rather than producing biomass via the tricarboxylic acid (TCA) cycle, the usual process occurring aerobically in most yeasts e.g. Kluyveromyces spp. This phenomenon is observed in most species of the Saccharomyces, Schizosaccharomyces, Debaryomyces, Brettanomyces, Torulopsis, Nematospora, and Nadsonia genera. Increasing concentrations of glucose accelerates glycolysis (the breakdown of glucose) which results in the production of appreciable amounts of ATP through substrate-level phosphorylation. This reduces the need of oxidative phosphorylation done by the TCA cycle via the electron transport chain and therefore decreases oxygen consumption. The phenomenon is believed to have evolved as a competition mechanism (due to the antiseptic nature of ethanol) around the time when the first fruits on Earth fell from the trees. The Crabtree effect works by repressing respiration by the fermentation pathway, dependent on the substrate.
Ethanol formation in Crabtree-positive yeasts under strictly aerobic conditions was at first thought to be caused by the inability of these organisms to increase the rate of respiration above a certain value. This critical value, above which alcoholic fermentation occurs, is dependent on the strain and the culture conditions. More recent evidence demonstrates that the occurrence of alcoholic fermentation might not be primarily due to a limited respiratory capacity, but could be caused by a limit in the cellular Gibbs energy dissipation rate.
For S. cerevisiae in aerobic conditions, glucose concentrations below 150 mg/L did not result in ethanol production. Above this value, ethanol was formed with rates increasing up to a glucose concentration of 1000 mg/L. Thus, above 150 mg/L glucose the organism exhibite |
https://en.wikipedia.org/wiki/Louisa%20Garrett%20Anderson | Louisa Garrett Anderson, CBE (28 July 1873 – 15 November 1943) was a medical pioneer, a member of the Women's Social and Political Union, a suffragette, and social reformer. She was the daughter of the founding medical pioneer Elizabeth Garrett Anderson, whose biography she wrote in 1939.
Anderson was the Chief Surgeon of the Women's Hospital Corps (WHC) and a Fellow of the Royal Society of Medicine. Her aunt, Dame Millicent Fawcett, was a British suffragist. Her partner was fellow doctor and suffragette Flora Murray. Her cousin was Dr Mona Chalmers Watson who also supported suffragettes and founded the Women's Army Auxiliary Corps.
Early life and education
Louisa Garrett Anderson was the oldest of three children of Elizabeth Garrett Anderson, the first woman to qualify as a doctor in Britain, co-founder of the London School of Medicine for Women and Britain's first elected woman Mayor. Her father was James George Skelton Anderson, co-owner of the Orient Steamship Company with his uncle Arthur Anderson. She was educated at St Leonards School in St Andrews and London School of Medicine for Women, where she received her Bachelor of Medicine and Bachelor of Surgery in 1898. Anderson received her Doctor of Medicine in 1900, enrolled in further postgraduate studies at Johns Hopkins Medical School and traveled to observe operations in Paris and Chicago.
Early career
Despite her education, Anderson was unable to join a general major hospital, as attitudes at the time opposed female doctors treating both men and women. As a result, in 1902, she joined the New Hospital for Women, a women's-only hospital founded by her mother which treated women and children. Anderson first worked as a surgical assistant and later as a senior surgeon. She performed gynaecological and general operations, and co-published a paper with the hospital pathologist in 1908 discussing her hysterectomy operations and dissecting the 265 cases of uterine cancer treated at the New Hospital for Women. |
https://en.wikipedia.org/wiki/Truncation%20error | In numerical analysis and scientific computing, truncation error is an error caused by approximating a mathematical process.
Examples
Infinite series
A summation series for is given by an infinite series such as
In reality, we can only use a finite number of these terms, as it would take an infinite amount of computational time to use all of them. Suppose we use only the first three terms of the series; then
In this case, the truncation error is
Example A:
Given the following infinite series, find the truncation error for if only the first three terms of the series are used.
Solution
Using only first three terms of the series gives
The sum of an infinite geometrical series
is given by
For our series, and , to give
The truncation error hence is
Differentiation
The definition of the exact first derivative of the function is given by
However, if we are calculating the derivative numerically, the step size has to be finite. The error caused by choosing a finite step size is a truncation error in the mathematical process of differentiation.
Example A:
Find the truncation in calculating the first derivative of at using a step size of
Solution:
The first derivative of is
and at ,
The approximate value is given by
The truncation error hence is
Integration
The definition of the exact integral of a function from to is given as follows.
Let be a function defined on a closed interval of the real numbers, , and
be a partition of I, where
where and .
This implies that we are finding the area under the curve using infinite rectangles. However, if we are calculating the integral numerically, we can only use a finite number of rectangles. The error caused by choosing a finite number of rectangles as opposed to an infinite number of them is a truncation error in the mathematical process of integration.
Example A.
For the integral
find the truncation error if a two-segment left-hand Riemann sum is used with equal width of segments.
Soluti |
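The same idea can be demonstrated numerically for the forward-difference derivative, whose truncation error is approximately (h/2)·f''(x). The function, point, and step sizes below are arbitrary illustrations rather than the values used in the article's examples.

```python
# Truncation error of the forward-difference approximation to f'(x):
# f'(x) ~ (f(x+h) - f(x)) / h, with error roughly (h/2) * f''(x).
import math

f = math.exp               # example function; f'(x) = f''(x) = e^x
x = 1.0
for h in (0.1, 0.01, 0.001):
    approx = (f(x + h) - f(x)) / h
    error = approx - math.exp(x)           # truncation error (round-off negligible here)
    print(f"h={h:<6} error={error:.6f}  (h/2)*f''(x)={h / 2 * math.exp(x):.6f}")
```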
https://en.wikipedia.org/wiki/Flora%20Murray | Flora Murray (8 May 1869 – 28 July 1923) was a Scottish medical pioneer, and a member of the Women's Social and Political Union suffragettes. From 1914 to the end of her life, she lived with her partner and fellow doctor Louisa Garrett Anderson.
Early life and education
Murray was born on 8 May 1869 at Murraythwaite, Dumfries, Scotland, the daughter of Grace Harriet Murray (née Graham) and John Murray, a landowner and Royal Navy captain. Murray was the fourth of six children.
Murray attended school in Germany and London before attending the London Hospital in Whitechapel in 1890, as a probationer nurse, for a six-month course. Murray decided on her career in medicine and went on to study in the London School of Medicine for Women in 1897. She then worked as a medical assistant for 18 months at an asylum at the Crichton Royal Institution in Dumfriesshire. This experience was crucial in the writing of her MD thesis, 'Asylum Organization and Management' (1905). She completed her medical education at Durham University, receiving her MB BSc in 1903, and MD in 1905. She received a Diploma in Public Health from the University of Cambridge in 1906.
During her time in Scotland, Murray lived in Edinburgh with Dr Elsie Inglis, founder of the Scottish Women's Hospitals movement. Historians such as Hamer and Jennings have argued that Murray had her "first serious lesbian relationship" with Elsie Inglis.
Career
Physician
In 1905 Murray was a medical officer at the Belgrave Hospital for Children in London and then an anaesthetist at the Chelsea Hospital for Women. In 1905 The Lancet published an article that she authored on the use of anaesthetic in children, titled Ethyl chloride as an anaesthetic for children.
Suffragette
Murray's involvement in women's suffrage began when she became a participant in, and activist for, Millicent Fawcett's National Union of Women's Suffrage Societies. She then continued her work in women's suffrage as a supporter of Women's Social and Po |
https://en.wikipedia.org/wiki/Variable-range%20hopping | Variable-range hopping is a model used to describe carrier transport in a disordered semiconductor or in an amorphous solid by hopping in an extended temperature range. It has a characteristic temperature dependence of
σ = σ₀ exp[−(T₀/T)^β]
where σ is the conductivity and β is a parameter dependent on the model under consideration.
Mott variable-range hopping
The Mott variable-range hopping describes low-temperature conduction in strongly disordered systems with localized charge-carrier states and has a characteristic temperature dependence of
σ = σ₀ exp[−(T₀/T)^(1/4)]
for three-dimensional conductance (with β = 1/4), and is generalized to d dimensions:
σ = σ₀ exp[−(T₀/T)^(1/(d+1))].
Hopping conduction at low temperatures is of great interest because of the savings the semiconductor industry could achieve if they were able to replace single-crystal devices with glass layers.
Derivation
The original Mott paper introduced a simplifying assumption that the hopping energy depends inversely on the cube of the hopping distance (in the three-dimensional case). Later it was shown that this assumption was unnecessary, and this proof is followed here. In the original paper, the hopping probability at a given temperature was seen to depend on two parameters, R the spatial separation of the sites, and W, their energy separation. Apsley and Hughes noted that in a truly amorphous system, these variables are random and independent and so can be combined into a single parameter, the range between two sites, which determines the probability of hopping between them.
Mott showed that the probability of hopping between two states of spatial separation R and energy separation W has the form:
P ∝ exp[−2αR − W/(kT)]
where α⁻¹ is the attenuation length for a hydrogen-like localised wave-function. This assumes that hopping to a state with a higher energy is the rate limiting process.
We now define the range ℛ = 2αR + W/(kT) between two states, so P ∝ exp(−ℛ). The states may be regarded as points in a four-dimensional random array (three spatial coordinates and one energy coordinate), with the "distance" betw |
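As a quick sanity check of the three-dimensional Mott law, synthetic data generated from it should fall on a straight line when ln σ is plotted against T^(−1/4); the prefactor and Mott temperature in the sketch below are arbitrary.

```python
# Sketch: verify that synthetic Mott-law data give a straight line when
# ln(sigma) is plotted against T^(-1/4); sigma0 and T0 are arbitrary.
import numpy as np

sigma0, T0 = 1.0e3, 5.0e5                    # assumed prefactor and Mott temperature
T = np.linspace(50, 300, 40)                 # temperature range in kelvin
sigma = sigma0 * np.exp(-(T0 / T) ** 0.25)   # three-dimensional Mott VRH

slope, intercept = np.polyfit(T ** -0.25, np.log(sigma), 1)
print("recovered T0 ~", slope ** 4)          # slope = -T0^(1/4), so T0 = slope^4
print("recovered sigma0 ~", np.exp(intercept))
```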
https://en.wikipedia.org/wiki/Magnetic%20tweezers | Magnetic tweezers (MT) are scientific instruments for the manipulation and characterization of biomolecules or polymers. These apparatus exert forces and torques on individual molecules or groups of molecules, and can be used to measure the tensile strength or the force generated by molecules.
Most commonly magnetic tweezers are used to study mechanical properties of biological macromolecules like DNA or proteins in single-molecule experiments. Other applications are the rheology of soft matter, and studies of force-regulated processes in living cells. Forces are typically on the order of pico- to nanonewtons (pN to nN). Due to their simple architecture, magnetic tweezers are a popular biophysical tool.
In experiments, the molecule of interest is attached to a magnetic microparticle. The magnetic tweezer is equipped with magnets that are used to manipulate the magnetic particles whose position is measured with the help of video microscopy.
Construction principle and physics of magnetic tweezers
A magnetic tweezers apparatus consists of magnetic micro-particles, which can be manipulated with the help of an external magnetic field. The position of the magnetic particles is then determined by a microscopic objective with a camera.
Magnetic particles
Magnetic particles for the operation in magnetic tweezers come with a wide range of properties and have to be chosen according to the intended application. Two basic types of magnetic particles are described in the following paragraphs; however there are also others like magnetic nanoparticles in ferrofluids, which allow experiments inside a cell.
Superparamagnetic beads
Superparamagnetic beads are commercially available with a number of different characteristics. The most common is the use of spherical particles of a diameter in the micrometer range. They consist of a porous latex matrix in which magnetic nanoparticles have been embedded. Latex is auto-fluorescent and may therefore be advantageous for the imaging of t |
https://en.wikipedia.org/wiki/North%20American%20Mycological%20Association | The North American Mycological Association (NAMA), is a non-profit organization of amateurs and professionals who are interested in fungi, including mushrooms, morels, truffles, molds, and related organisms. NAMA aims "to promote, pursue, and advance the science of mycology."
Membership
Membership is open to all persons interested in fungi, including both professionals and amateurs of any skill level.
Publications
The official journal of NAMA is McIlvainea: The Journal of Amateur Mycology, which is published bi-annually.
NAMA members also receive The Mycophile, NAMA's bi-monthly newsletter.
NAMA members also provide educational material for teaching and learning about fungi via their website.
Activities
Since 1961, NAMA has sponsored an annual foray, at which members meet to collect and identify mushrooms and other fungi. Each year the foray takes place in a different location in North America.
NAMA tracks North American mushroom poisoning cases (of humans and animals), and maintains a registry for submission of new cases. NAMA also provides a guide to mushroom poisoning symptoms.
See also
Mycological Society of America, NAMA's sister society for professional mycologists
External links
Official webpage of the North American Mycological Association
NAMA Registry of Mushroom Poisoning cases