https://en.wikipedia.org/wiki/TRPC
TRPC is a family of transient receptor potential cation channels in animals. TRPC channels form the subfamily of channels in humans most closely related to Drosophila TRP channels. Structurally, members of this family possess a number of similar characteristics, including 3 or 4 ankyrin repeats near the N-terminus and a TRP box motif containing the invariant EWKFAR sequence at the proximal C-terminus. These channels are non-selectively permeable to cations, with the preference for calcium over sodium varying among the different members. Many TRPC channel subunits are able to coassemble. The predominant TRPC channels in the mammalian brain are TRPC1, TRPC4 and TRPC5, and they are densely expressed in corticolimbic brain regions such as the hippocampus, prefrontal cortex and lateral septum. These three channels are activated by the metabotropic glutamate receptor 1 agonist dihydroxyphenylglycine. In general, TRPC channels can be activated by phospholipase C stimulation, with some members also activated by diacylglycerol. There is at least one report that TRPC1 is also activated by stretching of the membrane, and TRPC5 channels are activated by extracellular reduced thioredoxin. It has long been proposed that TRPC channels underlie the calcium release-activated channels observed in many cell types, which open due to the depletion of intracellular calcium stores. Two other proteins, stromal interaction molecules (STIMs) and Orais, however, have more recently been implicated in this process. STIM1 and TRPC1 can coassemble, complicating the understanding of this phenomenon. TRPC6 has been implicated in late-onset Alzheimer's disease. Role in cardiomyopathies Research on the role of TRPC channels in cardiomyopathies is still in progress. An upregulation of the TRPC1, TRPC3 and TRPC6 genes is seen in heart disease states, including fibroblast formation and cardiovascular disease. The TRPC channels are suspected of responding to an overload of hormonal and mechanical stress.
https://en.wikipedia.org/wiki/TRPV
TRPV is a family of transient receptor potential cation channels (TRP channels) in animals. TRPV5 and TRPV6 are highly calcium selective, while TRPV1–TRPV4 are non-selective cation channels with a preference for calcium. TRP channels are a large group of ion channels consisting of six protein families, located mostly on the plasma membrane of numerous human and animal cell types, and in some fungi. TRP channels were initially discovered in the trp mutant strain of the fruit fly Drosophila, which displayed transient elevation of potential in response to light stimuli, and were therefore named "transient receptor potential" channels. The name now refers only to a family of proteins with similar structure and function, not to the mechanism of their activation. Later, TRP channels were found in vertebrates, where they are ubiquitously expressed in many cell types and tissues. There are about 28 TRP channels that share some structural similarity to each other. These are divided into two broad groups: group 1 includes TRPC ("C" for canonical), TRPV ("V" for vanilloid), TRPM ("M" for melastatin), TRPN and TRPA. In group 2 there are TRPP ("P" for polycystic) and TRPML ("ML" for mucolipin). Structure Functional TRPV ion channels are tetrameric in structure and are either homo-tetrameric (four identical subunits) or hetero-tetrameric (a total of four subunits selected from two or more types of subunits). The four subunits are symmetrically arranged around the ion conduction pore. Although the extent of heteromerization has been the subject of some debate, the most recent research in this area suggests that all four thermosensitive TRPVs (1–4) can form heteromers with each other. This result is in line with the general observation that TRP coassembly tends to occur between subunits with high sequence similarities. How TRP subunits recognize and interact with each other is still poorly understood. The TRPV channel monomeric subunit components each contain six transmembrane (TM) domains (designated S1–S6) with a pore domain between the fifth (S5) and sixth (S6) segments.
https://en.wikipedia.org/wiki/TRPM
TRPM is a family of transient receptor potential ion channels ("M" standing for melastatin). Functional TRPM channels are believed to form tetramers. The TRPM family consists of eight different channels, TRPM1–TRPM8. Unlike the TRPC and TRPV subfamilies, TRPM subunits do not contain N-terminal ankyrin repeat motifs but, rather, contain entire functional proteins in their C-termini. TRPM6 and TRPM7, for example, contain functional α-kinase segments, which are a type of serine/threonine-specific protein kinase. Permeability and activation The relative permeability to calcium and magnesium varies widely among TRPM channels. TRPM4 and TRPM5 are impermeable to calcium, whereas TRPM3, TRPM6 and TRPM7 are highly permeable to both calcium and magnesium. The mechanism of activation also varies greatly among TRPM channels. TRPM2 is activated by ADP-ribose (adenosine 5′-diphosphoribose) and functions as a sensor of redox status in cells. TRPM4 and TRPM5 are activated by intracellular calcium. TRPM8 can be activated by low temperatures, menthol, eucalyptol and icilin. Functions Among the functional responsibilities of the TRPM channels are: regulation of calcium oscillations after T cell activation and prevention of cardiac conduction disorders (TRPM4); modulation of insulin secretion and sensory transduction in taste cells (TRPM5); cold sensation (TRPM8); heat sensation and inflammatory pain (TRPM3); regulation of magnesium reabsorption in the kidneys and absorption in the intestines (TRPM6); and regulation of cell adhesion (TRPM7). Genes TRPM1, TRPM2, TRPM3, TRPM4, TRPM5, TRPM6, TRPM7, TRPM8
https://en.wikipedia.org/wiki/Virginia%20Ragsdale
Virginia Ragsdale (December 13, 1870 – June 4, 1945) was a teacher and mathematician specializing in algebraic curves. She is best known as the creator of the Ragsdale conjecture. Early life Ragsdale was born on a farm in Jamestown, North Carolina, the third child of John Sinclair Ragsdale and Emily Jane Idol. John was an officer in the Civil War, a teacher in the Flint Hill School, and later a state legislator. Virginia Ragsdale descended from Godfrey Ragsdale, a settler of the new Jamestown colony. Jamestown was raided by a Native American tribe in 1644, led by the uncle of Pocahontas, during which Godfrey and his wife were killed; their infant son, Godfrey Jr., survived, and Ragsdale descended from him. Virginia documented her early years in a paper titled "Our Early Home and Childhood". Study As a junior, Ragsdale entered Salem Academy, and graduated in 1887 as valedictorian with an extra diploma in piano. Ragsdale then attended Guilford College in Greensboro, North Carolina, where she earned her B.S. in 1892. She was active in student life, establishing a Y.M.C.A. on campus, expanding collegiate athletics, and contributing to the formation of Guilford's Alumni Association. Ragsdale was awarded the first scholarship from Bryn Mawr College for the top scholar at Guilford College. She studied physics at Bryn Mawr College, obtaining an A.B. degree in 1896. She was elected European fellow for the class of 1896, but waited a year before traveling, working as an assistant demonstrator in physics and as a mathematics graduate student at Bryn Mawr. Together with two of her colleagues (including Emilie Martin), she spent 1897–98 abroad at the University of Göttingen, attending lectures of Felix Klein and David Hilbert. After her return to the United States, she taught in Baltimore for three years until a second scholarship, from the Baltimore Association for the Promotion of University Education of Women, permitted her to return to Bryn Mawr College to
https://en.wikipedia.org/wiki/Hashimoto%27s%20encephalopathy
Hashimoto's encephalopathy, also known as steroid-responsive encephalopathy associated with autoimmune thyroiditis (SREAT), is a neurological condition characterized by encephalopathy, thyroid autoimmunity, and a good clinical response to corticosteroids. It is associated with Hashimoto's thyroiditis, and was first described in 1966. It is sometimes referred to as a neuroendocrine disorder, although the condition's relationship to the endocrine system is widely disputed. It is recognized as a rare disease by the NIH Genetic and Rare Diseases Information Center. Up to 2005, almost 200 case reports of this disease were published. Between 1990 and 2000, 43 cases were published. Since that time, research has expanded and numerous cases are being reported by scientists around the world, suggesting that this rare condition is likely to have been significantly underdiagnosed in the past. Over 100 scientific articles on Hashimoto's encephalopathy were published between 2000 and 2013. Signs and symptoms The onset of symptoms tends to be fairly gradual, occurring over 1–12 years. Symptoms of Hashimoto's encephalopathy may include: personality changes, aggression, delusional behavior, concentration and memory problems, coma, disorientation, headaches, muscle jerks (myoclonus – 65% of cases), lack of coordination (ataxia – 65% of cases), partial paralysis on the right side, psychosis, seizures (60% of cases), sleep abnormalities (55% of cases), speech problems (transient aphasia – 80% of cases), status epilepticus (20% of cases), and tremors (80% of cases). Pathogenesis The mechanism of pathogenesis is not known, but it is thought to be an autoimmune disorder, similar to Hashimoto's thyroiditis, as its name suggests. Consistent with this hypothesis, autoantibodies to alpha-enolase have been found to be associated with Hashimoto's encephalopathy. Since enolase catalyzes the penultimate step in glycolysis, if it were inhibited (for example by being bound by autoantibodies), one
https://en.wikipedia.org/wiki/Spherical%20design
A spherical design, part of combinatorial design theory in mathematics, is a finite set of N points on the unit d-sphere Sd such that the average value of any polynomial f of degree t or less on the set equals the average value of f on the whole sphere (that is, the integral of f over Sd divided by the area or measure of Sd). Such a set is often called a spherical t-design to indicate the value of t, which is a fundamental parameter. The concept of a spherical design is due to Delsarte, Goethals, and Seidel, although these objects were understood as particular examples of cubature formulas earlier. Spherical designs can be of value in approximation theory, in statistics for experimental design, in combinatorics, and in geometry. The main problem is to find examples, given d and t, that are not too large; however, such examples may be hard to come by. Spherical t-designs have also recently been appropriated in quantum mechanics in the form of quantum t-designs, with various applications to quantum information theory and quantum computing. Existence of spherical designs The existence and structure of spherical designs on the circle were studied in depth by Hong. Shortly thereafter, Seymour and Zaslavsky proved that such designs exist in all sufficiently large sizes; that is, given positive integers d and t, there is a number N(d,t) such that for every N ≥ N(d,t) there exists a spherical t-design of N points in dimension d. However, their proof gave no idea of how big N(d,t) is. Mimura constructively found conditions, in terms of the number of points and the dimension, which characterize exactly when spherical 2-designs exist. Maximally sized collections of equiangular lines (up to identification of lines as antipodal points on the sphere) are examples of minimal-sized spherical 5-designs. There are many sporadic small spherical designs; many of them are related to finite group actions on the sphere. In 2013, Bondarenko, Radchenko, and Viazovska obtained the optimal asymptotic bound, showing that spherical t-designs of N points exist on Sd whenever N ≥ Cd·t^d, where the constant Cd depends only on d.
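As a concrete illustration of the defining property (an addition, not drawn from the article above): the six vertices of a regular octahedron form a spherical 3-design on S2, and the following Python check verifies that their average agrees with the exact sphere average for every monomial of degree at most 3.

```python
from itertools import product

# Vertices of the regular octahedron: a classic spherical 3-design on S^2.
octahedron = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def sphere_average(a, b, c):
    """Exact average of x^a y^b z^c over the unit sphere S^2.
    Any odd exponent makes the average vanish by symmetry; among the
    all-even monomials of degree <= 3, only 1 and x^2, y^2, z^2 occur."""
    if a % 2 or b % 2 or c % 2:
        return 0.0
    return 1.0 if (a, b, c) == (0, 0, 0) else 1.0 / 3.0

for a, b, c in product(range(4), repeat=3):
    if a + b + c > 3:
        continue  # only polynomials of degree <= t = 3 must match
    point_average = sum(x**a * y**b * z**c for x, y, z in octahedron) / 6
    assert abs(point_average - sphere_average(a, b, c)) < 1e-12

print("The octahedron averages every polynomial of degree <= 3 exactly.")
```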
https://en.wikipedia.org/wiki/Sound-in-Syncs
Sound-in-Syncs is a method of multiplexing sound and video signals into a channel designed to carry video, in which data representing the sound is inserted into the line synchronising pulses of an analogue television waveform. It is used on point-to-point links within broadcasting networks, including studio/transmitter links (STL). It is not used for broadcasts to the public. History The technique was first developed by the BBC in the late 1960s. In 1966, the corporation's Research Department made a feasibility study of the use of pulse-code modulation (PCM) for transmitting television sound during the synchronising period of the video signal. This had several advantages: it removed the necessity for a separate sound link, reduced the possibility of operational errors, and offered improved sound quality and reliability. Awards Sound-in-Syncs and its R&D engineers have won several awards, including the Royal Television Society's Geoffrey Parr Award in 1972; a Queen's Award for Enterprise in 1974; and, in 1999, a Technology & Engineering Emmy Award. Versions Original mono S-i-S In the original system, as applied to 625-line analogue TV, the audio signal was sampled twice during each television line and each sample converted to 10-bit PCM. Two such samples were inserted into the next line synchronising pulse. At the destination, the audio samples were converted back to analogue form and the video waveform restored to normal. Compandors operating on the signal before encoding and after decoding enabled the required signal-to-noise ratio to be achieved. As the PCM noise was predominantly high-pitched, the compandor only needed to operate on the high frequencies. Also, the compandor only operated at high audio levels, so that modulation of the noise by the companding would be masked by the relatively loud high-frequency audio components. A pilot tone at half the sampling frequency was transmitted to enable the expander to track the gain adjustment applied by the compressor.
https://en.wikipedia.org/wiki/Probability%20matching
Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which has a 60% chance of being correct, rather than matching, which has only a 52% chance (where p is the probability of a positive realization, the expected accuracy of matching is p² + (1 − p)²; here 0.6² + 0.4² = 0.52). The probability-matching strategy is of psychological interest because it is frequently employed by human subjects in decision and classification studies (where it may be related to Thompson sampling). The only case in which probability matching yields the same expected accuracy as the Bayesian decision strategy mentioned above is when all class base rates are the same. So, if in the training set positive examples are observed 50% of the time, then the Bayesian strategy yields 50% accuracy (1 × 0.5), just as probability matching does (0.5 × 0.5 + 0.5 × 0.5 = 0.5).
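The gap between the two strategies is easy to see in simulation. The following Python sketch (an illustration, using the 60/40 base rates from the example above) estimates the accuracy of probability matching against the always-predict-the-majority strategy:

```python
import random

random.seed(0)
p, trials = 0.6, 100_000

# True class labels drawn with base rate p for "positive".
labels = [random.random() < p for _ in range(trials)]

# Probability matching: predict "positive" with probability p, independently.
matching = [random.random() < p for _ in range(trials)]
matching_accuracy = sum(l == m for l, m in zip(labels, matching)) / trials

# Bayes-optimal strategy: always predict the majority class ("positive").
bayes_accuracy = sum(labels) / trials

print(f"probability matching: {matching_accuracy:.3f}")  # ~ p^2 + (1-p)^2 = 0.52
print(f"always majority:      {bayes_accuracy:.3f}")     # ~ p = 0.60
```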
https://en.wikipedia.org/wiki/Midpoint%20circle%20algorithm
In computer graphics, the midpoint circle algorithm is an algorithm used to determine the points needed for rasterizing a circle. It is a generalization of Bresenham's line algorithm, and can be further generalized to conic sections, as shown by Pitteway and Van Aken. In machining (CNC), it is known as circular interpolation. Summary This algorithm draws all eight octants simultaneously, starting from each cardinal direction (0°, 90°, 180°, 270°) and extending both ways to reach the nearest multiple of 45° (45°, 135°, 225°, 315°). It can determine where to stop because when x = y, it has reached 45°. The reason for using these angles is that, as x increases, y neither skips nor repeats any value until reaching 45°. So during the while loop, x increments by 1 each iteration, and y decrements by 1 on occasion, never by more than 1 in one iteration. This changes at 45° because that is the point where the tangent is rise = run, whereas rise > run before it and rise < run after it. The second part of the problem, the determinant, is far trickier. This determines when to decrement y. It usually comes after drawing the pixels in each iteration, because it never goes below the radius on the first pixel. Because in a continuous setting the function for a sphere is the function for a circle with the radius dependent on z (or whatever the third variable is), it stands to reason that the algorithm for a discrete (voxel) sphere would also rely on the midpoint circle algorithm. But when looking at a sphere, the integer radius of some adjacent circles is the same, yet it is not expected to have the same exact circle adjacent to itself in the same hemisphere. Instead, a circle of the same radius needs a different determinant, to allow the curve to come in slightly closer to the center or extend out farther. Algorithm The objective of the algorithm is to approximate the curve using pixels; in layman's terms, every pixel should be approximately the same distance from the center.
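A minimal Python sketch of the procedure described above (an illustrative implementation: x walks outward from a cardinal direction, y steps occasionally, and the decision variable d plays the role of the determinant):

```python
def midpoint_circle(cx, cy, r):
    """Return the set of integer pixels on a circle of radius r centered
    at (cx, cy), computing one octant and mirroring it eight ways."""
    points = set()
    x, y = r, 0
    d = 1 - r  # decision variable: negative while the midpoint lies inside
    while x >= y:  # stop at 45 degrees, where x == y
        # Mirror the current point into all eight octants.
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((cx + px, cy + py))
        y += 1
        if d < 0:
            d += 2 * y + 1            # midpoint inside: keep x
        else:
            x -= 1
            d += 2 * (y - x) + 1      # midpoint outside: step x inward
    return points

print(sorted(midpoint_circle(0, 0, 3)))
```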
https://en.wikipedia.org/wiki/Base%20rate
In probability and statistics, the base rate (also known as the prior probability) is the class of probabilities unconditional on "featural evidence" (likelihoods). It is the proportion of individuals in a population who have a certain characteristic or trait. For example, if 1% of the population were medical professionals, and the remaining 99% were not medical professionals, then the base rate of medical professionals would be 1%. The method for integrating base rates and featural evidence is given by Bayes' rule. In the sciences, including medicine, the base rate is critical for comparison. In medicine, a treatment's effectiveness is clear when the base rate is available. For example, if the control group, using no treatment at all, had a base rate of 1/20 recoveries within 1 day and a treatment had a 1/100 rate of recovery within 1 day, we see that the treatment actively decreases recovery. The base rate is an important concept in statistical inference, particularly in Bayesian statistics. In Bayesian analysis, the base rate is combined with the observed data to update our belief about the probability of the characteristic or trait of interest. The updated probability is known as the posterior probability and is denoted as P(A|B), where B represents the observed data. For example, suppose we are interested in estimating the prevalence of a disease in a population. The base rate would be the proportion of individuals in the population who have the disease. If we observe a positive test result for a particular individual, we can use Bayesian analysis to update our belief about the probability that the individual has the disease. The updated probability would be a combination of the base rate and the likelihood of the test result given the disease status. The base rate is also important in decision-making, particularly in situations where the costs of false positives and false negatives differ. For example, in medical testing, a false negative (faili
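A short worked example of such an update (the prevalence, sensitivity and specificity figures below are hypothetical, chosen only to illustrate Bayes' rule):

```python
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
base_rate   = 0.01   # P(disease): the base rate, 1% of the population
sensitivity = 0.95   # P(positive | disease)
specificity = 0.90   # P(negative | no disease)

p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive

print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.088
# Despite the accurate test, the low base rate keeps the posterior small --
# ignoring it is the classic base rate fallacy.
```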
https://en.wikipedia.org/wiki/Informating
Informating is a term coined by Shoshana Zuboff in her book In the Age of the Smart Machine (1988). It is the process that translates descriptions and measurements of activities, events and objects into information. By doing so, these activities become visible to the organization. Informating has both an empowering and an oppressive influence. On the one hand, as information processes become more powerful, access to information is pushed to ever lower levels of the organization. Conversely, information processes can be used to monitor what Zuboff calls human agency. Description In In the Age of the Smart Machine, informating is described as: "What is it, then, that distinguishes information technology from earlier generations of machine technology? As information technology is used to reproduce, extend, and improve upon the process of substituting machines for human agency, it simultaneously accomplishes something quite different. The devices that automate by translating information into action also register data about those automated activities, thus generating new streams of information. For example, computer-based, numerically controlled machine tools or microprocessor-based sensing devices not only apply programmed instructions to equipment but also convert the current state of equipment, product, or process into data. Scanner devices in supermarkets automate the checkout process and simultaneously generate data that can be used for inventory control, warehousing, scheduling of deliveries, and market analysis. The same systems that make it possible to automate office transactions also create a vast overview of an organization's operations, with many levels of data coordinated and accessible for a variety of analytical efforts." (Zuboff, 1988; p. 9) Concept According to Zuboff, any activity, such as two friends using Facebook to communicate, can be said to be informating. In using tools such as Facebook, the two friends are converting their activity into information.
https://en.wikipedia.org/wiki/Promela
PROMELA (Process or Protocol Meta Language) is a verification modeling language introduced by Gerard J. Holzmann. The language allows for the dynamic creation of concurrent processes to model, for example, distributed systems. In PROMELA models, communication via message channels can be defined to be synchronous (i.e., rendezvous) or asynchronous (i.e., buffered). PROMELA models can be analyzed with the SPIN model checker to verify that the modeled system produces the desired behavior. An implementation verified with Isabelle/HOL is also available, as part of the Computer Aided Verification of Automata (CAVA) project. Files written in PROMELA traditionally have a .pml file extension. Introduction PROMELA is a process-modeling language whose intended use is to verify the logic of parallel systems. Given a program in PROMELA, SPIN can verify the model for correctness by performing random or iterative simulations of the modeled system's execution, or it can generate a C program that performs a fast exhaustive verification of the system state space. During simulations and verifications, SPIN checks for the absence of deadlocks, unspecified receptions, and unexecutable code. The verifier can also be used to prove the correctness of system invariants, and it can find non-progress execution cycles. Finally, it supports the verification of linear-time temporal constraints, either with PROMELA never-claims or by formulating the constraints directly in temporal logic. Each model can be verified with SPIN under different types of assumptions about the environment. Once the correctness of a model has been established with SPIN, that fact can be used in the construction and verification of all subsequent models. PROMELA programs consist of processes, message channels, and variables. Processes are global objects that represent the concurrent entities of the distributed system. Message channels and variables can be declared either globally or locally within a process. Processes specify behavior; channels and variables define the environment in which the processes run.
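As an illustration, here is a minimal PROMELA model of the kind SPIN analyzes (a sketch, not taken from the article): two processes exchange a message over a buffered channel, and an assertion states the expected outcome.

```promela
/* A sender and a receiver communicating over an asynchronous
   (buffered) channel; SPIN can check the model for deadlocks and
   verify the assertion. */
chan ch = [1] of { byte };   /* buffered channel of capacity 1 */

proctype Sender() {
    ch ! 42                  /* send the value 42 */
}

proctype Receiver() {
    byte v;
    ch ? v;                  /* receive into v */
    assert(v == 42)          /* a simple correctness claim */
}

init {
    atomic {
        run Sender();
        run Receiver()
    }
}
```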
https://en.wikipedia.org/wiki/Blackwell%20channel
The Blackwell channel is a deterministic broadcast channel model used in coding theory and information theory. It was first proposed by mathematician David Blackwell. In this model, a transmitter transmits one of three symbols to two receivers. For two of the symbols, both receivers receive exactly what was sent; the third symbol, however, is received differently at each of the receivers. This is one of the simplest examples of a non-trivial capacity result for a non-stochastic channel. Definition The Blackwell channel is composed of one input (transmitter) and two outputs (receivers). The channel input is ternary (three symbols) and is selected from {0, 1, 2}. This symbol is broadcast to the receivers; that is, the transmitter sends one symbol simultaneously to both receivers. Each of the channel outputs is binary (two symbols), labeled {0, 1}. Whenever a 0 is sent, both outputs receive a 0. Whenever a 1 is sent, both outputs receive a 1. When a 2 is sent, however, the first output is 0 and the second output is 1. Therefore, the symbol 2 is confused by each of the receivers in a different way. The operation of the channel is memoryless and completely deterministic. Capacity of the Blackwell channel The capacity region of the channel was found by S. I. Gel'fand. Its boundary is described by the following segments (rates in bits per channel use, with H denoting the binary entropy function): 1. R1 = 1, 0 ≤ R2 ≤ 1/2; 2. R1 = H(a), R2 = 1 − a, for 1/3 ≤ a ≤ 1/2; 3. R1 + R2 = log2 3, for 2/3 ≤ R1 ≤ log2 3 − 2/3; 4. R1 = 1 − a, R2 = H(a), for 1/3 ≤ a ≤ 1/2; 5. 0 ≤ R1 ≤ 1/2, R2 = 1. A solution was also found by Pinsker et al. (1995).
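Because the channel is deterministic, its entire behavior is a three-entry map; a tiny Python sketch (illustrative only):

```python
def blackwell(symbol: int) -> tuple[int, int]:
    """Map the ternary channel input to the (receiver 1, receiver 2) outputs."""
    return {0: (0, 0), 1: (1, 1), 2: (0, 1)}[symbol]

for s in (0, 1, 2):
    print(s, "->", blackwell(s))
# 0 -> (0, 0)   both receivers see the input exactly
# 1 -> (1, 1)   both receivers see the input exactly
# 2 -> (0, 1)   each receiver confuses 2 with a different symbol
```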
https://en.wikipedia.org/wiki/Map-based%20controller
In the field of control engineering, a map-based controller is a controller whose outputs are based on values derived from a predefined lookup table. The inputs to the controller are usually values taken from one or more sensors and are used to index the output values in the lookup table. By effectively encoding the transfer function as discrete entries within a lookup table, engineers are free to modify smaller sections of the mapping or to update the whole list of entries as required.
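A minimal sketch of the idea in Python, assuming a hypothetical one-dimensional map (sensor reading to actuator output) with linear interpolation between table entries:

```python
from bisect import bisect_right

# Hypothetical engine-style map: breakpoints are the sensor axis,
# outputs are the corresponding controller commands.
BREAKPOINTS = [0.0, 1000.0, 3000.0, 5000.0, 7000.0]   # e.g. engine speed (rpm)
OUTPUTS     = [5.0,   12.0,   28.0,   35.0,   30.0]   # e.g. ignition advance (deg)

def map_controller(sensor_value: float) -> float:
    """Index the lookup table with the sensor value, interpolating linearly."""
    if sensor_value <= BREAKPOINTS[0]:
        return OUTPUTS[0]
    if sensor_value >= BREAKPOINTS[-1]:
        return OUTPUTS[-1]
    i = bisect_right(BREAKPOINTS, sensor_value) - 1
    span = BREAKPOINTS[i + 1] - BREAKPOINTS[i]
    frac = (sensor_value - BREAKPOINTS[i]) / span
    return OUTPUTS[i] + frac * (OUTPUTS[i + 1] - OUTPUTS[i])

print(map_controller(2000.0))  # 20.0, halfway between the 1000 and 3000 entries
```

Because the behavior lives in the table rather than in code, retuning the controller means editing entries rather than rewriting the control law.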
https://en.wikipedia.org/wiki/W3C%20Device%20Description%20Working%20Group
The W3C Device Description Working Group (DDWG), operating as part of the World Wide Web Consortium (W3C) Mobile Web Initiative (MWI), was chartered to "foster the provision and access to device descriptions that can be used in support of Web-enabled applications that provide an appropriate user experience on mobile devices." Mobile devices exhibit the greatest diversity of capabilities, and therefore present the greatest challenge to content adaptation technologies. The group published several documents, including a list of requirements for an interface to a Device Description Repository (DDR) and a standard interface meeting those requirements. The group was rechartered in 2006 to work in public towards the development of the Application Programming Interface (API) for a DDR. Early in 2007, the group launched a wiki and a blog to add to the public mailing list. The group subsequently published a formal vocabulary of core device properties and an API called the DDR Simple API, which became a W3C Recommendation in December 2008. The group closed at the end of 2008, but with the intention of maintaining the Web pages, blog and wiki through W3C volunteer effort. Publications The DDWG published several W3C Working Group Notes and one W3C Recommendation. A W3C WG Note that articulates what the W3C and other organizations are doing or have already done with regard to device information. This document suggests an environment in which these technologies work together to meet the goals of content adaptation. The completed document was published on 31 October 2007. A W3C WG Note describing the ecosystem surrounding creation, maintenance and use of device descriptions. The completed document was published on 31 October 2007. A W3C WG Note describing a set of requirements for a reference repository of device descriptions. The completed document was published on 17 December 2007. A W3C WG Note describing a process to manage contributions to an initial core vocabulary,
https://en.wikipedia.org/wiki/Vandenbergh%20effect
The Vandenbergh effect is a phenomenon reported by J.G. Vandenbergh et al. in 1975, in which an early induction of the first estrous cycle in prepubertal female mice occurs as a result of exposure to the pheromone-laden urine of a sexually mature (dominant) male mouse. Physiologically, the exposure to male urine induces the release of GnRH, which provokes the first estrus. The Vandenbergh effect has also been seen with exposure to adult female mice: when an immature female mouse is exposed to the urine of a mature female mouse, estrus is delayed in the prepubertal female. In this situation, GnRH is inhibited, which delays puberty in the juvenile female mouse. The Vandenbergh effect is caused by pheromones found in a male's urine. The male does not have to be present for this effect to take place; the urine alone is sufficient. These pheromones are detected by the vomeronasal organ in the septum of the female's nose. This occurs because the female body will only take the step to begin puberty if there are available mates around; she will not waste energy on puberty if there is no possibility of finding a mate. In addition to GnRH, exogenous estradiol has recently been implicated as having a role in the Vandenbergh effect. Utilizing tritium-labeled estradiol implanted in male mice, researchers have been able to trace the pathways the estradiol takes once transmitted to a female. The estradiol was found in a multitude of regions within the female and appeared to enter her circulation nasally and through the skin. Their findings suggested that some aspects of the Vandenbergh effect, as well as the Bruce effect, may be related to exogenous estradiol from males. Additional studies have looked into the validity of estradiol's role in the Vandenbergh effect by means of exogenous estradiol placed in castrated rats. Castrated males were injected with either a control (oil) or estradiol in the oil vehicle. As expected, urinary androgens in the castrated males were below no
https://en.wikipedia.org/wiki/Whitten%20effect
The Whitten effect is the stimulation, by male pheromones, of synchronous estrus in a female population. Social signals, or social stimuli, affect reproduction in all mammals; in mice, the pheromones contained in the urine of males are such stimuli. The phenomenon was observed by Wesley K. Whitten (1956, 1966, 1968), whereby male mouse pheromone-laden urine synchronizes the estrus cycle "among unisexually grouped females," and it is an example of male-to-female pheromonal effects in mice, similar to the Bruce effect. The Whitten effect occurs when a group of female mice is exposed to the urine produced by a male mouse. The male’s urine contains certain volatile, or airborne, pheromones that affect the hormonal processes of the females that control their reproductive status. A sexually mature and viable male must produce the urine, as the pheromones that produce the Whitten effect depend on male sex hormones such as testosterone. The female mice do not require direct contact with the male’s urine, as the pheromone is airborne and is therefore taken up by the females through their olfactory system. The reproductive cycle of female mice in isolation is approximately 4 to 5 days, and the reproductive cycles of grouped females are often longer and more irregular. When grouped female mice are exposed to the pheromones contained in a male’s urine, however, the Whitten effect occurs, and the majority of the female mice will enter a new estrus cycle by the third day of exposure. However, there is little evidence for a similarly functioning vomeronasal, or olfactory, system (thought to be the sensory organ that initiates the Bruce, Vandenbergh, and Whitten effects) in humans. These differences, in p
https://en.wikipedia.org/wiki/OWL-S
OWL-S is an ontology built on top of the Web Ontology Language (OWL) by the DARPA DAML program. It replaces the former DAML-S ontology. "OWL-S is an ontology, within the OWL-based framework of the Semantic Web, for describing Semantic Web Services. It will enable users and software agents to automatically discover, invoke, compose, and monitor Web resources offering services, under specified constraints." The OWL-S Ontology Development of OWL-S aims to enable the following tasks: Automatic Web service discovery: with the development of the Semantic Web, many Web Services will be available on the Web, performing a wide variety of tasks. OWL-S will help software agents to discover the Web Service that would fulfill a specific need within some quality constraints, without the need for human intervention. Automatic Web service invocation: generally, it is necessary to write a specific program to invoke a Web Service, using its WSDL description. OWL-S will open the possibility for a software agent to automatically read the description of the Web Service's inputs and outputs and invoke the service. Automatic Web service composition and interoperation: in a Web where many services are available, it should be possible to perform a complex task, involving the coordinated invocation of various Web Services, based solely on a high-level description of the objective. OWL-S will help in the composition and interoperation of the services in a way that will enable the automatic execution of these tasks. The OWL-S ontology has three main parts: the service profile, the process model and the grounding. The service profile is used to describe what the service does. This information is primarily meant for human reading, and includes the service name and description, limitations on applicability and quality of service, and publisher and contact information. The process model describes how a client can interact with the service. This description includes the sets of inputs, outputs, preconditions
https://en.wikipedia.org/wiki/Auditory%20masking
In audio signal processing, auditory masking occurs when the perception of one sound is affected by the presence of another sound. Auditory masking in the frequency domain is known as simultaneous masking, frequency masking or spectral masking. Auditory masking in the time domain is known as temporal masking or non-simultaneous masking. Masked threshold The unmasked threshold is the quietest level of the signal which can be perceived without a masking signal present. The masked threshold is the quietest level of the signal perceived when combined with a specific masking noise. The amount of masking is the difference between the masked and unmasked thresholds. Gelfand provides a basic example. Let us say that for a given individual, the sound of a cat scratching a post in an otherwise quiet environment is first audible at a level of 10 dB SPL. However, in the presence of a masking noise (for example, a vacuum cleaner that is running simultaneously) that same individual cannot detect the sound of the cat scratching unless the level of the scratching sound is at least 26 dB SPL. We would say that the unmasked threshold for that individual for the target sound (i.e., the cat scratching) is 10 dB SPL, while the masked threshold is 26 dB SPL. The amount of masking is simply the difference between these two thresholds: 16 dB. The amount of masking will vary depending on the characteristics of both the target signal and the masker, and will also be specific to an individual listener. While the person in the example above was able to detect the cat scratching at 26 dB SPL, another person may not be able to hear the cat scratching while the vacuum was on until the sound level of the cat scratching was increased to 30 dB SPL (thereby making the amount of masking for the second listener 20 dB). Simultaneous masking Simultaneous masking occurs when a sound is made inaudible by a noise or unwanted sound of the same duration as the original sound. For example, a powerful s
https://en.wikipedia.org/wiki/Cube%20with%20Magic%20Ribbons
Cube with Magic Ribbons is a lithograph print by the Dutch artist M. C. Escher, first printed in 1957. It depicts two interlocking bands wrapped around the frame of a Necker cube. The bands have what Escher called small "nodules" or "buttonlike protuberances" that make use of the dome/crater illusion, an optical illusion characterized by a shifting perception of depth from concave to convex depending on the direction of light and shadow. Escher's interest in reversible perspectives, as seen in Cube with Magic Ribbons, can also be noted in an earlier work, Convex and Concave, first printed in 1955. Although the cube framework in Cube with Magic Ribbons is by itself perfectly possible, the interlocking of the "magical" bands within it is impossible. Escher scholar Bruno Ernst argues that this print is significant for being the first of four Escher drawings to use impossible objects. However, there is debate as to whether the figure constitutes a true visual impossibility or is merely ambiguous, as the bands do not have continuous contours that unite their front and back faces, meaning they lose their visible boundaries when they cross over each other.
https://en.wikipedia.org/wiki/Phylogenetic%20comparative%20methods
Phylogenetic comparative methods (PCMs) use information on the historical relationships of lineages (phylogenies) to test evolutionary hypotheses. The comparative method has a long history in evolutionary biology; indeed, Charles Darwin used differences and similarities between species as a major source of evidence in The Origin of Species. However, the fact that closely related lineages share many traits and trait combinations as a result of the process of descent with modification means that lineages are not independent. This realization inspired the development of explicitly phylogenetic comparative methods. Initially, these methods were primarily developed to control for phylogenetic history when testing for adaptation; however, in recent years the use of the term has broadened to include any use of phylogenies in statistical tests. Although most studies that employ PCMs focus on extant organisms, many methods can also be applied to extinct taxa and can incorporate information from the fossil record. PCMs can generally be divided into two types of approaches: those that infer the evolutionary history of some character (phenotypic or genetic) across a phylogeny and those that infer the process of evolutionary branching itself (diversification rates), though there are some approaches that do both simultaneously. Typically the tree that is used in conjunction with PCMs has been estimated independently (see computational phylogenetics), such that both the relationships between lineages and the lengths of the branches separating them are assumed to be known. Applications Phylogenetic comparative approaches can complement other ways of studying adaptation, such as studying natural populations, experimental studies, and mathematical models. Interspecific comparisons allow researchers to assess the generality of evolutionary phenomena by considering independent evolutionary events. Such an approach is particularly useful when there is little or no variation within species.
https://en.wikipedia.org/wiki/Von%20Ebner%27s%20gland
Von Ebner's glands, also called Ebner's glands or gustatory glands, are exocrine glands found in the mouth. More specifically, they are serous salivary glands which reside adjacent to the moats surrounding the circumvallate and foliate papillae just anterior to the posterior third of the tongue, anterior to the terminal sulcus. These glands are named after Victor von Ebner, an Austrian histologist. Von Ebner's glands secrete lingual lipase, beginning the process of lipid hydrolysis in the mouth. These glands empty their serous secretion into the base of the moats around the foliate and circumvallate papillae. This secretion presumably flushes material from the mouth to enable the taste buds to respond rapidly to changing stimuli. Von Ebner's glands are innervated by cranial nerve IX, the glossopharyngeal nerve. See also List of distinct cell types in the adult human body
https://en.wikipedia.org/wiki/Automated%20trading%20system
An automated trading system (ATS), a subset of algorithmic trading, uses a computer program to create buy and sell orders and automatically submits the orders to a market center or exchange. The computer program automatically generates orders based on a predefined set of rules, using a trading strategy based on technical analysis, advanced statistical and mathematical computations, or input from other electronic sources. These automated trading systems are mostly employed by investment banks or hedge funds, but are also available to private investors using simple online tools. Automated trading systems are often used with electronic trading in automated market centers, including electronic communication networks, "dark pools", and automated exchanges. Automated trading systems and electronic trading platforms can execute repetitive tasks at speeds orders of magnitude greater than any human equivalent. Traditional risk controls and safeguards that relied on human judgment are not appropriate for automated trading, and this has caused issues such as the 2010 Flash Crash. New controls such as trading curbs or 'circuit breakers' have been put in place in some electronic markets to deal with automated trading systems. Mechanism The automated trading system determines whether an order should be submitted based on, for example, the current market price of an option and theoretical buy and sell prices. The theoretical buy and sell prices are derived from, among other things, the current market price of the security underlying the option. A look-up table stores a range of theoretical buy and sell prices for a given range of current market prices of the underlying security. Accordingly, as the price of the underlying security changes, a new theoretical price may be indexed in the look-up table, thereby avoiding calculations that would otherwise slow automated trading decisions. A distributed processing on-line automated trading system uses structured messages to re
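A simplified Python sketch of the look-up-table mechanism just described (all prices and thresholds are hypothetical):

```python
# Precomputed theoretical buy/sell prices, indexed by underlying price level,
# so the trading loop avoids recomputing the option model on every tick.
THEORETICAL = {
    100: (4.80, 5.20),   # underlying price -> (theoretical buy, theoretical sell)
    101: (5.30, 5.70),
    102: (5.85, 6.25),
}

def decide(underlying: int, market_bid: float, market_ask: float) -> str:
    """Compare quoted option prices against the precomputed theoretical band."""
    buy, sell = THEORETICAL[underlying]      # table lookup instead of a model run
    if market_ask < buy:
        return "submit buy order"            # option offered below fair value
    if market_bid > sell:
        return "submit sell order"           # option bid above fair value
    return "no order"

print(decide(101, market_bid=5.75, market_ask=5.90))  # submit sell order
```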
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Tisza
László Tisza (July 7, 1907 – April 15, 2009) was a Hungarian-born American physicist who was Professor of Physics Emeritus at MIT. He was a colleague of famed physicists Edward Teller, Lev Landau and Fritz London, and initiated the two-fluid theory of liquid helium. United States In 1941, Tisza immigrated to the United States and joined the faculty at the Massachusetts Institute of Technology. His research areas included theoretical physics and the history and philosophy of science, specifically the foundations of thermodynamics and quantum mechanics. He taught at MIT until 1973. Publications Tisza was the author of the 1966 book Generalized Thermodynamics. The 1982 publication Physics as Natural Philosophy: Essays in Honor of László Tisza was written by Tisza's colleagues and former students in honor of his 75th birthday. Affiliations He was a Fellow of the American Physical Society and the American Academy of Arts and Sciences, a John Simon Guggenheim Fellow, and had been a visiting professor at the Sorbonne (University of Paris). See also Vera and Laszlo Tisza House
https://en.wikipedia.org/wiki/ACTH%20receptor
The adrenocorticotropic hormone receptor (ACTH receptor), also known as the melanocortin receptor 2 or MC2 receptor, is a type of melanocortin receptor (type 2) which is specific for ACTH. A G protein–coupled receptor located on the external cell plasma membrane, it is coupled to Gαs and upregulates levels of cAMP by activating adenylyl cyclase. The ACTH receptor plays a role in immune function and glucose metabolism. Structure ACTH receptors are the shortest of the melanocortin receptor family and among the smallest known G protein–coupled receptors. Both human and bovine ACTH receptors are synthesized as 297-residue proteins with 81% sequence homology. There are currently no protein X-ray crystallography structures for the ACTH receptor available in the Protein Data Bank; while the ACTH receptor and the β2 adrenergic receptor are relatively distantly related, with a sequence identity of approximately 26%, MC2R investigators such as David Fridmanis have assumed that the folded surfaces of both receptors that are responsible for binding Gαs should be very similar and use conserved motifs. The full-length sequence of MC2R includes seven hydrophobic domains that are predicted to be transmembrane segments. In the third intracellular loop of the receptor, protein kinase A and protein kinase C phosphorylation motifs have been detected. ACTH receptors also require the binding of melanocortin-2 receptor accessory protein 1 (MRAP1), without which they cannot bind ACTH. Without MRAP, the receptor is degraded in the endoplasmic reticulum; with MRAP, the receptor is glycosylated and expressed on the cell plasma membrane. Ligands MCRs have both endogenous agonists and antagonists. Agonists α-MSH and ACTH are both peptides derived from processed POMC, and both activate the other MCRs, but ACTH is the only agonist ligand for MC2R (the ACTH receptor). This suggests that there is more protein-related specificity for binding MC2R. Antagonists Agouti-related protein
https://en.wikipedia.org/wiki/BIAS%20Peak
Peak is a digital audio editing application for the Macintosh, used primarily for stereo/mono recording, sample editing, loop creation, and CD mastering. It is commonly used by amateur and professional audio and video editors, mastering engineers, musicians, sound designers, artists, educators, and hobbyists. It was published by the now-defunct company BIAS Inc. in several editions, with varying levels of features. Peak differs from digital audio workstation (DAW)-type audio editing applications in that most of its editing is done directly at the file level, without having to first create a project and import the audio to be edited into it. Peak can be assigned to many DAW-type applications as a supplemental external sample editor. When used in this capacity, it is similar to having Peak's editing capabilities available as a plug-in within the other application. BIAS Inc. ceased all business operations as of June 2012. Reviews BIAS Peak Pro 6 XT review, Sound on Sound magazine (January 2009); BIAS Peak v5 review, Sound on Sound magazine (July 2006); BIAS Peak v4 review, Sound on Sound magazine (May 2004); BIAS Peak v3.1 review, Sound on Sound magazine (January 2003); BIAS Peak v2.02 review, Sound on Sound magazine (June 1999); BIAS Peak v1.6 review, Sound on Sound magazine (October 1997); BIAS Peak v1.0 review, Sound on Sound magazine (September 1996). See also Adobe Audition, DSP-Quattro, Sound Studio, WaveLab. External links Peak User's Guide, Peak web page, Peak User Forum
https://en.wikipedia.org/wiki/Avian%20ecology%20field%20methods
There are many field methods available for conducting avian ecological research. They can be divided into three types: counts, nest monitoring, and capturing and marking. Basic counts Basic bird counts are a good way to estimate population size, detect changes in population size or species diversity, and determine the cause of the changes if environmental or habitat data are collected as well. Basic bird counts can be completed fairly easily and inexpensively, and they provide general information about the status of a bird population. Birds can be directly counted on breeding colonies, and at roosts, flocks, or leks. Large diurnal migrants, like many raptors, can be counted as they pass through migration bottlenecks. Small nocturnal migrants are harder to count, but many advances have been made in the use of radar and microphone arrays to identify and count them. Point counts and area searches Perhaps the simplest method of counting birds is the "point count", in which a trained observer records all the birds seen and heard from a point count station for a set period of time. A series of point counts completed over a fixed route can then be compared to the results of the same point counts in other seasons or years. A similar method, called an area search, involves searching throughout a fixed area for a set amount of time and recording the number of birds seen and heard. Nest monitoring Nest monitoring is essential for measuring the reproductive success of a population, which is important for identifying changes in a population's birth rate. Nests can be found either through systematic searching of the birds’ preferred habitat or by watching birds for behavioral clues. A researcher can then track the success of each nest by regularly checking it for signs of hatching, fledging, or predation. Nest monitoring can also provide extremely valuable information about nesting behavior, habitat selection, and nest predation. Cameras can be used to study b
https://en.wikipedia.org/wiki/Abel%27s%20summation%20formula
In mathematics, Abel's summation formula, introduced by Niels Henrik Abel, is intensively used in analytic number theory and the study of special functions to compute series. Formula Let $(a_n)_{n=0}^\infty$ be a sequence of real or complex numbers. Define the partial sum function $A(t) = \sum_{0 \le n \le t} a_n$ for any real number $t$. Fix real numbers $x < y$, and let $\phi$ be a continuously differentiable function on $[x, y]$. Then: $\sum_{x < n \le y} a_n \phi(n) = A(y)\phi(y) - A(x)\phi(x) - \int_x^y A(u)\,\phi'(u)\,du.$ The formula is derived by applying integration by parts for a Riemann–Stieltjes integral to the functions $A$ and $\phi$. Variations Taking the left endpoint to be $-1$ gives the formula $\sum_{0 \le n \le x} a_n \phi(n) = A(x)\phi(x) - \int_0^x A(u)\,\phi'(u)\,du.$ If the sequence $(a_n)$ is indexed starting at $n = 1$, then we may formally define $a_0 = 0$. The previous formula becomes $\sum_{1 \le n \le x} a_n \phi(n) = A(x)\phi(x) - \int_1^x A(u)\,\phi'(u)\,du.$ A common way to apply Abel's summation formula is to take the limit of one of these formulas as $x \to \infty$. The resulting formulas are $\sum_{n=0}^\infty a_n \phi(n) = \lim_{x \to \infty} \bigl(A(x)\phi(x)\bigr) - \int_0^\infty A(u)\,\phi'(u)\,du$ and $\sum_{n=1}^\infty a_n \phi(n) = \lim_{x \to \infty} \bigl(A(x)\phi(x)\bigr) - \int_1^\infty A(u)\,\phi'(u)\,du.$ These equations hold whenever both limits on the right-hand side exist and are finite. A particularly useful case is the sequence $a_n = 1$ for all $n \ge 0$. In this case, $A(x) = \lfloor x + 1 \rfloor$. For this sequence, Abel's summation formula simplifies to $\sum_{0 \le n \le x} \phi(n) = \lfloor x + 1 \rfloor\,\phi(x) - \int_0^x \lfloor u + 1 \rfloor\,\phi'(u)\,du.$ Similarly, for the sequence $a_0 = 0$ and $a_n = 1$ for all $n \ge 1$, the formula becomes $\sum_{1 \le n \le x} \phi(n) = \lfloor x \rfloor\,\phi(x) - \int_1^x \lfloor u \rfloor\,\phi'(u)\,du.$ Upon taking the limit as $x \to \infty$, we find $\sum_{n=0}^\infty \phi(n) = \lim_{x \to \infty} \bigl(\lfloor x + 1 \rfloor\,\phi(x)\bigr) - \int_0^\infty \lfloor u + 1 \rfloor\,\phi'(u)\,du$ and $\sum_{n=1}^\infty \phi(n) = \lim_{x \to \infty} \bigl(\lfloor x \rfloor\,\phi(x)\bigr) - \int_1^\infty \lfloor u \rfloor\,\phi'(u)\,du,$ assuming that both terms on the right-hand side exist and are finite. Abel's summation formula can be generalized to the case where $\phi$ is only assumed to be continuous if the integral is interpreted as a Riemann–Stieltjes integral: $\sum_{x < n \le y} a_n \phi(n) = A(y)\phi(y) - A(x)\phi(x) - \int_x^y A(u)\,d\phi(u).$ By taking $\phi$ to be the partial sum function associated to some sequence, this leads to the summation by parts formula. Examples Harmonic numbers If $a_n = 1$ for $n \ge 1$ and $\phi(x) = 1/x$, then $A(x) = \lfloor x \rfloor$ and the formula yields $\sum_{n=1}^{\lfloor x \rfloor} \frac{1}{n} = \frac{\lfloor x \rfloor}{x} + \int_1^x \frac{\lfloor u \rfloor}{u^2}\,du.$ The left-hand side is the harmonic number $H_{\lfloor x \rfloor}$. Representation of Riemann's zeta function Fix a complex number $s$. If $a_n = 1$ for $n \ge 1$ and $\phi(x) = x^{-s}$, then $A(x) = \lfloor x \rfloor$ and the formula becomes $\sum_{n=1}^{\lfloor x \rfloor} \frac{1}{n^s} = \frac{\lfloor x \rfloor}{x^s} + s \int_1^x \frac{\lfloor u \rfloor}{u^{1+s}}\,du.$ If $\Re(s) > 1$, then the limit as $x \to \infty$ exists and yields the formula $\zeta(s) = s \int_1^\infty \frac{\lfloor u \rfloor}{u^{1+s}}\,du,$ where $\zeta$ is the Riemann zeta function. This may be used to derive Dirichlet's theorem that $\zeta(s)$ has a simple pole with residue 1 at $s = 1$. Reciprocal of Riemann zeta function The technique of the previous example may also be applied to other Dirichlet series. If $a_n = \mu(n)$ is the Möbius
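As a numerical sanity check of the harmonic-number identity above (an illustration, not part of the article): since $\lfloor u \rfloor$ is constant on each interval $[k, k+1)$, the integral can be evaluated piecewise in closed form.

```python
from math import floor

def harmonic_lhs(x):
    """Left-hand side: the harmonic number H_floor(x)."""
    return sum(1.0 / n for n in range(1, floor(x) + 1))

def harmonic_rhs(x):
    """Right-hand side: floor(x)/x + integral_1^x floor(u)/u^2 du,
    using that the antiderivative of k/u^2 on [k, k+1) is -k/u."""
    total, k = 0.0, 1
    while k + 1 <= x:
        total += k * (1.0 / k - 1.0 / (k + 1))   # full interval [k, k+1)
        k += 1
    total += k * (1.0 / k - 1.0 / x)             # final partial interval [k, x]
    return floor(x) / x + total

x = 7.5
print(harmonic_lhs(x))  # 2.5928571...
print(harmonic_rhs(x))  # matches to floating-point accuracy
```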
https://en.wikipedia.org/wiki/CD33
CD33 or Siglec-3 (sialic acid-binding Ig-like lectin 3, SIGLEC3, SIGLEC-3, gp67, p67) is a transmembrane receptor expressed on cells of myeloid lineage. It is usually considered myeloid-specific, but it can also be found on some lymphoid cells. It binds sialic acids and is therefore a member of the SIGLEC family of lectins. Structure The extracellular portion of this receptor contains two immunoglobulin domains (one IgV and one IgC2 domain), placing CD33 within the immunoglobulin superfamily. The intracellular portion of CD33 contains immunoreceptor tyrosine-based inhibitory motifs (ITIMs) that are implicated in the inhibition of cellular activity. Function CD33 can be stimulated by any molecule with sialic acid residues, such as glycoproteins or glycolipids. Upon binding, the immunoreceptor tyrosine-based inhibition motif (ITIM) of CD33, present on the cytosolic portion of the protein, is phosphorylated and acts as a docking site for Src homology 2 (SH2) domain-containing proteins like SHP phosphatases. This results in a cascade that inhibits phagocytosis in the cell. Alzheimer's disease CD33 controls microglial activation, but in Alzheimer's disease it goes into overdrive in the presence of amyloid and tau proteins; its expression is known to be tied to TREM2. Clinical significance CD33 is the target of gemtuzumab ozogamicin (trade name: Mylotarg; Pfizer/Wyeth-Ayerst Laboratories), an antibody-drug conjugate (ADC) for the treatment of patients with acute myeloid leukemia. The drug is a recombinant, humanized anti-CD33 monoclonal antibody (IgG4 κ antibody hP67.6) covalently attached to the cytotoxic antitumor antibiotic calicheamicin (N-acetyl-γ-calicheamicin) via a bifunctional linker (4-(4-acetylphenoxy)butanoic acid). Several mechanisms of resistance to gemtuzumab ozogamicin have been elucidated. Gemtuzumab ozogamicin was initially approved by the U.S. Food and Drug Administration in 2000, and on September 1, 2017, the FDA approved Pfizer's Mylotarg. However, during post
https://en.wikipedia.org/wiki/Toda%20oscillator
In physics, the Toda oscillator is a special kind of nonlinear oscillator. It represents a chain of particles with exponential potential interaction between neighbors. These concepts are named after Morikazu Toda. The Toda oscillator is used as a simple model to understand the phenomenon of self-pulsation, which is a quasi-periodic pulsation of the output intensity of a solid-state laser in the transient regime. Definition The Toda oscillator is a dynamical system of any origin, which can be described with dependent coordinate $x$ and independent coordinate $z$, characterized in that the evolution along the independent coordinate can be approximated with the equation $\frac{d^2 x}{dz^2} + D(x)\frac{dx}{dz} + \Phi'(x) = 0,$ where $D(x) = u e^x + v$, $\Phi'(x) = e^x - 1$, and the prime denotes the derivative. Physical meaning The independent coordinate $z$ has the sense of time. Indeed, it may be proportional to time $t$ with some relation like $z = t/t_0$, where $t_0$ is constant. The derivative $dx/dz$ may have the sense of the velocity of a particle with coordinate $x$; then $d^2 x/dz^2$ can be interpreted as the acceleration; and the mass of such a particle is equal to unity. The dissipative function $D(x)$ may have the sense of a coefficient of speed-proportional friction. Usually, both parameters $u$ and $v$ are supposed to be positive; then this speed-proportional friction coefficient grows exponentially at large positive values of the coordinate $x$. The potential $\Phi(x) = e^x - x - 1$ is a fixed function, which also shows exponential growth at large positive values of the coordinate $x$. In the application in laser physics, $x$ may have the sense of the logarithm of the number of photons in the laser cavity, related to its steady-state value. Then, the output power of such a laser is proportional to $e^x$ and may show pulsation as $x$ oscillates. Both analogies, with a unity-mass particle and with the logarithm of the number of photons, are useful in the analysis of the behavior of the Toda oscillator. Energy Rigorously, the oscillation is periodic only at $u = v = 0$. Indeed, in the realization of the Toda oscillator as a self-pulsing laser, these parameters take small values; during several
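For illustration (not drawn from the article), a minimal numerical integration of the equation above; the values chosen for $u$ and $v$ are arbitrary, picked only to make the damped pulsation visible:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toda oscillator: x'' + (u*exp(x) + v) x' + (exp(x) - 1) = 0,
# integrated as a first-order system in (x, x').
u, v = 0.01, 0.01  # illustrative friction parameters

def rhs(z, state):
    x, xdot = state
    return [xdot, -(u * np.exp(x) + v) * xdot - (np.exp(x) - 1.0)]

sol = solve_ivp(rhs, (0.0, 100.0), y0=[3.0, 0.0], max_step=0.05)
print(sol.y[0][:5])   # x oscillates...
print(sol.y[0][-5:])  # ...and slowly decays toward 0 as friction damps it
```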
https://en.wikipedia.org/wiki/Heavy%20fermion%20material
In solid-state physics, heavy fermion materials are a specific type of intermetallic compound, containing elements with 4f or 5f electrons in unfilled electron bands. Electrons are one type of fermion, and when they are found in such materials they are sometimes referred to as heavy electrons. Heavy fermion materials have a low-temperature specific heat whose linear term is up to 1000 times larger than the value expected from the free electron model. The properties of heavy fermion compounds often derive from the partly filled f-orbitals of rare-earth or actinide ions, which behave like localized magnetic moments. The name "heavy fermion" comes from the fact that the fermion behaves as if it had an effective mass greater than its rest mass. In the case of electrons, below a characteristic temperature (typically 10 K), the conduction electrons in these metallic compounds behave as if they had an effective mass up to 1000 times the free-particle mass. This large effective mass is also reflected in a large contribution to the resistivity from electron-electron scattering via the Kadowaki–Woods ratio. Heavy fermion behavior has been found in a broad variety of states, including metallic, superconducting, insulating and magnetic states. Characteristic examples are CeCu6, CeAl3, CeCu2Si2, YbAl3, UBe13 and UPt3. Historical overview Heavy fermion behavior was discovered by K. Andres, J.E. Graebner and H.R. Ott in 1975, who observed an enormous magnitude of the linear specific heat capacity in CeAl3. While investigations on doped superconductors had led to the conclusion that the existence of localized magnetic moments and superconductivity in one material was incompatible, the opposite was shown when, in 1979, Frank Steglich et al. discovered heavy fermion superconductivity in the material CeCu2Si2. The discovery of a quantum critical point and non-Fermi liquid behavior in the phase diagram of heavy fermion compounds by H. von Löhneysen et al. in 1994 led to a new rise of interest
https://en.wikipedia.org/wiki/Thermal%20effective%20mass
The thermal effective mass of electrons in a metal is the apparent mass due to interactions with the periodic potential of the crystal lattice, with phonons (e.g. phonon drag), and interaction with other electrons. The resulting effective mass of electrons contributes to the electronic heat capacity of the metal, leading to deviations from the heat capacity of a free electron gas.
https://en.wikipedia.org/wiki/Phonon%20drag
Phonon drag is an increase in the effective mass of conduction electrons or valence holes due to interactions with the crystal lattice in which the electron moves. As an electron moves past atoms in the lattice, its charge distorts or polarizes the nearby lattice. This effect leads to a decrease in the electron (or hole, as may be the case) mobility, which results in a decreased conductivity. However, as the magnitude of the Seebeck coefficient increases with phonon drag, it may be beneficial in a thermoelectric material for direct energy conversion applications. The magnitude of this effect is typically appreciable only at low temperatures (<200 K). Phonons are not always in local thermal equilibrium; they move against the thermal gradient. They lose momentum by interacting with electrons (or other carriers) and imperfections in the crystal. If the phonon-electron interaction is predominant, the phonons will tend to push the electrons to one end of the material, losing momentum in the process. This contributes to the already present thermoelectric field. This contribution is most important in the temperature region where phonon-electron scattering is predominant. This happens for T ≈ θD/5, where θD is the Debye temperature. At lower temperatures there are fewer phonons available for drag, and at higher temperatures they tend to lose momentum in phonon-phonon scattering instead of phonon-electron scattering. This region of the Seebeck coefficient-versus-temperature function is highly variable under a magnetic field.
https://en.wikipedia.org/wiki/Plastic%20Logic
Plastic Logic Germany develops and manufactures electrophoretic displays (EPD), based on organic thin-film transistor (OTFT) technology, in Dresden, Germany. Originally a spin-off company from the Cavendish Laboratory at the University of Cambridge, the company was founded in 2000 by Richard Friend, Henning Sirringhaus and Stuart Evans and specialised in polymer transistors and plastic electronics. In February 2015, the company announced that the technology development and manufacturing parts of Plastic Logic would be separated and would go forward as independent companies, in order to generate focus while addressing a range of opportunities available in identified markets. The manufacturing plant in Dresden, Germany, which develops, manufactures and sells a range of flexible EPD, operates independently under the name Plastic Logic Germany. Plastic Logic opened the first mini-fabrication plant on November 11, 2003 in Cambridge, UK. A factory for the mass-production of the display units was opened on September 17, 2008 in Dresden, Germany. Plastic Logic announced its first plastic screen device on November 30, 2004, to be used by Siemens Communications in their mobile devices. This was followed by the announcement of an e-reader called the QUE proReader. However, by August 2010, they had cancelled the QUE proReader. In September 2011 the company announced the Plastic Logic 100, aimed at bringing e-textbooks to Russian schools. In January 2011 the company received $280m in venture capital: $230m into the equity of Plastic Logic from Rusnano and $50m from Oak Investment Partners, a multi-stage venture capital firm. In May 2012 Plastic Logic revealed a ‘Plastic Inside’ strategy – selling its plastic back-planes, sensors and tags for customers to incorporate into other products. On May 17, 2012, Plastic Logic announced that they were abandoning plans to manufacture their own e-reader devices (focusing instead on licensing their existing technology), shutting down their US
https://en.wikipedia.org/wiki/Commutant%20lifting%20theorem
In operator theory, the commutant lifting theorem, due to Sz.-Nagy and Foias, is a powerful theorem used to prove several interpolation results. Statement The commutant lifting theorem states that if T is a contraction on a Hilbert space H, U is its minimal unitary dilation acting on some Hilbert space K (which can be shown to exist by Sz.-Nagy's dilation theorem), and R is an operator on H commuting with T, then there is an operator S on K commuting with U such that RT^n = P_H S U^n restricted to H for all n ≥ 0, and ||S|| = ||R||. Here, P_H is the projection from K onto H. In other words, an operator from the commutant of T can be "lifted" to an operator in the commutant of the unitary dilation of T. Applications The commutant lifting theorem can be used to prove the left Nevanlinna-Pick interpolation theorem, the Sarason interpolation theorem, and the two-sided Nudelman theorem, among others.
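For readability, the lifted relation can also be displayed (a reconstruction of the formulas lost in extraction, using the conventional symbols T, H, U, K, R, S from the statement above):

$$ R\,T^{n} \;=\; P_{H}\, S\, U^{n}\big|_{H} \quad (n \ge 0), \qquad \|S\| = \|R\|. $$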
https://en.wikipedia.org/wiki/Sz.-Nagy%27s%20dilation%20theorem
The Sz.-Nagy dilation theorem (proved by Béla Szőkefalvi-Nagy) states that every contraction T on a Hilbert space H has a unitary dilation U to a Hilbert space K, containing H, with T^n = P_H U^n restricted to H for all n ≥ 0, where P_H is the projection from K onto H. Moreover, such a dilation is unique (up to unitary equivalence) when one assumes K is minimal, in the sense that the linear span of {U^n h : h ∈ H, n ∈ ℤ} is dense in K. When this minimality condition holds, U is called the minimal unitary dilation of T. Proof For a contraction T (i.e., ||T|| ≤ 1), its defect operator DT is defined to be the (unique) positive square root DT = (I − T*T)½. In the special case that S is an isometry, DS* is a projector and DS = 0, hence the block matrix displayed below is an Sz. Nagy unitary dilation of S with the required polynomial functional calculus property. Returning to the general case of a contraction T, every contraction T on a Hilbert space H has an isometric dilation V, again with the calculus property, given by the column-shift construction displayed below. Substituting the isometry V thus constructed into the previous Sz.-Nagy unitary dilation for an isometry, one obtains a unitary dilation for a contraction T. Schaffer form The Schaffer form of a unitary Sz. Nagy dilation can be viewed as a beginning point for the characterization of all unitary dilations, with the required property, for a given contraction. Remarks A generalisation of this theorem, by Berger, Foias and Lebow, shows that if X is a spectral set for T, and R(X) is a Dirichlet algebra, then T has a minimal normal ∂X dilation, of the form above. A consequence of this is that any operator with a simply connected spectral set X has a minimal normal ∂X dilation. To see that this generalises Sz.-Nagy's theorem, note that contraction operators have the unit disc D as a spectral set, and that normal operators with spectrum in the unit circle ∂D are unitary.
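The displayed formulas referenced above, reconstructed in standard notation (this is the usual Sz.-Nagy/Schaffer construction; the reconstruction is ours):

$$ U \;=\; \begin{bmatrix} S & D_{S^*} \\ D_S & -S^* \end{bmatrix} \;=\; \begin{bmatrix} S & I - SS^* \\ 0 & -S^* \end{bmatrix} \quad \text{on } H \oplus H \qquad (D_S = 0 \text{ for an isometry } S), $$

and, for a general contraction T, an isometric dilation V on K = H ⊕ H ⊕ H ⊕ ⋯ acting by

$$ V(h_0, h_1, h_2, \ldots) \;=\; (T h_0,\; D_T h_0,\; h_1,\; h_2,\; \ldots). $$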
https://en.wikipedia.org/wiki/Kuratowski%27s%20free%20set%20theorem
Kuratowski's free set theorem, named after Kazimierz Kuratowski, is a result of set theory, an area of mathematics. It is a result which was largely forgotten for almost 50 years, but has been applied recently in solving several lattice theory problems, such as the congruence lattice problem. Denote by [X]<ω the set of all finite subsets of a set X. Likewise, for a positive integer n, denote by [X]n the set of all n-element subsets of X. For a mapping Φ: [X]n → [X]<ω, we say that a subset U of X is free (with respect to Φ), if for any n-element subset V of U and any u ∈ U \ V, u ∉ Φ(V). Kuratowski published in 1951 the following result, which characterizes the infinite cardinals of the form ℵn. The theorem states the following. Let n be a positive integer and let X be a set. Then the cardinality of X is greater than or equal to ℵn if and only if for every mapping Φ from [X]n to [X]<ω, there exists an (n+1)-element free subset of X with respect to Φ. For n = 1, Kuratowski's free set theorem is superseded by Hajnal's set mapping theorem.
https://en.wikipedia.org/wiki/RARS
RARS is an acronym for Robot Auto Racing Simulator. It is an open source 3D racing simulator. RARS is designed to enable pre-programmed AI drivers to race against one another. RARS was used as the base for TORCS. It was used as an example in the book Intelligent Information Processing and Web Mining by Mieczysław Kłopotek. It was a monthly ongoing challenge for practitioners of Artificial Intelligence and real-time adaptive optimal control. It consists of a simulation of the physics of cars racing on a track, a graphic display of the race, and a separate control program (robot "driver") for each car. Each participant could submit a robot (a file written in C++) which controlled the car and competed to win the race. The input was the state of the road and the cars ahead; the output was the steering wheel and accelerator position. RARS was downloaded from its main repository on SourceForge.net between 2000 and May 2017 almost 100,000 times.
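A sketch of the kind of control logic such a robot contains (the types and function names here are hypothetical, not the actual RARS C++ interface, and the steering and speed rules are deliberately naive):

#include <algorithm>
#include <cmath>

// Hypothetical observation/command types standing in for the real RARS API.
struct Observation {
    double curvature;   // curvature of the track segment ahead (1/m)
    double to_edge;     // lateral distance to the nearer track edge (m)
    double speed;       // current speed (m/s)
};
struct Command {
    double wheel;       // steering wheel position, -1 (left) .. +1 (right)
    double accel;       // accelerator position, 0 .. 1
};

Command drive(const Observation& obs) {
    Command cmd{};
    // Steer with the upcoming curvature, biased away from the near edge.
    cmd.wheel = 0.8 * obs.curvature - 0.1 / (obs.to_edge + 0.5);
    // Cap cornering speed with a crude v = sqrt(a_max / |k|) rule.
    double target = 90.0;
    if (obs.curvature != 0.0)
        target = std::min(90.0, std::sqrt(10.0 / std::fabs(obs.curvature)));
    cmd.accel = (obs.speed < target) ? 1.0 : 0.0;   // full throttle or coast
    return cmd;
}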
https://en.wikipedia.org/wiki/Palo%20%28OLAP%20database%29
Palo is a memory resident multidimensional (online analytical processing (OLAP) or multidimensional online analytical processing (MOLAP)) database server, typically used as a business intelligence tool for controlling and budgeting purposes with spreadsheet software acting as the user interface. Beyond the multidimensional data concept, Palo enables multiple users to share one centralised data storage (single version of the truth). This type of database is suitable for handling complex data models for business management and statistics. Apart from multidimensional queries, data can also be written back and consolidated in real time. To give rapid access to all data, Palo stores them in memory during run time. The server is available as open-source and proprietary software. Palo is developed by Jedox AG, a company headquartered in Freiburg, Germany, founded by Kristian Raue in 2002. The firm currently employs approximately 300 people. Kristian Raue's departure from Jedox was announced in June 2014. Features Palo for Excel is an open source plug-in for Microsoft Excel. There is also an open source plug-in for OpenOffice.org named PalOOCa (discontinued), with Java and web clients also available from the JPalo project. Palo can also be integrated into other systems via its client libraries for Java, PHP, C/C++, or .NET Framework. It is fairly easy to communicate with the Palo OLAP Server, since it uses representational state transfer (REST). Starting in October 2008, Palo supports XML for Analysis and MultiDimensional eXpressions (MDX) APIs for connectivity, and an OLE DB for OLAP interface which allows standard Excel pivot tables to serve as a client tool. Starting September 2011, Palo supports the SDX dialect of LINQ. Palo also provides a web-based spreadsheet interface called Palo Web. Architecture Palo Suite is a tightly integrated framework consisting of: Palo MOLAP Server, Palo ETL Server, Palo Web (Palo Spreadsheet - Connection, User, ETL, File and Report Manager), Pal
https://en.wikipedia.org/wiki/Power%20shuttle
A power shuttle is an additional unit used in transmissions and is generally used in agricultural tractors. While the vehicle is moving forwards, the driver can pull a lever that makes it stop and go backwards at the same speed. Power shuttles are also known under various trade names, including Power Reverser. In the forward or reverse position of the forward-reverse (F-R) lever, pressure builds in the system due to flow to the corresponding wet clutch. As the pressure rises, the F-R clutch is energized, which makes the vehicle move in the forward or reverse direction. Flow to the tank line is blocked while the F-R clutch is being supplied. Power shuttles are incorporated in transmissions in three forms: counter-shaft, full planetary (power shift), and CVT transmissions. Counter shaft transmissions In this case the forward and reverse synchronizers are generally replaced by multi-plate friction clutches. Typically the multi-plate clutches are arranged on the main shaft or on the counter shaft. The forward-reverse section of the gear box is generally located in the forward section, as close to the engine as possible. This is beneficial to the forward-reverse control elements, as they are not subjected to the high relative torque. The challenge involved in providing this feature in existing transmissions is the complex shaft arrangement. This problem arises due to the limitation of centre distance between the two shafts and fixed axial dimensions due to the vehicle size limitations. Full planetary (power shift) In this type of transmission the planetary action is used for providing reverse or forward action to the gear box. CVT/IVT Here the automatic nature of the gear box takes care of the direction change by either engaging planetary sets or by engaging multi-plate clutches or by some other means. Features of power shuttle transmissions Power shuttle transmissions were invented for mining and earth moving applications. The needs in these areas were: Quick shuttle response Left hand shuttle lever (so that rig
https://en.wikipedia.org/wiki/Clean%20Slate%20Program
The Clean Slate Program was an interdisciplinary research program at Stanford University which considered how the Internet could be redesigned with a "clean slate", without the accumulated complexity of existing systems but using the experience gained in their decades of development. Its program director was Nick McKeown. Program outline Clean Slate was based on the belief that the current Internet has significant deficiencies that need to be solved before it can become a unified global communication infrastructure, and that the Internet's shortcomings will not be resolved by the conventional incremental and backward-compatible style of academic and industrial networking research. The research program focused on unconventional, bold, and long-term research that tries to break the network's ossification. To this end, the program was characterized by two research questions: "With what we know today, if we were to start again with a clean slate, how would we design a global communications infrastructure?" "How should the Internet look in the upcoming 15 years?" Program coordinators identified five key areas for research: Network architecture Heterogeneous applications Heterogeneous physical-layer technologies Security Economics and policy The Clean Slate Program ceased in January 2012, after spawning four major follow-up projects: Internet Infrastructure: OpenFlow and Software Defined Networking Mobile Internet: POMI 2020 Mobile Social Networking: MobiSocial Data Center: Stanford Experimental Data Center Lab
https://en.wikipedia.org/wiki/Prehormone
A prehormone is a biochemical substance secreted by glandular tissue that has minimal or no significant biological activity, but is converted in peripheral tissues into an active hormone. Calcifediol is an example of a prehormone; it is produced by hydroxylation of vitamin D3 (cholecalciferol) in the liver. Other examples are adrenal androgens such as dehydroepiandrosterone and androstenedione, which can be converted into testosterone and dihydrotestosterone. See also Prohormone
https://en.wikipedia.org/wiki/Late%20move%20reductions
In computer chess, and in other games that computers play, late move reductions is a non-game-specific enhancement to the alpha–beta algorithm and its variants which attempts to examine a game search tree more efficiently. It uses the assumption that good game-specific move ordering causes a program to search the most likely moves early. If a cut-off is going to happen in a search, the first few moves are the ones most likely to cause it. In games like chess, most programs search winning captures and "killer moves" first. Late move reductions will reduce the search depth for moves searched later at a given node. This allows the program to search deeper along the critical lines, and play better. Most chess programs will search the first several moves at a node to full depth. Often, they do not reduce moves considered to be very tactical, such as captures or promotions. If the score of the move at a reduced depth is smaller than alpha, the move is assumed to be bad. However, if the score is larger than alpha, the reduced search tells us nothing, so a re-search at full depth has to be done. This search reduction can lead to a different search space than the pure alpha–beta method, which can give different results. Care must be taken to select the reduction criteria or the search will miss some deep threats. A code sketch of the idea is given below. External links An Introduction to Late Move Reductions Computer chess Search algorithms
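A sketch of the reduction inside a negamax alpha-beta search (our illustration; the engine primitives declared up front are hypothetical, and the rule "reduce quiet moves after the first three by one ply" is deliberately simplistic):

#include <vector>

struct Position;                 // engine-specific, assumed defined elsewhere
using Move = int;

// Assumed engine primitives (hypothetical signatures):
int evaluate(const Position&);
std::vector<Move> generateOrderedMoves(const Position&);  // best guesses first
void make(Position&, Move);
void unmake(Position&, Move);
bool isCapture(Move);
bool isPromotion(Move);

int search(Position& pos, int depth, int alpha, int beta) {
    if (depth == 0)
        return evaluate(pos);
    int moveCount = 0;
    for (Move m : generateOrderedMoves(pos)) {
        make(pos, m);
        int score;
        bool reducible = moveCount >= 3 && !isCapture(m) && !isPromotion(m);
        if (reducible) {
            // Late, quiet move: search one ply shallower first.
            score = -search(pos, depth - 2, -beta, -alpha);
            if (score > alpha)  // reduced search tells us nothing: re-search
                score = -search(pos, depth - 1, -beta, -alpha);
        } else {
            score = -search(pos, depth - 1, -beta, -alpha);
        }
        unmake(pos, m);
        if (score >= beta)
            return beta;        // cut-off
        if (score > alpha)
            alpha = score;
        ++moveCount;
    }
    return alpha;
}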
https://en.wikipedia.org/wiki/Numerical%20continuation
Numerical continuation is a method of computing approximate solutions of a system of parameterized nonlinear equations, F(u, λ) = 0. The parameter λ is usually a real scalar, and the solution u an n-vector. For a fixed parameter value λ, F(·, λ) maps Euclidean n-space into itself. Often the original mapping is from a Banach space into itself, and the Euclidean n-space is a finite-dimensional approximation of the Banach space. A steady state, or fixed point, of a parameterized family of flows or maps is of this form, and by discretizing trajectories of a flow or iterating a map, periodic orbits and heteroclinic orbits can also be posed as solutions of F(u, λ) = 0. Other forms In some nonlinear systems, parameters are explicit. In others they are implicit, and the system of nonlinear equations is written F(u) = 0, where u is an n-vector and its image F(u) is an (n−1)-vector. This formulation, without an explicit parameter space, is not usually suitable for the formulations in the following sections, because they refer to parameterized autonomous nonlinear dynamical systems of the form x′(t) = F(x(t), λ). However, in an algebraic system there is no distinction between the unknowns and the parameters. Periodic motions A periodic motion is a closed curve in phase space. That is, for some period T, x(t + T) = x(t). The textbook example of a periodic motion is the undamped pendulum. If the phase space is periodic in one or more coordinates, say with x and x + Ω identified for a vector Ω of periods, then there is a second kind of periodic motion defined by x(t + T) = x(t) + kΩ for every integer k. The first step in writing an implicit system for a periodic motion is to move the period T from the boundary conditions to the ODE: x′(s) = T·F(x(s), λ), with x(0) = x(1). The second step is to add an additional equation, a phase constraint, that can be thought of as determining the period. This is necessary because any solution of the above boundary value problem can be shifted in time by an arbitrary amount (time does not appear in the defining equations—the dynamical system is called autonomous). There are several choices for the phase constraint. If is a know
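A minimal sketch of natural-parameter continuation with a Newton corrector (the scalar system F(u, λ) = u³ − u + λ is our own illustration, not from the article):

#include <cmath>
#include <cstdio>

// Step lambda along the solution branch through (u, lambda) = (-1, 0) of
//   F(u, lambda) = u^3 - u + lambda = 0,
// re-solving for u by Newton's method started from the previous solution.
static double F (double u, double lam) { return u * u * u - u + lam; }
static double Fu(double u)             { return 3 * u * u - 1.0;     }

int main() {
    double u = -1.0;                           // known solution at lambda = 0
    for (double lam = 0.0; lam < 0.5001; lam += 0.05) {
        for (int it = 0; it < 25 && std::fabs(F(u, lam)) > 1e-12; ++it)
            u -= F(u, lam) / Fu(u);            // Newton corrector
        std::printf("lambda = %4.2f   u = %+.6f\n", lam, u);
    }
    // Near a fold (where dF/du = 0) this scheme fails; pseudo-arclength
    // continuation is the standard remedy.
    return 0;
}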
https://en.wikipedia.org/wiki/Ken%20Ono
Ken Ono (born March 20, 1968) is an American mathematician who specializes in number theory, especially in integer partitions, modular forms, umbral moonshine, the Riemann Hypothesis and the fields of interest to Srinivasa Ramanujan. He is the STEM Advisor to the Provost and the Marvin Rosenblum Professor of Mathematics at the University of Virginia. Early life and education Ono was born on March 20, 1968, in Philadelphia, Pennsylvania. He is the son of mathematician Takashi Ono, who emigrated from Japan to the United States after World War II. His older brother, immunologist and university president Santa J. Ono, was born while Takashi Ono was in Canada working at the University of British Columbia, but by the time Ken Ono was born the family had returned to the US for a position at the University of Pennsylvania. In the 1980s, Ono attended Towson High School, but he dropped out. He later enrolled at the University of Chicago without a high school diploma. There he raced bicycles, and he was a member of the Pepsi–Miyata Cycling Team. He received his BA from the University of Chicago in 1989, where he was a member of the Psi Upsilon fraternity. He earned his PhD in 1993 at UCLA where his advisor was Basil Gordon. Initially he planned to study medicine, but later switched to mathematics. He attributes his interest in mathematics to his father. Career Ono worked as an instructor at Woodbury University from 1991 to 1993, as a visiting assistant professor at the University of Georgia from 1993 to 1994, and as a visiting assistant professor at the University of Illinois at Urbana-Champaign from 1994 to 1995. He was a member of the Institute for Advanced Study from 1995 to 1997. Ono worked at Pennsylvania State University from 1997 to 2000 as an assistant professor and then as the Louis A. Martarano Professor of Mathematics. He moved to the University of Wisconsin-Madison as an associate professor in 1999, and later became the Solle P. and Margaret Manasse Professor
https://en.wikipedia.org/wiki/Fraction%20of%20variance%20unexplained
In statistics, the fraction of variance unexplained (FVU) in the context of a regression task is the fraction of variance of the regressand (dependent variable) Y which cannot be explained, i.e., which is not correctly predicted, by the explanatory variables X. Formal definition Suppose we are given a regression function f yielding for each y_i an estimate ŷ_i = f(x_i), where x_i is the vector of the ith observations on all the explanatory variables. We define the fraction of variance unexplained (FVU) as: FVU = VARerr / VARtot = SSerr / SStot = 1 − R², where R² is the coefficient of determination and VARerr and VARtot are the variance of the residuals and the sample variance of the dependent variable. SSerr (the sum of squared prediction errors, equivalently the residual sum of squares), SStot (the total sum of squares), and SSreg (the sum of squares of the regression, equivalently the explained sum of squares) are given by SSerr = Σi (y_i − ŷ_i)², SStot = Σi (y_i − ȳ)², and SSreg = Σi (ŷ_i − ȳ)². Alternatively, the fraction of variance unexplained can be defined as follows: FVU = MSE(f) / VARtot, where MSE(f) is the mean squared error of the regression function ƒ. Explanation It is useful to consider the second definition to understand FVU. When trying to predict Y, the most naive regression function that we can think of is the constant function predicting the mean of Y, i.e., f(x_i) = ȳ. It follows that the MSE of this function equals the variance of Y; that is, SSerr = SStot, and SSreg = 0. In this case, no variation in Y can be accounted for, and the FVU then has its maximum value of 1. More generally, the FVU will be 1 if the explanatory variables X tell us nothing about Y in the sense that the predicted values of Y do not covary with Y. But as prediction gets better and the MSE can be reduced, the FVU goes down. In the case of perfect prediction where ŷ_i = y_i for all i, the MSE is 0, SSerr = 0, SSreg = SStot, and the FVU is 0. See also Coefficient of determination Correlation Explained sum of squares Lack-of-fit sum of squares Linear regression Regression analysis Mean absolute scaled error
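A small numerical sketch of the definition (the data are made-up illustration values):

#include <cstddef>
#include <cstdio>
#include <vector>

// FVU = SSerr / SStot = 1 - R^2, computed from observations y and
// model predictions yhat.
double fvu(const std::vector<double>& y, const std::vector<double>& yhat) {
    double mean = 0.0;
    for (double v : y) mean += v;
    mean /= y.size();
    double ssErr = 0.0, ssTot = 0.0;
    for (std::size_t i = 0; i < y.size(); ++i) {
        ssErr += (y[i] - yhat[i]) * (y[i] - yhat[i]); // residual sum of squares
        ssTot += (y[i] - mean) * (y[i] - mean);       // total sum of squares
    }
    return ssErr / ssTot;
}

int main() {
    std::vector<double> y    = {1.0, 2.0, 3.0, 4.0};
    std::vector<double> yhat = {1.1, 1.9, 3.2, 3.8};  // a good fit: FVU near 0
    std::printf("FVU = %.4f\n", fvu(y, yhat));
    std::vector<double> naive(4, 2.5);                // predicting the mean
    std::printf("FVU = %.4f\n", fvu(y, naive));       // gives FVU = 1
    return 0;
}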
https://en.wikipedia.org/wiki/Distributive%20homomorphism
A congruence θ of a join-semilattice S is monomial, if the θ-equivalence class of any element of S has a largest element. We say that θ is distributive, if it is a join, in the congruence lattice Con S of S, of monomial join-congruences of S. The following definition originates in Schmidt's 1968 work and was subsequently adjusted by Wehrung. Definition (weakly distributive homomorphisms). A homomorphism μ : S → T between join-semilattices S and T is weakly distributive, if for all c in S and all a, b in T such that μ(c) ≤ a ∨ b, there are elements x and y of S such that c ≤ x ∨ y, μ(x) ≤ a, and μ(y) ≤ b. Examples: (1) For an algebra B and a reduct A of B (that is, an algebra with the same underlying set as B but whose set of operations is a subset of the one of B), the canonical map from Conc A to Conc B is weakly distributive. Here, Conc A denotes the (∨, 0)-semilattice of all compact congruences of A. (2) For a convex sublattice K of a lattice L, the canonical map from Conc K to Conc L is weakly distributive.
https://en.wikipedia.org/wiki/Microsoft%20PowerToys
Microsoft PowerToys is a set of freeware system utilities designed for power users, developed by Microsoft for use on the Windows operating system. These programs add or change features to maximize productivity or add more customization. PowerToys are available for Windows 95, Windows XP, Windows 10 and Windows 11. The PowerToys for Windows 10 and Windows 11 are free and open-source software licensed under the MIT License and hosted on GitHub. PowerToys for Windows 95 PowerToys for Windows 95 was the first version of Microsoft PowerToys and included 15 tools for power users. It included Tweak UI, a system utility for tweaking the more obscure settings in Windows. In most cases, Tweak UI exposed settings that were otherwise only accessible by directly modifying the Windows Registry. Included components The following PowerToys for Windows 95 were available: CabView opened cabinet files like ordinary folders; CDAutoPlay made AutoPlay work on any non-audio CD; Command Prompt Here allowed the user to start a command prompt from any folder in Windows Explorer by right-clicking (native in Windows Vista onwards); Contents Menu allowed users to access folders and files from a context menu without having to open their folders; Desktop Menu allowed users to open items on the desktop from a menu on the Taskbar; Explore From Here enabled users to open a Windows Explorer view from any folder so that the folder acts as the root-level folder; FindX added drag-and-drop capabilities to the Find (later called Search) menu; FlexiCD allowed users to play an audio CD from the Taskbar; Quick Res allowed users to quickly change the screen resolution; Round Clock added an analog round clock without a square window; Send To X consisted of Shell extensions which added several commonly accessed locations such as clipboard, desktop, command-line or any folder to the Send To context menu in Explorer; Shortcut Target Menu allowed users to access the target file a shortcut is pointing to from the co
https://en.wikipedia.org/wiki/Supernode%20%28circuit%29
In circuit theory, a supernode is a theoretical construct that can be used to solve a circuit. This is done by viewing a voltage source on a wire as a point source voltage in relation to other point voltages located at various nodes in the circuit, relative to a ground node that is assigned a potential of zero. A supernode exists when an ideal voltage source appears between any two nodes of an electric circuit. Each supernode contains two nodes, one non-reference node and another node that may be a second non-reference node or the reference node. Supernodes containing the reference node have one node voltage variable. For nodal analysis, the supernode construct is only required between two non-reference nodes. Nodal analysis It is related to Kirchhoff's Current Law, which states that the total or algebraic sum of currents meeting at a junction or node is zero. Every junction where two or more branches meet is a node. One of the nodes in the network is taken as the reference node. If there are n nodes in any network, the number of simultaneous equations to be solved will be (n − 1). A small worked example is sketched below. See also Node
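As a minimal illustration (our own example circuit, not from the article): let an ideal source V_s connect non-reference nodes 1 and 2, let resistors R_1 and R_2 run from those nodes to ground, and let a current source I_s inject current into node 1. Treating nodes 1 and 2 as a single supernode gives two equations in the two unknown node voltages:

$$ \begin{aligned} V_1 - V_2 &= V_s && \text{(source constraint inside the supernode)} \\ \frac{V_1}{R_1} + \frac{V_2}{R_2} &= I_s && \text{(KCL summed over the supernode boundary)} \end{aligned} $$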
https://en.wikipedia.org/wiki/Circle%20of%20stars
A circle of stars often represents unity, solidarity and harmony in flags, seals and signs, and is also seen in iconographic motifs related to the Woman of the Apocalypse as well as in Baroque allegoric art that sometimes depicts the Crown of Immortality. Woman of the Apocalypse The New Testament's Book of Revelation (12:1, 2 & 5) describes the Woman of the Apocalypse: And there appeared a great wonder in heaven; a woman clothed with the sun, and the moon under her feet, and upon her head a crown of twelve stars. And she being with child cried, travailing in birth. .... And she brought forth a man child, who was to rule all nations with a rod of iron: and her child was caught up unto God, and to his throne. In Catholic tradition she has been identified with the Blessed Virgin Mary, especially in connection with the Immaculate Conception. Mary is often pictured with a crown or circle of stars. The doctrine of the Immaculate Conception was somewhat controversial in the medieval church, and the liturgical Office for the feast was only established in 1615. In 1649, Francisco Pacheco (father-in-law of Velázquez) published his Art of Painting, firmly establishing the detailed correct iconography for paintings of the Virgin of the Immaculate Conception, which included the circle of stars (he also advised the inquisition in Seville on artistic matters). This was followed by Murillo and his school in very many paintings, and influenced non-Spanish depictions. European Flag The European flag, first adopted by the Council of Europe, consists of 12 golden stars in a circle on a blue background. The stars symbolise the ideals of unity, solidarity and harmony among the peoples of Europe. The number of stars has nothing to do with the number of member countries, though the circle is a symbol of unity. Arsène Heitz, one of the flag designers, revealed in 1987 that his inspiration was the crown of twelve stars of the Woman of the Apocalypse, often found in modern Marian icon
https://en.wikipedia.org/wiki/Apolipoprotein%20H
β2-glycoprotein 1, also known as beta-2 glycoprotein 1 and Apolipoprotein H (Apo-H), is a 38 kDa multifunctional plasma protein that in humans is encoded by the APOH gene. One of its functions is to bind cardiolipin. Upon binding, both cardiolipin and β2-GP1 undergo large changes in structure. Within the structure of Apo-H is a stretch of positively charged amino acids (protein sequence positions 282-287), Lys-Asn-Lys-Glu-Lys-Lys, which is involved in phospholipid binding. β2-GP1 has a complex involvement in agglutination. It appears to alter adenosine diphosphate (ADP)-mediated agglutination of platelets. Normally, β2-GP1 assumes an anticoagulation activity in serum (by inhibiting coagulation factors); however, changes in blood factors can result in a reversal of that activity. Although previously referred to as apolipoprotein H, it is not present in appreciable quantities in the lipoprotein fractions, so ApoH is therefore thought to be a misnomer. Inhibitory activities β2-GP1 appears to completely inhibit serotonin release by the platelets and prevents subsequent waves of the ADP-induced aggregation. The activity of β2-GP1 appears to involve the binding of agglutinating, negatively charged compounds, and it inhibits agglutination by the contact activation of the intrinsic blood coagulation pathway. β2-GP1 causes a reduction of the prothrombinase binding sites on platelets and reduces the activation caused by collagen when thrombin is present at physiological serum concentrations of β2-GP1, suggesting a regulatory role of β2-GP1 in coagulation. β2-GP1 also inhibits the generation of factor Xa in the presence of platelets. β2-GP1 also inhibits the activation of factor XIIa. In addition, β2-GP1 inhibits the activation of protein C by blocking its activity on phosphatidylserine:phosphatidylcholine vesicles; however, once protein C is activated, Apo-H fails to inhibit its activity. Since protein C is involved in factor Va degradation, Apo-H in
https://en.wikipedia.org/wiki/Mosco%20convergence
In mathematical analysis, Mosco convergence is a notion of convergence for functionals that is used in nonlinear analysis and set-valued analysis. It is a particular case of Γ-convergence. Mosco convergence is sometimes phrased as "weak Γ-liminf and strong Γ-limsup" convergence since it uses both the weak and strong topologies on a topological vector space X. In finite-dimensional spaces, Mosco convergence coincides with epi-convergence, while in infinite-dimensional ones, Mosco convergence is a strictly stronger property. Mosco convergence is named after Italian mathematician Umberto Mosco, currently the Harold J. Gay Professor of Mathematics at Worcester Polytechnic Institute. Definition Let X be a topological vector space and let X∗ denote the dual space of continuous linear functionals on X. Let Fn : X → [0, +∞] be functionals on X for each n = 1, 2, ... The sequence (or, more generally, net) (Fn) is said to Mosco converge to another functional F : X → [0, +∞] if the following two conditions hold: lower bound inequality: for each sequence of elements xn ∈ X converging weakly to x ∈ X, F(x) ≤ lim inf Fn(xn); upper bound inequality: for every x ∈ X there exists an approximating sequence of elements xn ∈ X, converging strongly to x, such that lim sup Fn(xn) ≤ F(x). Since lower and upper bound inequalities of this type are used in the definition of Γ-convergence, Mosco convergence is sometimes phrased as "weak Γ-liminf and strong Γ-limsup" convergence. Mosco convergence is sometimes abbreviated to M-convergence and denoted by Fn →M F.
https://en.wikipedia.org/wiki/Congruence%20lattice%20problem
In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some other lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive lattice is a congruence lattice is true for all distributive lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem. Preliminaries We denote by Con A the congruence lattice of an algebra A, that is, the lattice of all congruences of A under inclusion. The following is a universal-algebraic triviality. It says that for a congruence, being finitely generated is a lattice-theoretical property. Lemma. A congruence of an algebra A is finitely generated if and only if it is a compact element of Con A. As every congruence of an algebra is the join of the finitely generated congruences below it (e.g., every submodule of a module is the union of all its finitely generated submodules), we obtain the following result, first published by Birkhoff and Frink in 1948. Theorem (Birkhoff and Frink 1948). The congruence lattice Con A of any algebra A is an algebraic lattice. While congruences of lattices lose something in comparison to groups, modules, rings (they cannot be identified with subsets of the universe), they also have a property unique among all the other structures encountered yet. Theorem (Funayama and Nakayama 1942). The congruence lattice of any lattice is distributive. This says that α ∧ (β ∨ γ) = (α ∧ β) ∨ (α ∧ γ), for any congruences α, β, and γ of a given lattice. The analogue of this result fails, for instance, for modules, as A ∩ (B + C) ≠ (A ∩ B) + (A ∩ C), as a rule, for submodules A, B, C of a given module. Soon after this result, Dilworth p
https://en.wikipedia.org/wiki/ChIP-on-chip
ChIP-on-chip (also known as ChIP-chip) is a technology that combines chromatin immunoprecipitation ('ChIP') with DNA microarray ("chip"). Like regular ChIP, ChIP-on-chip is used to investigate interactions between proteins and DNA in vivo. Specifically, it allows the identification of the cistrome, the sum of binding sites, for DNA-binding proteins on a genome-wide basis. Whole-genome analysis can be performed to determine the locations of binding sites for almost any protein of interest. As the name of the technique suggests, such proteins are generally those operating in the context of chromatin. The most prominent representatives of this class are transcription factors, replication-related proteins, like origin recognition complex protein (ORC), histones, their variants, and histone modifications. The goal of ChIP-on-chip is to locate protein binding sites that may help identify functional elements in the genome. For example, in the case of a transcription factor as a protein of interest, one can determine its transcription factor binding sites throughout the genome. Other proteins allow the identification of promoter regions, enhancers, repressors and silencing elements, insulators, boundary elements, and sequences that control DNA replication. If histones are subject of interest, it is believed that the distribution of modifications and their localizations may offer new insights into the mechanisms of regulation. One of the long-term goals ChIP-on-chip was designed for is to establish a catalogue of (selected) organisms that lists all protein-DNA interactions under various physiological conditions. This knowledge would ultimately help in the understanding of the machinery behind gene regulation, cell proliferation, and disease progression. Hence, ChIP-on-chip offers both potential to complement our knowledge about the orchestration of the genome on the nucleotide level and information on higher levels of information and regulation as it is propagated by resea
https://en.wikipedia.org/wiki/David%20Goeddel
David V. Goeddel (born 1951) is an American molecular biologist who, employed at the time by Genentech, successfully used genetic engineering to coax bacteria into creating synthetic human insulin, human growth hormone, and human tissue plasminogen activator (tPA) for use in therapeutic medicine. Recruited by Bob Swanson in 1978, he was the first non-university scientist to be hired at Genentech, and the company's third employee. Goeddel became legendary in the biotechnology and molecular biology fields by cloning virtually all of Genentech's early products and/or processes, including synthetic insulin, growth hormone, and tPA, often beating out bigger and more established laboratories in the process. Besides being perhaps the single most important contributor to Genentech's rise to one of the nation's premier biotech companies, his extraordinary drive and competitive work ethic embodied Genentech's early "Clone or Die" culture. Together with Steve McKnight and Robert Tjian, he founded Tularik in 1991, and was its president and CEO until Tularik was acquired by Amgen for $1.3 billion in 2004. Goeddel earned his bachelor's degree in chemistry from the University of California, San Diego, and his PhD in biochemistry from the University of Colorado, Boulder. He is a member of the National Academy of Sciences, and is a recipient of the Eli Lilly Award in Biological Chemistry and the Scheele Award from the Swedish Academy of Pharmaceutical Sciences. Personal life Goeddel has two sons who have played Major League Baseball, Erik and Tyler.
https://en.wikipedia.org/wiki/Simple%20Soap%20Binding%20Profile
Simple Soap Binding Profile (official abbreviation is SSBP) is a specification from the Web Services Interoperability industry consortium. It is intended as a support profile for the WS-I Basic Profile. This profile defines the way WSDL (Web Services Description Language) documents are to bind operations to a specific transport protocol, SOAP. The Basic Profile 1.0 included the content and function of the Simple Soap Binding Profile. In other words, the Basic Profile 1.0 is roughly equivalent to the Basic Profile 1.1 plus the Simple Soap Binding Profile 1.0. Now that the Simple Soap Binding Profile is a separate profile, other WS-I documents can re-use (reference) it. External links WS-I SSBP 1.0 Official specifications Web service specifications Interoperability
https://en.wikipedia.org/wiki/Email%20appending
Email appending, also known as e-appending, is a marketing practice that involves taking known customer data (first name, last name, and postal address) and matching it against a vendor's database to obtain email addresses. The purpose is to grow one's email subscriber list with the intent of sending customers information via email instead of through traditional mail. Email appending is a controversial practice in the email marketing world, with critics claiming that sending email to people who never explicitly opted-in is against best practices. An email appending process involves either a business or consumer database made up of contacts including their name, address and company name [for business contacts]. If the company wants to expand into email communication, then they can involve a service provider that has a database of email addresses in order to merge the data and append business or consumer email addresses to their existing file. In this way they can have an updated database with the current email address of individuals on the list. The success of email appending depends on the quality of both databases being merged. Like other forms of Database marketing, marketing materials sent using e-pending may be considered spam. Mailers using appending by definition do not have consent of the individuals on their lists, since the individuals did not disclose their email addresses to begin with. Mail sent by appending methods therefore is Opt-out instead of Opt-in e-mail. In September 2011, The Messaging Anti-Abuse Working Group (MAAWG) released a position paper stating the practice of email appending is in direct violation to their values and is an abusive practice.
https://en.wikipedia.org/wiki/Oxbow%20code
In computer programming, oxbow code refers to fragments of program code that were once needed but which are now never used. Such code is typically formed when a program is modified, either when an item is superseded with a newer version but the old version is not removed, or when an item is removed or replaced, but the item's supporting code is not removed. Such code is normally removed unless sufficiently amusing or educational. Similarly, variables and data structures can be left around after the last code that used them has been removed, though this is more commonly called unused or unreferenced variables. The term is taken by analogy with oxbow lakes which are formed in nature when a bend in a river becomes so pronounced that the water breaks through from before the bend to after it, making the river straight again. When the sides of the new course silt up, a curved lake is left, disconnected from the main stream. Examples (from gnash/server/asobj/Global.cpp 1.46)

static void as_global_escape(const fn_call& fn)
{
    // List of chars we must convert to escape sequences
    const string strHexDigits = "0123456789ABCDEF";

    string strInput = fn.arg(0).to_string();
    URL::encode(strInput);
    fn.result->set_string(strInput.c_str());
}

In this, "strHexDigits" is oxbow code (or oxbow data). See also Dead code Unreachable code Source code
https://en.wikipedia.org/wiki/Unreferenced%20variable
An unreferenced variable in the source code of a computer program is a variable that is defined but which is never used. This may result in a harmless waste of memory. Many compilers detect such variables and do not allocate storage for them (i.e., "optimize away" their storage), generally also issuing a warning as they do. Some coding guideline documents consider an unreferenced variable to be a symptom of a potential coding fault. On the other hand, unreferenced variables can be used as temporary placeholders to indicate further expected future developments in the code. Examples C:

#include <stdio.h>

int main(void)
{
    int i, j;

    for (i = 0; i < 10; i++)
        printf("%d", i);
    return 0;
}

In this example, j is an unreferenced variable.
https://en.wikipedia.org/wiki/Konzo
Konzo is an epidemic paralytic disease occurring among hunger-stricken rural populations in Africa where a diet dominated by insufficiently processed cassava results in simultaneous malnutrition and high dietary cyanide intake. Konzo was first described by Giovanni Trolli in 1938 who compiled the observations from eight doctors working in the Kwango area of the Belgian Congo (now Democratic Republic of the Congo). Signs and symptoms The onset of paralysis (spastic paraparesis) is sudden and symmetrical and affects the legs more than the arms. The resulting disability is permanent but does not progress. Typically, a patient is standing and walking on the balls of the feet with rigid legs and often with ankle clonus. Initially, most patients experience generalized weakness during the first days and are bedridden for some days or weeks before trying to walk. Occasional blurred vision and/or speech difficulties typically clear during the first month, except in severely affected patients. Spasticity is present from the first day, without any initial phase of flaccidity. After the initial weeks of functional improvement, the spastic paraparesis remains stable for the rest of life. Some patients may experience an abrupt aggravating episode, e.g. a sudden and permanent worsening of the spastic paraparesis. Such episodes are identical to the initial onset and can therefore be interpreted as a second onset. The severity of konzo varies; cases range from only hyperreflexia in the lower limbs to a severely disabled patient with spastic paraparesis, associated weakness of the trunk and arms, impaired eye movements, speech and possibly visual impairment. Although the severity varies from patient to patient, the longest upper motor neurons are invariably more affected than the shorter ones. Thus, a konzo patient with speech impairment always shows severe symptoms in the legs and arms. Recently, neuropsychological effects of konzo have been described from DR Congo. Cause The
https://en.wikipedia.org/wiki/Banach%27s%20matchbox%20problem
Banach's match problem is a classic problem in probability attributed to Stefan Banach. Feller says that the problem was inspired by a humorous reference to Banach's smoking habit in a speech honouring him by Hugo Steinhaus, but that it was not Banach who set the problem or provided an answer. Suppose a mathematician carries two matchboxes at all times: one in his left pocket and one in his right. Each time he needs a match, he is equally likely to take it from either pocket. Suppose he reaches into his pocket and discovers for the first time that the box picked is empty. If it is assumed that each of the matchboxes originally contained N matches, what is the probability that there are exactly k matches in the other box? Solution Without loss of generality consider the case where the matchbox in his right pocket has an unlimited number of matches and let M be the number of matches removed from this one before the left one is found to be empty. When the left pocket is found to be empty, the man has chosen that pocket N + 1 times. Then M is the number of successes before N + 1 failures in Bernoulli trials with p = 1/2, which has the negative binomial distribution, and thus P(M = m) = C(N + m, m)(1/2)^(N + 1 + m). Returning to the original problem, we see that the probability that the left pocket is found to be empty first is 1/2, which equals the probability that the right pocket is found to be empty first, because both are equally likely. We see that the number of matches remaining in the other pocket is K = N − M. The expectation of the distribution is approximately 2√(N/π) − 1. (This is shown using Stirling's approximation.) So starting with boxes with N = 50 matches, the expected number of matches in the second box is about 7. See also List of things named after Stefan Banach
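A quick Monte Carlo check of this answer (our own illustration): with N = 50 matches per box, the mean number of matches left in the other box should come out near 7.

#include <cstdio>
#include <random>

int main() {
    const int N = 50;
    const int trials = 1000000;
    std::mt19937 rng(12345);
    std::bernoulli_distribution pickLeft(0.5);
    long long total = 0;                 // sum of matches left in other box
    for (int t = 0; t < trials; ++t) {
        int left = N, right = N;
        for (;;) {
            if (pickLeft(rng)) {
                if (left == 0)  { total += right; break; }  // found it empty
                --left;
            } else {
                if (right == 0) { total += left;  break; }  // found it empty
                --right;
            }
        }
    }
    std::printf("mean matches remaining: %.3f\n", (double)total / trials);
    return 0;
}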
https://en.wikipedia.org/wiki/Survive%20To%20Fight
Survive To Fight is the title of a British Army publication which details the use of NBC protective equipment and other procedures to be carried out in the event of an attack with nuclear, biological or chemical weapons. So far five editions have been published (and two reprint runs); the first three are in the form of a ring-bound manual with a plastic cover, and the last, Edition 5, is an 82-page loose leaf TAM insert (6 hole ringbinder). Edition I (AC71388) covers the use of the S6 Respirator and Mk.III NBC suit and overboots; Edition II (1990) (reprinted in 1992 with colour photographs on the cover) features the S10 Respirator and Mk.IV suit and overboots which had been introduced since. Edition III, with different colour photographs on the cover (1995) (reprinted once, glue-bound at the top, circa 1998), is a revised version of II, and features the addition of the Mk.V overboots. Edition IV (Jan 2002) is a TAM sized bound booklet. The last edition carrying the title 'Survive to Fight' was issued in September 2005 (edition 5), though this carried Army Code 64358 (not a JSP). The title of the publication was changed to JSP926 'Counter CBRN Aide Memoire' in July 2012 to reflect different operating conditions; this booklet is in the same loose leaf TAM insert format. The publication continues in this TAM sized form and can also be downloaded for use on a phone or laptop. See also Basic Battle Skills Battlefield First Aid All-In Fighting
https://en.wikipedia.org/wiki/Georgia%20Radio%20Hall%20of%20Fame
The Georgia Radio Museum and Hall of Fame was a non-profit corporation that honored the men and women of radio broadcasting in the U.S. state of Georgia. It was founded in 2007. The museum's LaGrange location closed in August 2020 due to lack of interest. In 2021, the museum's physical collections were relocated to the Columbus Collective Museums in Columbus, Georgia. See also List of museums in Georgia (U.S. state)
https://en.wikipedia.org/wiki/Cell-free%20system
A cell-free system is an in vitro tool widely used to study biological reactions that happen within cells apart from a full cell system, thus reducing the complex interactions typically found when working in a whole cell. Subcellular fractions can be isolated by ultracentrifugation to provide molecular machinery that can be used in reactions in the absence of many of the other cellular components. Eukaryotic and prokaryotic cell internals have been used for creation of these simplified environments. These systems have enabled cell-free synthetic biology to emerge, providing control over what reaction is being examined, as well as its yield, and lessening the considerations otherwise invoked when working with more sensitive live cells. Types Cell-free systems may be divided into two primary classifications: cell extract-based, which remove components from within a whole cell for external use, and purified enzyme-based, which use purified components of the molecules known to be involved in a given process. The cell extract-based type are susceptible to problems like quick degradation of components outside their host, as shown in a study by Kitaoka et al. where a cell-free translation system based on Escherichia coli (E. coli), of the cell extract-based type, had the mRNA template degrade very quickly and led to the halt of protein synthesis. Preparation The methods of preparation vary between situations of both types of cell-free systems. Cell extract–based Nobel prize winner Eduard Buchner was arguably the first to present a cell-free system using yeast extracts, but since then alternative sources have been found. E. coli, wheat germ, and rabbit reticulocytes have all proven useful to create cell-free systems by extraction of their interior components. E. coli 30S extracts have been acquired, for example, by grinding the bacteria with alumina, followed by further cleaning. Similarly, wheat germ has been ground with acid-washed sand or powdered glass to open the ce
https://en.wikipedia.org/wiki/Epiphytic%20fungus
An epiphytic fungus is a fungus that grows upon, or attached to, a living plant. The term epiphytic derives from the Greek epi- (meaning 'upon') and phyton (meaning 'plant'). Examples Many examples of epiphytic microorganisms exist. The ergoline alkaloids found in Convolvulaceae are produced by a seed-transmitted epiphytic clavicipitaceous fungus. See also Epiphyte Endosymbiont Epilith, an organism that grows on rock Epibiont, an organism that grows on another life form Epiphytic bacteria Mycorrhiza
https://en.wikipedia.org/wiki/Sterility%20assurance%20level
In microbiology, sterility assurance level (SAL) is the probability that a single unit that has been subjected to sterilization nevertheless remains nonsterile. It is never possible to prove that all organisms have been destroyed, as the likelihood of survival of an individual microorganism is never zero. So SAL is used to express the probability of survival. For example, medical device manufacturers design their sterilization processes for an extremely low SAL, such as 10^−6, which is a 1 in 1,000,000 chance of a non-sterile unit. SAL also describes the killing efficacy of a sterilization process. A very effective sterilization process has a very low SAL. Terminology Mathematically, SALs are probabilities, often very small but (by definition) always lying between zero and one. So when they are expressed in scientific notation their exponents are negative, as for instance, "The SAL of this process is 10^−6". But the term SAL is sometimes also used to refer to a sterilization's efficacy. This usage (technically the multiplicative inverse) results in positive exponents, as in "The SAL of this process is 10^6". To avoid ambiguity from these inverse usages, some authors use the term log reduction (e.g., "This process gives a six-log reduction"). SALs can also be used to describe the microbial population that was destroyed by the sterilization process, though this is not the same as the probabilistic definition. What is often called a "log reduction" (technically a reduction by one order of magnitude) represents a 90% reduction in microbial population. Thus a process that achieves a "6-log reduction" (10^−6) will theoretically reduce an initial population of one million organisms to about a single surviving organism, as the arithmetic below shows. The difference in meaning between this and the probabilistic sense can be seen from an example: if careful assays before and after indicate that a procedure has inactivated 90% of the biological agents in some unit, then the procedure can be correctly reported to have
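Writing the log-reduction arithmetic out explicitly (a restatement of the example above, with N₀ the initial population and n the number of log reductions):

$$ N_{\text{surviving}} = N_0 \times 10^{-n}, \qquad 10^{6} \times 10^{-6} = 10^{0} = 1. $$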
https://en.wikipedia.org/wiki/Naxi%20script
The Naxi language of southwestern China may be written in the syllabic geba script. There is also a Naxi tradition of pictographic symbols called dongba; this may sometimes be glossed with geba for clarification, since a dongba text may be intelligible only to its author. A Latin alphabet was developed for Naxi in the 20th century. External links Dr. Richard S. Cook, Naxi Pictographic and Syllabographic Scripts: Research notes toward a Unicode encoding of Naxi Naxi Manuscript Collection at the Library of Congress Naxi scripts at Omniglot World Digital Library presentation of NZD185: Romance and Love-Related Ceremonies. Library of Congress. Primary source 19th and 20th century manuscripts from the Naxi people, Yunnan Province, China; only pictographic writing system still in use anywhere in the world. Writing systems Naxi language
https://en.wikipedia.org/wiki/OmniPop
OmniPop is a program used to classify populations by autosomal DNA results. It is a Microsoft Excel file and requires Excel to run. The program is recognized and used by NIST for the purpose of clustering autosomal markers and is also suggested by commercial genealogical genetics companies to their customers for use in understanding their results.
https://en.wikipedia.org/wiki/Thermophysics
Thermophysics is the application of thermodynamics to geophysics and to planetary science more broadly. It may also be used to refer to the field of thermodynamic and transport properties. Remote sensing Earth thermophysics is a branch of geophysics that uses the naturally occurring surface temperature as a function of the cyclical variation in solar radiation to characterise planetary material properties. Thermophysical properties are characteristics that control the diurnal, seasonal, or climatic surface and subsurface temperature variations (or thermal curves) of a material. The most important thermophysical property is thermal inertia, which controls the amplitude of the thermal curve and albedo (or reflectivity), which controls the average temperature. This field of observations and computer modeling was first applied to Mars due to the ideal atmospheric pressure for characterising granular materials based upon temperature. The Mariner 6, Mariner 7, and Mariner 9 spacecraft carried thermal infrared radiometers, and a global map of thermal inertia was produced from modeled surface temperatures collected by the Infrared Thermal Mapper Instruments (IRTM) on board the Viking 1 and 2 Orbiters. The original thermophysical models were based upon the studies of lunar temperature variations. Further development of the models for Mars included surface-atmosphere energy transfer, atmospheric back-radiation, surface emissivity variations, CO2 frost and blocky surfaces, variability of atmospheric back-radiation, effects of a radiative-convective atmosphere, and single-point temperature observations.
https://en.wikipedia.org/wiki/Photoelectric%20sensor
A photoelectric sensor is a device used to determine the distance, absence, or presence of an object by using a light transmitter, often infrared, and a photoelectric receiver. They are largely used in industrial manufacturing. There are three different useful types: opposed (through-beam), retro-reflective, and proximity-sensing (diffused). Types A self-contained photoelectric sensor contains the optics along with the electronics. It requires only a power source. The sensor performs its own modulation, demodulation, amplification, and output switching. Some self-contained sensors provide such options as built-in control timers or counters. Because of technological progress, self-contained photoelectric sensors have become increasingly smaller. Remote photoelectric sensors used for remote sensing contain only the optical components of a sensor. The circuitry for power input, amplification, and output switching is located elsewhere, typically in a control panel. This allows the sensor itself to be very small. Also, the controls for the sensor are more accessible, since they may be bigger. When space is restricted or the environment too hostile even for remote sensors, fibre optics may be used. Fibre optics are passive mechanical sensing components. They may be used with either remote or self-contained sensors. They have no electrical circuitry and no moving parts, and can safely pipe light into and out of hostile environments. Sensing modes A through-beam arrangement consists of a receiver located within the line-of-sight of the transmitter. In this mode, an object is detected when the light beam is blocked from getting to the receiver from the transmitter. A retroreflective arrangement places the transmitter and receiver at the same location and uses a reflector to bounce the light beam back from the transmitter to the receiver. An object is sensed when the beam is interrupted and fails to reach the receiver. A proximity-sensing (diffused) a
https://en.wikipedia.org/wiki/Low%20%28computability%29
In computability theory, a Turing degree [X] is low if the Turing jump [X′] is 0′. A set is low if it has low degree. Since every set is computable from its jump, any low set is computable in 0′, but the jump of a set computable in 0′ can bound any degree recursively enumerable in 0′ (Shoenfield jump inversion). X being low says that its jump X′ has the least possible degree in terms of Turing reducibility for the jump of a set. There are several related lowness properties: A degree is low_n if its nth jump is the nth jump of 0. A set X is generalized low if it satisfies X′ ≡_T X ⊕ 0′, that is, if its jump has the least degree possible given X. A degree d is generalized low_n if its nth jump is the (n−1)st jump of the join of d with 0′. More generally, properties of sets which describe their being computationally weak (when used as a Turing oracle) are referred to under the umbrella term lowness properties. By the low basis theorem of Jockusch and Soare, any nonempty Π01 class in 2^ω contains a set of low degree. This implies that, although low sets are computationally weak, they can still accomplish such feats as computing a completion of Peano Arithmetic. In practice, this allows a restriction on the computational power of objects needed for recursion-theoretic constructions: for example, those used in analyzing the proof-theoretic strength of Ramsey's theorem. See also High (computability) Low basis theorem
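The definitions above, restated in standard notation (a consolidation of the text, with X^{(n)} the nth jump and ⊕ the join; the GL_n clause follows the article's convention):

```latex
\begin{align*}
X \text{ is low}                  &\iff X' \equiv_T 0' \\
X \text{ is low}_n                &\iff X^{(n)} \equiv_T 0^{(n)} \\
X \text{ is generalized low}      &\iff X' \equiv_T X \oplus 0' \\
\mathbf{d} \text{ is generalized low}_n &\iff \mathbf{d}^{(n)} = (\mathbf{d} \vee \mathbf{0}')^{(n-1)}
\end{align*}
```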
https://en.wikipedia.org/wiki/High%20%28computability%29
In computability theory, a Turing degree [X] is high if it is computable in 0′ and the Turing jump [X′] is 0′′, which is the greatest possible degree in terms of Turing reducibility for the jump of a set that is computable in 0′. Similarly, a degree is high_n if its nth jump is the (n+1)st jump of 0. Even more generally, a degree d is generalized high_n if its nth jump is the nth jump of the join of d with 0′. See also Low (computability)
https://en.wikipedia.org/wiki/Low%20basis%20theorem
The low basis theorem is one of several basis theorems in computability theory, each of which shows that, given an infinite subtree of the binary tree 2^{<ω}, it is possible to find an infinite path through the tree with particular computability properties. The low basis theorem, in particular, shows that there must be a path which is low; that is, the Turing jump of the path is Turing equivalent to the halting problem 0′. Statement and proof The low basis theorem states that every nonempty Π01 class in 2^ω (see arithmetical hierarchy) contains a set of low degree (Soare 1987:109). This is equivalent, by definition, to the statement that each infinite computable subtree of the binary tree has an infinite path of low degree. The proof uses the method of forcing with Π01 classes (Cooper 2004:330). Hájek and Kučera (1989) showed that the low basis theorem is provable in a fragment of first-order arithmetic. The forcing argument can also be formulated explicitly as follows. For a set X ⊆ ω, let f(X) = Σ_{i : {i}(X)↓} 2^{−i}, where {i}(X)↓ means that Turing machine i halts on X (the sum ranging over all such i). Then, for every nonempty (lightface) Π01 class S ⊆ 2^ω, the (unique) X ∈ S minimizing f(X) has a low Turing degree. To see this, {i}(X)↓ ⇔ ∀Y∈S ({i}(Y)↓ ∨ ∃j<i ({j}(Y)↓ ∧ ¬{j}(X)↓)), which can be computed from 0′ by induction on i; note that ∀Y∈S φ(Y) is Σ01 for Σ01 φ. In other words, whether a machine halts on X is forced by a finite condition, which allows for X′ = 0′. Application One application of the low basis theorem is to construct completions of effective theories so that the completions have low Turing degree. For example, the low basis theorem implies the existence of PA degrees strictly below 0′.
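The minimization argument, rendered in display notation (the same content as the inline formulas above):

```latex
f(X) \;=\; \sum_{i\,:\,\{i\}(X)\downarrow} 2^{-i},
\qquad
\{i\}(X)\!\downarrow \;\iff\;
\forall Y \in S\ \Bigl( \{i\}(Y)\!\downarrow \;\lor\; \exists j<i\ \bigl( \{j\}(Y)\!\downarrow \,\land\, \lnot\,\{j\}(X)\!\downarrow \bigr) \Bigr).
```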
https://en.wikipedia.org/wiki/Robert%20Koch%20Medal%20and%20Award
The Robert Koch Medal and Award are two prizes awarded annually by the German Robert Koch Foundation for excellence in the biomedical sciences. These awards grew out of early attempts by German physician Robert Koch to generate funding to support his research into the cause and cure for tuberculosis. Koch discovered the bacterium (Mycobacterium tuberculosis) responsible for the dreaded disease and rapidly acquired international support, including 500,000 gold marks from the Scottish-American philanthropist Andrew Carnegie. The Robert Koch Prize Since 1970, the Robert Koch Foundation has awarded prizes for major advances in the biomedical sciences, particularly in the fields of microbiology and immunology. The prestige of this award has grown over the past decades so that it is now widely regarded as the leading international scientific prize in microbiology. As has been described by a jury member for the prize, the committee often asks, "What would Robert Koch work on today?" to decide on research that should be granted recognition. The more specific Robert Koch Prize is commonly considered one of the stepping-stones (along with other prizes such as the Lasker Award) to eventual Nobel Prize recognition for scientists in the fields of microbiology and immunology, and a number of Robert Koch Prize winners subsequently became Nobel laureates, such as César Milstein, Susumu Tonegawa and Harald zur Hausen. Other notable awardees include Albert Sabin, Jonas Salk and John Enders for their pioneering work on the development of polio vaccines. Only Enders was recognized with a Nobel Prize, together with Thomas Huckle Weller and Frederick Chapman Robbins. Two separate Robert Koch Awards are presented annually: the Robert Koch Gold Medal for accumulated excellence in biomedical research and the Robert Koch Prize, worth €120,000, for a major discovery in biomedical science. Robert Koch Prize Winners since 1960 Source: 1960 (Germany), René Dubos (USA), (Japan), (Germany), (German
https://en.wikipedia.org/wiki/Nuclear%20emulsion
A nuclear emulsion plate is a type of particle detector first used in nuclear and particle physics experiments in the early decades of the 20th century. It is a modified form of photographic plate that can be used to record and investigate fast charged particles like alpha-particles, nucleons, leptons or mesons. After exposing and developing the emulsion, single particle tracks can be observed and measured using a microscope. Description The nuclear emulsion plate is a modified form of photographic plate, coated with a thicker photographic emulsion of gelatine containing a higher concentration of very fine silver halide grains, the exact composition of the emulsion being optimised for particle detection. It has the advantage of extremely high spatial precision, limited only by the size of the silver halide grains (a few microns), a precision that surpasses even the best of modern particle detectors. A stack of emulsion plates can record and preserve the interactions of particles so that their trajectories are recorded in 3-dimensional space as a trail of silver-halide grains, which can be viewed from any aspect on a microscopic scale. In addition, the emulsion plate is an integrating device that can be exposed or irradiated until the desired amount of data has been accumulated. It is compact, with no associated read-out cables or electronics, allowing the plates to be installed in very confined spaces and, compared to other detector technologies, is significantly less expensive to manufacture, operate and maintain. These features were decisive in enabling the high-altitude mountain and balloon-based studies of cosmic rays that led to the discovery of the pi-meson and parity violation in K-meson decays, shedding light on the true nature and extent of the subnuclear "particle zoo" and marking a milestone in the development of modern experimental particle physics. The chief disadvantage of nuclear emulsion
https://en.wikipedia.org/wiki/Localized%20molecular%20orbitals
Localized molecular orbitals are molecular orbitals which are concentrated in a limited spatial region of a molecule, such as a specific bond or lone pair on a specific atom. They can be used to relate molecular orbital calculations to simple bonding theories, and also to speed up post-Hartree–Fock electronic structure calculations by taking advantage of the local nature of electron correlation. Localized orbitals in systems with periodic boundary conditions are known as Wannier functions. Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation. In the water molecule, for example, ab initio calculations show bonding character primarily in two molecular orbitals, each with electron density equally distributed between the two O-H bonds. The localized orbital corresponding to one O-H bond is the sum of these two delocalized orbitals, and the localized orbital for the other O-H bond is their difference, as in valence bond theory. For multiple bonds and lone pairs, different localization procedures give different orbitals. The Boys and Edmiston-Ruedenberg localization methods mix these orbitals to give equivalent bent bonds in ethylene and rabbit-ear lone pairs in water, while the Pipek-Mezey method preserves their respective σ and π symmetry. Equivalence of localized and delocalized orbital descriptions For molecules with a closed electron shell, in which each molecular orbital is doubly occupied, the localized and delocalized orbital descriptions are in fact equivalent and represent the same physical state. It might seem, again using the example of water, that placing two electrons in the first bond and two other electrons in the second bond is not the same as having four electrons free to move over both bonds. However, in quantu
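For the water example, the localization amounts to a 2×2 unitary rotation of the two delocalized bonding orbitals ψ₁ and ψ₂; the 1/√2 normalization factor is the standard convention, assumed here rather than stated in the text:

```latex
\phi_{\mathrm{OH}_a} \;=\; \tfrac{1}{\sqrt{2}}\left(\psi_1 + \psi_2\right),
\qquad
\phi_{\mathrm{OH}_b} \;=\; \tfrac{1}{\sqrt{2}}\left(\psi_1 - \psi_2\right).
```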
https://en.wikipedia.org/wiki/Depth%20%28ring%20theory%29
In commutative and homological algebra, depth is an important invariant of rings and modules. Although depth can be defined more generally, the most common case considered is the case of modules over a commutative Noetherian local ring. In this case, the depth of a module is related to its projective dimension by the Auslander–Buchsbaum formula. A more elementary property of depth is the inequality depth(M) ≤ dim(M), where dim(M) denotes the Krull dimension of the module M. Depth is used to define classes of rings and modules with good properties, for example, Cohen-Macaulay rings and modules, for which equality holds. Definition Let R be a commutative ring, I an ideal of R and M a finitely generated R-module with the property that IM is properly contained in M. (That is, some elements of M are not in IM.) Then the I-depth of M, also commonly called the grade of M, is defined as depth_I(M) = min { i : Ext^i_R(R/I, M) ≠ 0 }. By definition, the depth of a local ring R with a maximal ideal m is its m-depth as a module over itself. If R is a Cohen-Macaulay local ring, then the depth of R is equal to the dimension of R. By a theorem of David Rees, the depth can also be characterized using the notion of a regular sequence. Theorem (Rees) Suppose that R is a commutative Noetherian local ring with the maximal ideal m and M is a finitely generated R-module. Then all maximal regular sequences x1, ..., xn for M, where each xi belongs to m, have the same length n, equal to the m-depth of M. Depth and projective dimension The projective dimension and the depth of a module over a commutative Noetherian local ring are complementary to each other. This is the content of the Auslander–Buchsbaum formula, which is not only of fundamental theoretical importance, but also provides an effective way to compute the depth of a module. Suppose that R is a commutative Noetherian local ring with the maximal ideal m and M is a finitely generated R-module. If the projective dimension of M is finite, then the Auslander–Buchsbaum formula states pd_R(M) + depth(M) = depth(R). Depth zero rings A commutative Noetherian local ri
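The reconstructed formulas above, in display form (standard statements; R Noetherian local with maximal ideal m, M finitely generated):

```latex
\operatorname{depth}_I(M) \;=\; \min\bigl\{\, i \;:\; \operatorname{Ext}_R^i(R/I,\,M) \neq 0 \,\bigr\},
\qquad
\operatorname{depth}(M) \;\le\; \dim(M),
```

and, when the projective dimension pd_R(M) is finite, the Auslander–Buchsbaum formula:

```latex
\operatorname{pd}_R(M) \;+\; \operatorname{depth}(M) \;=\; \operatorname{depth}(R).
```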
https://en.wikipedia.org/wiki/Immune%20receptor
An immune receptor (or immunologic receptor) is a receptor, usually on a cell membrane, which binds to a ligand (usually another protein, such as a cytokine) and causes a response in the immune system. Types The main receptors in the immune system are pattern recognition receptors (PRRs), Toll-like receptors (TLRs), killer activation receptors and killer inhibitory receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors. See also Antigen
https://en.wikipedia.org/wiki/Disodium%20ribonucleotides
Disodium 5'-ribonucleotides or I+G, E number E635, is a flavor enhancer which is synergistic with glutamates in creating the taste of umami. It is a mixture of disodium inosinate (IMP) and disodium guanylate (GMP) and is often used where a food already contains natural glutamates (as in meat extract) or added monosodium glutamate (MSG). It is primarily used in flavored noodles, snack foods, chips, crackers, sauces and fast foods. It is produced by combining the sodium salts of the natural compounds guanylic acid (E626) and inosinic acid (E630). A mixture composed of 98% monosodium glutamate and 2% E635 has four times the flavor-enhancing power of monosodium glutamate (MSG) alone. Side effects and safety Disodium 5'-ribonucleotides were first assessed in 1974 by the Joint FAO/WHO Expert Committee on Food Additives based on all available scientific literature. This assessment resulted in a new specification being prepared and an "ADI not specified" classification, which essentially means that this additive shows no toxicology at any level and acceptable daily limits do not need to be set. In 1993 the Joint FAO/WHO Expert Committee on Food Additives considered several more studies on this food additive and retained the "ADI not specified" safety classification. See also List of food additives Ribonucleotide
https://en.wikipedia.org/wiki/Costotransverse%20joint
The costotransverse joint is the joint formed between the facet of the tubercle of the rib and the adjacent transverse process of a thoracic vertebra. The costotransverse joint is a plane type of synovial joint which, under physiological conditions, allows only gliding movement. This joint is present in all but the eleventh and twelfth ribs. The first ten ribs have two joints in close proximity posteriorly: the costovertebral joints and the costotransverse joints. This arrangement restrains the motion of the ribs, allowing them to work in a parallel fashion during breathing. If a typical rib had only one joint posteriorly, the resultant swivel action would allow a rib to be non-parallel with respect to the neighboring ribs, making breathing very inefficient. Anatomy Ligaments The ligaments of the joint are: Costotransverse ligament Lateral costotransverse ligament Superior costotransverse ligament (anterior and posterior) Accessory ligament - typically present. It is medial to the superior costotransverse ligament, with the dorsal ramus of a thoracic spinal nerve and associated vessels intervening between the two. Its attachments are variable. The ligaments limit the movements of the joint to slight gliding. Innervation The intercostal nerves innervate the costotransverse joints. Therefore, therapeutic medial branch blocks are ineffectual.
https://en.wikipedia.org/wiki/R-type%20calcium%20channel
The R-type calcium channel is a type of voltage-dependent calcium channel. Like the others of this class, the α1 subunit forms the pore through which calcium enters the cell and determines most of the channel's properties. This α1 subunit is also known as the calcium channel, voltage-dependent, R type, alpha 1E subunit (CACNA1E) or Cav2.3 which in humans is encoded by the CACNA1E gene. They are strongly expressed in cortex, hippocampus, striatum, amygdala and interpeduncular nucleus. They are poorly understood, but like Q-type calcium channels, they appear to be present in cerebellar granule cells. They have a high threshold of activation and relatively slow kinetics.
https://en.wikipedia.org/wiki/Q-type%20calcium%20channel
The Q-type calcium channel is a type of voltage-dependent calcium channel. Like the others of this class, the α1 subunit is the one that determines most of the channel's properties. They are poorly understood, but like R-type calcium channels, they appear to be present in cerebellar granule cells. They have a high threshold of activation and relatively slow kinetics.
https://en.wikipedia.org/wiki/P-type%20calcium%20channel
The P-type calcium channel is a type of voltage-dependent calcium channel. Similar to many other high-voltage-gated calcium channels, the α1 subunit determines most of the channel's properties. The 'P' signifies cerebellar Purkinje cells, referring to the channel's initial site of discovery. P-type calcium channels play a similar role to the N-type calcium channel in neurotransmitter release at the presynaptic terminal and in neuronal integration in many neuronal types. History The calcium channel experiments that led to the discovery of P-type calcium channels were initially completed by Llinás and Sugimori in 1980; the channels were named "P-type" in 1989 after the mammalian Purkinje neurons in which they were discovered. Llinás and Sugimori were able to use an in vitro preparation to examine the ionic currents that account for Purkinje cells' electrophysiological properties. They found that there are calcium-dependent action potentials which rise slowly and fall quickly, then undergo hyperpolarization. The action potentials were voltage dependent, and the afterhyperpolarizing potentials were connected to the spike bursts located within the dendrites of the Purkinje cells. Without calcium flux in the Purkinje cells, action potentials fire sporadically at a high frequency. Basic features and structure P-type calcium channels are voltage-dependent calcium channels that are classified under the high-voltage-activated class, along with L-, N-, Q- and R-type channels. These channels require a strong depolarization in order to be activated. They are found at axon terminals, as well as in somatodendritic areas of neurons within the central and peripheral nervous system. P-type calcium channels are also critical to vesicle release, specifically of neurotransmitters and hormones, at synaptic terminals of excitatory and inhibitory synapses. Voltage-gated P-type calcium channels consist of a main pore-forming α1 subunit (which is more specifically referred to as CaV2.1), an α2δ subuni
https://en.wikipedia.org/wiki/N-type%20calcium%20channel
N-type calcium channels, also called Cav2.2 channels, are voltage-gated calcium channels that are localized primarily on the nerve terminals and dendrites, as well as neuroendocrine cells. The N-type calcium channel consists of several subunits: the primary subunit α1B and the auxiliary subunits α2δ and β. The α1B subunit forms the pore through which the calcium enters and helps to determine most of the channel's properties. These channels play an important role in neurotransmission during development. In the adult nervous system, N-type calcium channels are critically involved in the release of neurotransmitters, and in pain pathways. N-type calcium channels are the target of ziconotide, the drug prescribed to relieve intractable cancer pain. There are many known N-type calcium channel blockers that function to inhibit channel activity, although the most notable blockers are ω-conotoxins. Structure N-type calcium channels are categorized as high-threshold-activated channels and belong to the Cav2 gene family. The structure of the N-type calcium channel is very similar to that of other voltage-dependent channels. The most important part of the channel is the actual pore, formed by the α1B subunit, which is the site of entry of extracellular ions. The α1B subunit comprises as many as 2000 amino acid residues and forms the transmembrane, pore-containing structure. It is organized into six segments (S1-S6). S1, S2, S3, S5, and S6 are hydrophobic, while S4 serves as the voltage sensor. In addition, there is a membrane-associated loop between S5 and S6. The activity of the pore is modulated by four subunits: an intracellular β subunit, a transmembrane γ subunit, and a complex of the α2 and δ subunits. In addition to the α1B subunit encoded by the CACNA1B gene, the following auxiliary subunits are present in the N-type calcium channel: α2δ – encoded by either one of two genes CACNA2D1, CACNA2D2 β – encoded by either one of four genes CACN
https://en.wikipedia.org/wiki/L-type%20calcium%20channel
The L-type calcium channel (also known as the dihydropyridine channel, or DHP channel) is part of the high-voltage-activated family of voltage-dependent calcium channels. "L" stands for long-lasting, referring to the length of activation. This channel has four isoforms: Cav1.1, Cav1.2, Cav1.3, and Cav1.4. L-type calcium channels are responsible for the excitation-contraction coupling of skeletal, smooth, and cardiac muscle, and for aldosterone secretion in endocrine cells of the adrenal cortex. They are also found in neurons, and with the help of L-type calcium channels in endocrine cells, they regulate neurohormones and neurotransmitters. They have also been seen to play a role in gene expression, mRNA stability, neuronal survival, ischemia-induced axonal injury, synaptic efficacy, and both activation and deactivation of other ion channels. In cardiac myocytes, the L-type calcium channel passes inward Ca2+ current (ICaL) and triggers calcium release from the sarcoplasmic reticulum by activating ryanodine receptor 2 (RyR2) (calcium-induced calcium release). Phosphorylation of these channels increases their permeability to calcium and increases the contractility of their respective cardiac myocytes. L-type calcium channel blocker drugs are used as cardiac antiarrhythmics or antihypertensives, depending on whether the drugs have higher affinity for the heart (the phenylalkylamines, like verapamil), or for the blood vessels (the dihydropyridines, like nifedipine). In skeletal muscle, there is a very high concentration of L-type calcium channels, situated in the T-tubules. Muscle depolarization results in large gating currents, but anomalously low calcium flux, which is now explained by the very slow activation of the ionic currents. For this reason, little or no Ca2+ passes across the T-tubule membrane during a single action potential. History In 1953, Paul Fatt and Bernard Katz discovered voltage-gated calcium channels in crustacean muscle. The channels exhibited diff
https://en.wikipedia.org/wiki/Hormone%20antagonist
For the use of hormone antagonists in cancer, see hormonal therapy (oncology). A hormone antagonist is a specific type of receptor antagonist which acts upon hormone receptors. Such pharmaceutical drugs are used in antihormone therapy.
https://en.wikipedia.org/wiki/Bisulfite%20sequencing
Bisulfite sequencing (also known as bisulphite sequencing) is the use of bisulfite treatment of DNA before routine sequencing to determine the pattern of methylation. DNA methylation was the first discovered epigenetic mark, and remains the most studied. In animals it predominantly involves the addition of a methyl group to the carbon-5 position of cytosine residues of the dinucleotide CpG, and is implicated in repression of transcriptional activity. Treatment of DNA with bisulfite converts cytosine residues to uracil, but leaves 5-methylcytosine residues unaffected. Therefore, DNA that has been treated with bisulfite retains only methylated cytosines. Thus, bisulfite treatment introduces specific changes in the DNA sequence that depend on the methylation status of individual cytosine residues, yielding single-nucleotide resolution information about the methylation status of a segment of DNA. Various analyses can be performed on the altered sequence to retrieve this information. The objective of this analysis is therefore reduced to differentiating between the single-nucleotide changes (cytosine versus thymine) resulting from bisulfite conversion. Methods Bisulfite sequencing applies routine sequencing methods on bisulfite-treated genomic DNA to determine methylation status at CpG dinucleotides. Other non-sequencing strategies are also employed to interrogate the methylation at specific loci or at a genome-wide level. All strategies assume that bisulfite-induced conversion of unmethylated cytosines to uracil is complete, and this serves as the basis of all subsequent techniques. Ideally, the method used would determine the methylation status separately for each allele. Alternative methods to bisulfite sequencing include Combined Bisulphite Restriction Analysis and methylated DNA immunoprecipitation (MeDIP). Methodologies to analyze bisulfite-treated DNA are continuously being developed. To summarize these rapidly evolving methodologies, numero
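The conversion logic is easy to state in code. Below is a toy sketch (my own illustration, not a real analysis pipeline): unmethylated cytosines are converted to uracil and read out as thymine after PCR, while 5-methylcytosines survive as cytosine; the sequence and the set of methylated positions are invented for the example.

```python
def bisulfite_read(seq, methylated_positions):
    """Simulate the read observed after complete bisulfite conversion + PCR.

    Unmethylated C -> U (sequenced as T); 5-methyl-C is protected and stays C.
    methylated_positions: set of 0-based indices of methylated cytosines.
    """
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")   # converted: C -> U, amplified and read as T
        else:
            out.append(base)  # A, G, T, and protected (methylated) C
    return "".join(out)

genomic = "ACGTCGATCC"
# Hypothetical: only the CpG cytosine at index 4 is methylated.
print(bisulfite_read(genomic, {4}))  # -> ATGTCGATTT
```

Comparing the converted read against the reference then reveals methylation: positions that remain C were methylated, while positions where C became T were not.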
https://en.wikipedia.org/wiki/Biomolecular%20Object%20Network%20Databank
The Biomolecular Object Network Databank is a bioinformatics databank containing information on small molecule structures and interactions. The databank integrates a number of existing databases to provide a comprehensive overview of the information currently available for a given molecule. Background The Blueprint Initiative started as a research program in the lab of Dr. Christopher Hogue at the Samuel Lunenfeld Research Institute at Mount Sinai Hospital in Toronto. On December 14, 2005, Unleashed Informatics Limited acquired the commercial rights to The Blueprint Initiative intellectual property. This included rights to the protein interaction database BIND, the small molecule interaction database SMID, as well as the data warehouse SeqHound. Unleashed Informatics is a data management service provider and is overseeing the management and curation of The Blueprint Initiative under the guidance of Dr. Hogue. Construction BOND integrates the original Blueprint Initiative databases as well as other databases, such as GenBank, combined with many tools required to analyze these data. Annotation links for sequences, including taxon identifiers, redundant sequences, Gene Ontology descriptions, Online Mendelian Inheritance in Man identifiers, conserved domains, database cross-references, LocusLink identifiers and complete genomes are also available. BOND facilitates cross-database queries and is an open-access resource which integrates interaction and sequence data. Small Molecule Interaction Database (SMID) The Small Molecule Interaction Database is a database containing protein domain-small molecule interactions. It uses a domain-based approach to identify domain families, found in the Conserved Domain Database (CDD), which interact with a query small molecule. The CDD from NCBI amalgamates data from several different sources: Protein FAMilies (PFAM), Simple Modular Architecture Research Tool (SMART), Cluster of Orthologous Genes (COGs), and NCBI's own curated se
https://en.wikipedia.org/wiki/Beale%20code
In geography and demography, a Beale code is the Rural-Urban Continuum Coding system originally developed by David L. Brown and later popularized by Calvin Beale at the United States Department of Agriculture in 1975. The Beale code system now is used by many other countries, such as Canada.
https://en.wikipedia.org/wiki/Granin
Granin (chromogranin and secretogranin) is a protein family of regulated secretory proteins ubiquitously found in the cores of amine and peptide hormone and neurotransmitter dense-core secretory vesicles. Function Granins (chromogranins or secretogranins) are acidic proteins and are present in the secretory granules of a wide variety of endocrine and neuro-endocrine cells. The exact function(s) of these proteins are not yet settled, but there is evidence that granins function as pro-hormones, giving rise to an array of peptide fragments for which autocrine, paracrine, and endocrine activities have been demonstrated in vitro and in vivo. The intracellular biochemistry of granins includes binding of Ca2+, ATP and catecholamines (epinephrine, norepinephrine) within the hormone storage vesicle core. There is also evidence that CgA, and perhaps other granins, regulate the biogenesis of dense-core secretory vesicles and hormone sequestration in neuroendocrine cells. Structure Apart from their subcellular location and the abundance of acidic residues (Asp and Glu), these proteins do not share many structural similarities. Only one short region, located in the C-terminal section, is conserved in all these proteins. Chromogranins and secretogranins together share a C-terminal motif, whereas chromogranins A and B share a region of high similarity in their N-terminal section; this region includes two cysteine residues involved in a disulfide bond. There are considerable differences in amino acid composition between different animals. Commercial assays for measuring human CGA usually cannot be used for measuring CGA in samples from other species. Some specific parts of the molecule have a higher degree of amino acid homology, and methods where the antibodies are directed against specific epitopes can be used to measure samples from different animals. Region-specific assays measuring defined parts of CGA, CGB and SG2 can be used for measurements in samples from cats an
https://en.wikipedia.org/wiki/Fra%C5%88kov%C3%A1%E2%80%93Helly%20selection%20theorem
In mathematics, the Fraňková–Helly selection theorem is a generalisation of Helly's selection theorem for functions of bounded variation to the case of regulated functions. It was proved in 1991 by the Czech mathematician Dana Fraňková. Background Let X be a separable Hilbert space, and let BV([0, T]; X) denote the normed vector space of all functions f : [0, T] → X with finite total variation over the interval [0, T], equipped with the total variation norm. It is well known that BV([0, T]; X) satisfies the compactness theorem known as Helly's selection theorem: given any sequence of functions (fn)n∈N in BV([0, T]; X) that is uniformly bounded in the total variation norm, there exists a subsequence and a limit function f ∈ BV([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, λ(fn(k)(t)) → λ(f(t)) as k → ∞. Consider now the Banach space Reg([0, T]; X) of all regulated functions f : [0, T] → X, equipped with the supremum norm. Helly's theorem does not hold for the space Reg([0, T]; X). One may ask, however, if a weaker selection theorem is true, and the Fraňková–Helly selection theorem is such a result. Statement of the Fraňková–Helly selection theorem As before, let X be a separable Hilbert space and let Reg([0, T]; X) denote the space of regulated functions f : [0, T] → X, equipped with the supremum norm. Let (fn)n∈N be a sequence in Reg([0, T]; X) satisfying the following condition: for every ε > 0, there exists some Lε > 0 so that each fn may be approximated by a un ∈ BV([0, T]; X) satisfying ||fn − un||∞ ≤ ε and Var(un) ≤ Lε, where ||·||∞ denotes the supremum norm, |·| denotes the norm in X, and Var(u) denotes the variation of u, defined to be the supremum of Σj |u(tj+1) − u(tj)| over all partitions 0 = t0 < t1 < ... < tm = T of [0, T]. Then there exists a subsequence and a limit function f ∈ Reg([0, T]; X) such that fn(k)(t) converges weakly in X to f(t) for every t ∈ [0, T]. That is, for every continuous linear functional λ ∈ X*, λ(fn(k)(t)) → λ(f(t)) as k → ∞.
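In display form, the hypothesis and conclusion read (a reconstruction from the definitions above, not a verbatim quotation of the original statement):

```latex
\forall \varepsilon > 0\ \exists L_\varepsilon > 0\ \forall n\ \exists u_n \in BV([0,T];X):\quad
\|f_n - u_n\|_\infty \le \varepsilon,\qquad \operatorname{Var}(u_n) \le L_\varepsilon;
```

then there is a subsequence (f_{n(k)}) and a limit f ∈ Reg([0, T]; X) with

```latex
\lambda\bigl(f_{n(k)}(t)\bigr) \;\xrightarrow{\;k\to\infty\;}\; \lambda\bigl(f(t)\bigr)
\qquad \text{for all } \lambda \in X^{*},\ t \in [0,T].
```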
https://en.wikipedia.org/wiki/Regulated%20function
In mathematics, a regulated function, or ruled function, is a certain kind of well-behaved function of a single real variable. Regulated functions arise as a class of integrable functions, and have several equivalent characterisations. Regulated functions were introduced by Nicolas Bourbaki in 1949, in their book "Livre IV: Fonctions d'une variable réelle". Definition Let X be a Banach space with norm ||·||X. A function f : [0, T] → X is said to be a regulated function if one (and hence both) of the following two equivalent conditions holds true: for every t in the interval [0, T], both the left and right limits f(t−) and f(t+) exist in X (apart from, obviously, f(0−) and f(T+)); there exists a sequence of step functions φn : [0, T] → X converging uniformly to f (i.e. with respect to the supremum norm ||·||∞). It requires a little work to show that these two conditions are equivalent. However, it is relatively easy to see that the second condition may be re-stated in the following equivalent ways: for every δ > 0, there is some step function φδ : [0, T] → X such that ||f − φδ||∞ < δ; f lies in the closure of the space Step([0, T]; X) of all step functions from [0, T] into X (taking closure with respect to the supremum norm in the space B([0, T]; X) of all bounded functions from [0, T] into X). Properties of regulated functions Let Reg([0, T]; X) denote the set of all regulated functions f : [0, T] → X. Sums and scalar multiples of regulated functions are again regulated functions. In other words, Reg([0, T]; X) is a vector space over the same field K as the space X; typically, K will be the real or complex numbers. If X is equipped with an operation of multiplication, then products of regulated functions are again regulated functions. In other words, if X is a K-algebra, then so is Reg([0, T]; X). The supremum norm is a norm on Reg([0, T]; X), and Reg([0, T]; X) is a topological vector space with respect to the topology induced by the supremum norm. As noted
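The two defining conditions, side by side in symbols:

```latex
\text{(i)}\quad \forall t \in [0,T]:\ f(t-),\,f(t+)\ \text{exist in } X;
\qquad
\text{(ii)}\quad \exists\,(\varphi_n) \subseteq \operatorname{Step}([0,T];X):\ \|f - \varphi_n\|_\infty \xrightarrow{\,n\to\infty\,} 0.
```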
https://en.wikipedia.org/wiki/Divergent%20geometric%20series
In mathematics, an infinite geometric series of the form a + ar + ar² + ar³ + · · · is divergent if and only if |r| ≥ 1. Methods for summation of divergent series are sometimes useful, and usually evaluate divergent geometric series to a sum that agrees with the formula for the convergent case, a/(1 − r). This is true of any summation method that possesses the properties of regularity, linearity, and stability. Examples In increasing order of difficulty to sum: 1 − 1 + 1 − 1 + · · ·, whose common ratio is −1 1 − 2 + 4 − 8 + · · ·, whose common ratio is −2 1 + 2 + 4 + 8 + · · ·, whose common ratio is 2 1 + 1 + 1 + 1 + · · ·, whose common ratio is 1. Motivation for study It is useful to figure out which summation methods produce the geometric series formula for which common ratios. One application for this information is the so-called Borel-Okada principle: If a regular summation method sums Σ z^n to 1/(1 − z) for all z in a subset S of the complex plane, given certain restrictions on S, then the method also gives the analytic continuation of any other function f on the intersection of S with the Mittag-Leffler star for f. Summability by region Open unit disk Ordinary summation succeeds only for common ratios |z| < 1. Closed unit disk Cesàro summation Abel summation Larger disks Euler summation Half-plane The series is Borel summable for every z with real part < 1. Any such series is also summable by the generalized Euler method (E, a) for appropriate a. Shadowed plane Certain moment constant methods besides Borel summation can sum the geometric series on the entire Mittag-Leffler star of the function 1/(1 − z), that is, for all z except the ray z ≥ 1.
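As a worked instance, Abel summation (one of the regular, linear, stable methods just mentioned) assigns the first example the expected value:

```latex
\sum_{n=0}^{\infty} (-1)^n \;\stackrel{\text{Abel}}{=}\;
\lim_{x \to 1^-} \sum_{n=0}^{\infty} (-x)^n
\;=\; \lim_{x \to 1^-} \frac{1}{1+x}
\;=\; \frac{1}{2}
\;=\; \left.\frac{a}{1-r}\right|_{a=1,\,r=-1}.
```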
https://en.wikipedia.org/wiki/Holonomic%20constraints
In classical mechanics, holonomic constraints are relations between the position variables (and possibly time) that can be expressed in the following form: f(u1, u2, ..., un, t) = 0, where u1, u2, ..., un are the n generalized coordinates that describe the system (in unconstrained configuration space). For example, the motion of a particle constrained to lie on the surface of a sphere is subject to a holonomic constraint, but if the particle is able to fall off the sphere under the influence of gravity, the constraint becomes non-holonomic. For the first case, the holonomic constraint may be given by the equation r² − a² = 0, where r is the distance from the centre of a sphere of radius a, whereas the second non-holonomic case may be given by r² − a² ≥ 0. Velocity-dependent constraints (also called semi-holonomic constraints) such as f(u1, ..., un, du1/dt, ..., dun/dt, t) = 0 are not usually holonomic. Holonomic system In classical mechanics a system may be defined as holonomic if all constraints of the system are holonomic. For a constraint to be holonomic it must be expressible as a function f(u1, u2, ..., un, t) = 0; i.e. a holonomic constraint depends only on the coordinates uj and possibly time t. It does not depend on the velocities or any higher-order derivative with respect to t. A constraint that cannot be expressed in the form shown above is a nonholonomic constraint. Introduction As described above, a holonomic system is (simply speaking) a system in which one can deduce the state of a system by knowing only the change of positions of the components of the system over time, but not needing to know the velocity or in what order the components moved relative to each other. In contrast, a nonholonomic system is often a system where the velocities of the components over time must be known to be able to determine the change of state of the system, or a system where a moving part is not able to be bound to a constraint surface, real or imaginary. Examples of holonomic systems are gantry cranes, pendulums, and robotic arms. Examples of nonholonomic systems are Segways, unicycles, and automobiles. Ter
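A concrete worked case (a planar pendulum on a rigid rod of length L; my illustration, not the article's): the single holonomic constraint eliminates one of the two Cartesian coordinates, leaving the angle θ as the only generalized coordinate.

```latex
f(x, y) \;=\; x^2 + y^2 - L^2 \;=\; 0
\quad\Longrightarrow\quad
x = L\sin\theta,\qquad y = -L\cos\theta .
```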
https://en.wikipedia.org/wiki/Field%20dominance
In video engineering, field dominance refers to the choice of which field of an interlaced video signal is chosen as the point at which video edits or switches occur. There are two main choices for field dominance: odd or even. With odd field dominance the edit or switch occurs at the start of the odd field. With even field dominance the edit or switch occurs at the start of the even field. (Some equipment, such as vision mixers or switchers, allows the field dominance to be set to 'none', which means the switch will occur on the next field boundary after the switch button has been pressed.) Interlacing divides the frame into two fields, each containing half the number of lines. Each field is scanned in 1/60 second under the 525-line system (or 480i – often incorrectly referred to as NTSC) or 1/50 of a second under the 625-line system (or 576i – often incorrectly referred to as PAL). With interlaced systems there are an odd number of lines in each frame. This means that there is a half-line offset between the fields; therefore the lines in the second field will be positionally interleaved with the lines in the first field. The lines are numbered in the order in which they are scanned (so it is incorrect to talk of the 'odd-numbered lines' and the 'even-numbered lines' when referring to interlaced video – but see PsF Line Numbers). In 525/60 systems, by convention, the first field in the frame is considered the even field. In 625/50 systems, by convention, the first field in the frame is considered the odd field. Selecting a consistent field dominance in vision switching and linear editing systems will maintain color framing synchronization. Re-editing old video material already edited with a different field dominance convention can be problematic, as it can lead to "flash fields" when old and new edits are made too close together. The term field dominance is often incorrectly used to refer to field order, particularly when referring to a field order error such as can occu
https://en.wikipedia.org/wiki/Euler%20summation
In the mathematics of convergent and divergent series, Euler summation is a summation method. That is, it is a method for assigning a value to a series, different from the conventional method of taking limits of partial sums. Given a series Σan, if its Euler transform converges to a sum, then that sum is called the Euler sum of the original series. As well as being used to define values for divergent series, Euler summation can be used to speed the convergence of series. Euler summation can be generalized into a family of methods denoted (E, q), where q ≥ 0. The (E, 1) sum is the ordinary Euler sum. All of these methods are strictly weaker than Borel summation; for q > 0 they are incomparable with Abel summation. Definition For some value y we may define the Euler sum (if it converges for that value of y) corresponding to a particular formal summation as: (E_y) Σ_{j≥0} aj := Σ_{i=0}^∞ (1 + y)^{−(i+1)} Σ_{j=0}^{i} C(i, j) y^{j+1} aj. If all the formal sums actually converge, the Euler sum will equal the left-hand side. However, using Euler summation can accelerate the convergence (this is especially useful for alternating series); sometimes it can also give a useful meaning to divergent sums. To justify the approach, notice that for the interchanged sum, Euler's summation reduces to the initial series, because Σ_{i≥j} C(i, j) y^{j+1} (1 + y)^{−(i+1)} = 1 for every j. This method itself cannot be improved by iterated application, as composing two Euler transforms yields another transform of the same family. Examples Using y = 1 for the formal sum Σ_{j≥0} (−1)^j Pk(j), we get Σ_{i=0}^{k} 2^{−(i+1)} Σ_{j=0}^{i} C(i, j) (−1)^j Pk(j) if Pk is a polynomial of degree k. Note that the inner sum would be zero for i > k, so in this case Euler summation reduces an infinite series to a finite sum. The particular choice Pk(j) = j^k provides an explicit representation of the Bernoulli numbers, since ζ(−k) = −B_{k+1}/(k+1) (the Riemann zeta function). Indeed, the formal sum in this case diverges since k is positive, but applying Euler summation to the zeta function (or rather, to the related Dirichlet eta function) yields a globally convergent series (cf. Globally convergent series) which is of closed form. With an appropriate choice of y (i.e. equal to or close to −) this series converges to . See also Binomial transform
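For alternating series Σ(−1)^n a_n, the (E, 1) transform takes the concrete form S = Σ_k (−1)^k (Δ^k a)(0)/2^(k+1), with Δ the forward difference operator. A small sketch of that special case (the function name and the ln 2 test series are my own, not from the article):

```python
import math

def euler_sum_alternating(a, depth=20):
    """Euler (E, 1) sum of the alternating series sum_{n>=0} (-1)^n a[n].

    Implements S = sum_{k>=0} (-1)^k (Delta^k a)(0) / 2^(k+1), where Delta
    is the forward difference Delta a[n] = a[n+1] - a[n]. Needs len(a) > depth.
    """
    row = list(a)        # row holds Delta^k a, starting with k = 0
    s = 0.0
    for k in range(depth):
        s += (-1) ** k * row[0] / 2 ** (k + 1)
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return s

# 1 - 1/2 + 1/3 - ... = ln 2: twenty transformed terms give ~7 correct digits,
# whereas twenty raw partial sums are accurate to only about two digits.
a = [1.0 / (n + 1) for n in range(40)]
print(euler_sum_alternating(a), math.log(2))
```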
https://en.wikipedia.org/wiki/Wait%20%28system%20call%29
In computer operating systems, a process (or task) may wait for another process to complete its execution. In most systems, a parent process can create an independently executing child process. The parent process may then issue a wait system call, which suspends the execution of the parent process while the child executes. When the child process terminates, it returns an exit status to the operating system, which is then returned to the waiting parent process. The parent process then resumes execution. Modern operating systems also provide system calls that allow a process's thread to create other threads and wait for them to terminate ("join" them) in a similar fashion. An operating system may provide variations of the wait call that allow a process to wait for any of its child processes to exit, or to wait for a single specific child process (identified by its process ID) to exit. Some operating systems issue a signal (SIGCHLD) to the parent process when a child process terminates, notifying the parent process and allowing it to retrieve the child process's exit status. The exit status returned by a child process typically indicates whether the process terminated normally or abnormally. For normal termination, this status also includes the exit code (usually an integer value) that the process returned to the system. During the first 20 years of UNIX, only the low 8 bits of the exit code were available to the waiting parent. In 1989, with SVR4, a new call, waitid, was introduced that returns all bits from the exit call in the si_status member of a structure called siginfo_t. waitid has been a mandatory part of the POSIX standard since 2001. Zombies and orphans When a child process terminates, it becomes a zombie process, and continues to exist as an entry in the system process table even though it is no longer an actively executing program. Under normal operation it will typically be immediately waited on by its parent, and then reaped by the syste
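A minimal fork-and-wait sketch in Python (Unix-only, since os.fork is not available on Windows; the exit code 7 is an arbitrary choice for the example):

```python
import os
import sys

pid = os.fork()                          # create an independently executing child
if pid == 0:
    # Child: do some work, then terminate normally with an exit code.
    print("child: running as pid", os.getpid())
    sys.exit(7)
else:
    # Parent: suspend until this specific child (identified by pid) terminates.
    child_pid, status = os.waitpid(pid, 0)
    if os.WIFEXITED(status):             # normal termination?
        print("parent: child", child_pid, "exited with code",
              os.WEXITSTATUS(status))    # recovers the exit code: 7
```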
https://en.wikipedia.org/wiki/Panmixia
Panmixia (or panmixis) means uniform random fertilization. A panmictic population is one in which all potential parents may contribute equally to the gamete pool, and these gametes are uniformly distributed within the gamete population (gamodeme). This assumes that there are no hybridising restrictions within the parental population: neither genetic, cytogenetic nor behavioural, and neither spatial nor temporal (see also quantitative genetics for further discussion). Therefore, all gamete recombination (fertilization) is uniformly possible. Both the Wahlund effect and the Hardy–Weinberg equilibrium assume that the overall population is panmictic. In genetics and heredity, random mating usually implies the hybridising (mating) of individuals regardless of any spatial, physical, genetic, temporal or social preference. That is, the mating between two organisms is not influenced by any environmental or hereditary interaction. Hence, potential mates have an equal chance of being contributors to the fertilizing gamete pool. If there is no random sub-sampling of gametes involved in the fertilization cohort, panmixia has occurred. Such uniform random mating is distinct from lack of natural selection: in viability selection for instance, selection occurs before mating. Description In simple terms, panmixia (or panmicticism) is the ability of individuals in a population to interbreed without restrictions; individuals are able to move about freely within their habitat, possibly over a range of hundreds to thousands of miles, and thus breed with other members of the population. To signify the importance of this, imagine several different finite populations of the same species (for example: a grazing herbivore), isolated from each other by some physical characteristic of the environment (dense forest areas separating grazing lands). As time progresses, natural selection and genetic drift will slowly move each population toward genetic differentiation that would
https://en.wikipedia.org/wiki/Hazardous%20Inflight%20Weather%20Advisory%20Service
Hazardous Inflight Weather Advisory Service (HIWAS) was a continuous broadcast of hazardous weather information transmitted over selected VORs. This hazardous weather included AIRMETs, SIGMETs, Convective SIGMETs, Center Weather Advisories (CWAs), Severe Alert Weather Watches (AWWs), and urgent PIREPs. The presence of HIWAS information on a VOR was indicated on a sectional or terminal area chart by an "H" in the upper-right corner of the box surrounding the NAVAID frequency. On 8 January 2020 the Federal Aviation Administration discontinued the HIWAS service in favor of Flight Information Services-Broadcast (FIS-B) and other modern means of accessing in-flight weather data.
https://en.wikipedia.org/wiki/Talaria%20projector
Talaria was the brand name of a large-venue video projector from General Electric introduced in 1983. Light from a xenon arc lamp was modulated by a light valve consisting of a rotating glass disc that was continuously re-coated with a viscous oil. An electron beam similar to the one in a cathode ray tube traced a raster on the surface of the coated glass, deforming the surface of the oil. Where the oil was undisturbed, the light would be reflected into a light trap; the raster traced into the oil formed a diffraction grating. The basic unit was monochrome (PJ7000 line). Color display was accomplished in one of two ways. The single-lens color projectors (PJ5000 line) used dichroic filters to separate the white light of the xenon bulb into two channels, green and magenta. RGB color separation and processing was obtained by using vertical wobbulation of the electron beam on the oil film to modulate the green channel, while sawtooth modulation was added to the horizontal sweep to separate and modulate the red and blue channels. The optical system used in the Talaria line is a Schlieren optic like an Eidophor's, but the color extraction is much more complex. Alternatively, two units (MLV) or three units (3LV) were stacked one atop the other, each devoted to a single color. In early models (PJ5000), the light source was a 650-watt xenon bulb (sealed beam) similar to the units in modern 35mm film projectors, and produced 250 lumens at a 75:1 contrast ratio. The later 3LV model produced as much as 3500 lumens at a 250:1 contrast ratio. The later LV series had an optional "Multiple Personality" (MP) module that would allow the projector to display various resolutions and scan rates produced by computers of the time. It could produce an 8,000-lumen image onto a 15-foot by 20-foot screen from 64 feet away. See also Comparison of display technology