source stringlengths 31 227 | text stringlengths 9 2k |
|---|---|
https://en.wikipedia.org/wiki/Alternating%20multilinear%20map | In mathematics, more specifically in multilinear algebra, an alternating multilinear map is a multilinear map with all arguments belonging to the same vector space (for example, a bilinear form or a multilinear form) that is zero whenever any pair of arguments is equal. More generally, the vector space may be a module over a commutative ring.
The notion of alternatization (or alternatisation) is used to derive an alternating multilinear map from any multilinear map with all arguments belonging to the same space.
Definition
Let R be a commutative ring and M, N be modules over R. A multilinear map of the form f : Mⁿ → N is said to be alternating if it satisfies the following equivalent conditions:
whenever there exists 1 ≤ i ≤ n − 1 such that x_i = x_{i+1}, then f(x₁, …, xₙ) = 0;
whenever there exist 1 ≤ i < j ≤ n such that x_i = x_j, then f(x₁, …, xₙ) = 0.
Vector spaces
Let V, W be vector spaces over the same field. Then a multilinear map of the form f : Vⁿ → W is alternating if and only if it satisfies the following condition:
if x₁, …, xₙ are linearly dependent, then f(x₁, …, xₙ) = 0.
Example
In a Lie algebra, the Lie bracket is an alternating bilinear map.
The determinant of a matrix is a multilinear alternating map of the rows or columns of the matrix.
Properties
If any component x_i of an alternating multilinear map is replaced by x_i + c·x_j for any j ≠ i and c in the base ring, then the value of that map is not changed.
Every alternating multilinear map is antisymmetric, meaning that
f(…, x_i, …, x_j, …) = −f(…, x_j, …, x_i, …),
or equivalently,
f(x_{σ(1)}, …, x_{σ(n)}) = sgn(σ) f(x₁, …, xₙ) for every σ ∈ Sₙ,
where Sₙ denotes the permutation group of degree n and sgn(σ) is the sign of σ.
If 2 is a unit in the base ring, then every antisymmetric n-multilinear form is alternating.
Alternatization
Given a multilinear map of the form f : Mⁿ → N, the alternating multilinear map g : Mⁿ → N defined by
g(x₁, …, xₙ) := Σ_{σ ∈ Sₙ} sgn(σ) f(x_{σ(1)}, …, x_{σ(n)})
is said to be the alternatization of f.
Properties
The alternatization of an n-multilinear alternating map is n! times itself.
The alternatization of a symmetric map is zero.
The alternatization of a bilinear map is bilinear. Most notably, the alternatization of any cocycle is bilinear. This fact plays a crucial role in identifying the second cohomology |
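The alternatization and its properties are easy to verify computationally for small n. The sketch below is illustrative Python (the helper names `sign` and `alternatize` are not from the source); it alternatizes a bilinear map and checks that the result is alternating and antisymmetric:

```python
from itertools import permutations

def sign(perm):
    # Sign of a permutation given as a tuple of the indices 0..n-1.
    perm, s = list(perm), 1
    for i in range(len(perm)):
        while perm[i] != i:
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            s = -s
    return s

def alternatize(f, n):
    # g(x1, ..., xn) = sum over sigma in S_n of sgn(sigma) * f(x_sigma(1), ..., x_sigma(n))
    def g(*xs):
        return sum(sign(p) * f(*(xs[i] for i in p)) for p in permutations(range(n)))
    return g

# A generic (non-symmetric) bilinear form on pairs of integer vectors:
f = lambda x, y: 2 * x[0] * y[0] + 3 * x[0] * y[1] + 5 * x[1] * y[0]
g = alternatize(f, 2)
u, v = (1, 2), (3, 4)
# g is alternating: g(u, u) == 0, and antisymmetric: g(u, v) == -g(v, u).
```

Applying `alternatize` to a symmetric bilinear map returns the zero map, matching the property stated above.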
https://en.wikipedia.org/wiki/Cellular%20Potts%20model | In computational biology, a Cellular Potts model (CPM, also known as the Glazier-Graner-Hogeweg model) is a computational model of cells and tissues. It is used to simulate individual and collective cell behavior, tissue morphogenesis and cancer development. CPM describes cells as deformable objects with a certain volume, that can adhere to each other and to the medium in which they live. The formalism can be extended to include cell behaviours such as cell migration, growth and division, and cell signalling. The first CPM was proposed for the simulation of cell sorting by François Graner and James Glazier as a modification of a large-Q Potts model. CPM was then popularized by Paulien Hogeweg for studying morphogenesis.
Although the model was developed to describe biological cells, it can also be used to model individual parts of a biological cell, or even regions of fluid.
Model description
The CPM consists of a rectangular Euclidean lattice, where each cell is a subset of lattice sites sharing the same cell ID (analogous to spin in Potts models in physics). Lattice sites that are not occupied by cells are the medium. The dynamics of the model are governed by an energy function: the Hamiltonian which describes the energy of a particular configuration of cells in the lattice. In a basic CPM, this energy results from adhesion between cells and resistance of cells to volume changes. The algorithm for updating CPM minimizes this energy.
In order to evolve the model Metropolis-style updates are performed, that is:
choose a random lattice site i
choose a random neighboring lattice site i′ to copy its ID into i
calculate the difference in energy (ΔH) between the original and the proposed new configuration
accept or reject this copy event based on the change in energy ΔH, as follows:
if the new energy is lower, always accept the copy;
if the new energy is higher, accept the copy with probability e^(−ΔH/T) (the Boltzmann temperature T determines the likelihood of energeticall |
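The update loop described above can be sketched as follows. This is a minimal illustration that keeps only the adhesion term of the Hamiltonian and omits the volume constraint; all names are chosen for the example:

```python
import math
import random

def metropolis_step(lattice, size, adhesion, T):
    """One proposed copy event on a 2D cellular Potts lattice (sketch).

    lattice: dict mapping (x, y) -> cell ID (0 = medium).
    adhesion(a, b): contact energy between cell IDs a and b.
    T: Boltzmann temperature. Volume constraints are omitted for brevity.
    """
    def energy_at(site, sid):
        # Adhesion energy contributed by the boundary of `site` if it held ID `sid`.
        x, y = site
        e = 0.0
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = ((x + dx) % size, (y + dy) % size)   # periodic boundary
            if lattice[nb] != sid:
                e += adhesion(sid, lattice[nb])
        return e

    site = (random.randrange(size), random.randrange(size))
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    neighbor = ((site[0] + dx) % size, (site[1] + dy) % size)
    new_id = lattice[neighbor]
    if new_id == lattice[site]:
        return False                                   # nothing to copy
    dH = energy_at(site, new_id) - energy_at(site, lattice[site])
    # Metropolis rule: always accept if dH <= 0, else with Boltzmann probability.
    if dH <= 0 or random.random() < math.exp(-dH / T):
        lattice[site] = new_id
        return True
    return False
```

In a full CPM the energy difference would also include the volume-elasticity term, and one Monte Carlo step would repeat this copy attempt once per lattice site.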
https://en.wikipedia.org/wiki/Miscanthus%20%C3%97%20giganteus | {{taxobox
|name = Miscanthus × giganteus
|image = Miscanthus Bestand.JPG
|regnum = Plantae
|unranked_divisio = Angiosperms
|unranked_classis = Monocots
|unranked_ordo = Commelinids
|ordo = Poales
|familia = Poaceae
|subfamilia = Panicoideae
|genus = Miscanthus
|species = M. × giganteus
|binomial = Miscanthus × giganteus
|binomial_authority = J.M.Greef, Deuter ex Hodk., Renvoize 2001
|synonyms_ref=
|synonyms =
Miscanthus × changii Y.N.Lee
Miscanthus × latissimus Y.N.Lee
Miscanthus × longiberbis (Hack.) Nakai
Miscanthus × longiberbis var. changii (Y.N.Lee) Ibaragi & H.Ohashi
Miscanthus × longiberbis f. ogiformis (Honda) Ibaragi
Miscanthus matsumurae var. longiberbis Hack.
Miscanthus × ogiformis Honda
Miscanthus oligostachyus subsp. longiberbis (Hack.) T.Koyama
Miscanthus sacchariflorus var. brevibarbis (Honda) Adati
Miscanthus sinensis 'Giganteus'
}}
Miscanthus × giganteus, also known as the giant miscanthus, is a sterile hybrid of Miscanthus sinensis and Miscanthus sacchariflorus. It is a perennial grass with bamboo-like stems that can grow to heights of 3– in one season (from the third season onwards). Just like Pennisetum purpureum, Arundo donax and Saccharum ravennae, it is also called elephant grass.
The perennial nature of Miscanthus × giganteus, its ability to grow on marginal land, its water efficiency, non-invasiveness, low fertilizer needs, significant carbon sequestration and high yield have sparked significant interest among researchers, with some arguing that it has "ideal" energy crop properties. Some argue that it can provide negative emissions, while others highlight its water-cleaning and soil-enhancing qualities. There are practical and economic challenges related to its use in the existing, fossil-based combustion infrastructure, however. Torrefaction and other fuel-upgrading techniques are being explored as countermeasures to this problem.
Use areas
Miscanthus × giganteus is mainly used as raw material for solid biofuels. It can be burned direct |
https://en.wikipedia.org/wiki/Mason%20equation | The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet; it is due to the meteorologist B. J. Mason. The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer (and the energy of heating up the drop, but for a cloud-sized drop this last term is usually small).
Equation
In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign and so this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than that expected if the drop were not warmed by the latent heat.
Thus if the drop has a size r, the inward mass flow rate is given by
dM/dt = 4πrD(ρ∞ − ρw), with ρ∞ and ρw the vapour densities far from the drop and at the drop surface,
and the sensible heat flux by
dQ/dt = 4πrK(Tw − T∞), with Tw and T∞ the temperatures at the drop surface and far from the drop,
and the final expression for the growth rate is
r (dr/dt) = (S − 1) / [ (L/(RT) − 1)·(Lρl)/(KT) + (ρl·R·T)/(D·es(T)) ],
with ρl the density of liquid water and es(T) the saturated vapour pressure,
where
S is the supersaturation far from the drop
L is the latent heat
K is the vapour thermal conductivity
D is the binary diffusion coefficient
R is the gas constant |
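As a rough illustration, the growth-rate expression can be evaluated numerically. The constants below are assumed textbook-scale values for water drops near 0 °C, not figures from the source:

```python
def mason_growth_rate(r, S, T, L=2.5e6, K=0.024, D=2.2e-5,
                      R_v=461.5, rho_l=1000.0, e_s=611.2):
    """Droplet radius growth rate dr/dt (m/s) from the Mason equation (sketch).

    Assumed illustrative constants near 0 degrees C:
    L latent heat (J/kg), K thermal conductivity of air (W/m/K),
    D vapour diffusivity (m^2/s), R_v specific gas constant of water
    vapour (J/kg/K), rho_l liquid water density (kg/m^3),
    e_s saturation vapour pressure (Pa).
    """
    F_k = (L / (R_v * T) - 1.0) * (L * rho_l) / (K * T)   # heat-conduction term
    F_d = (rho_l * R_v * T) / (D * e_s)                   # vapour-diffusion term
    return (S - 1.0) / (r * (F_k + F_d))

# A 10-micron drop at 0.5 % supersaturation grows on the order of
# tens of nanometres per second.
rate = mason_growth_rate(r=10e-6, S=1.005, T=273.15)
```

Note the 1/r dependence: small drops grow fastest, which is why cloud droplet populations tend to narrow in size as they grow by condensation.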
https://en.wikipedia.org/wiki/Cotlar%E2%80%93Stein%20lemma | In mathematics, in the field of functional analysis, the Cotlar–Stein almost orthogonality lemma is named after mathematicians Mischa Cotlar
and Elias Stein. It may be used to obtain information on the operator norm of an operator acting from one Hilbert space into another
when the operator can be decomposed into almost orthogonal pieces.
The original version of this lemma
(for self-adjoint and mutually commuting operators)
was proved by Mischa Cotlar in 1955 and allowed him to conclude that the Hilbert transform
is a continuous linear operator in L²
without using the Fourier transform.
A more general version was proved by Elias Stein.
Cotlar–Stein almost orthogonality lemma
Let E and F be two Hilbert spaces.
Consider a family of operators
T_j, j ≥ 1,
with each T_j a bounded linear operator from E to F.
Denote
a_{jk} = ‖T_j T_k*‖,  b_{jk} = ‖T_j* T_k‖.
The family of operators T_j, j ≥ 1, is almost orthogonal if
A = sup_j Σ_k √(a_{jk}) < ∞  and  B = sup_j Σ_k √(b_{jk}) < ∞.
The Cotlar–Stein lemma states that if the T_j are almost orthogonal, then the series
Σ_j T_j
converges in the strong operator topology, and that
‖Σ_j T_j‖ ≤ √(AB).
Proof
If R₁, …, Rₙ is a finite collection of bounded operators, then for every vector v,
Σ_{i,j} |(R_i v, R_j v)| ≤ (max_i Σ_j ‖R_i* R_j‖^{1/2}) (max_i Σ_j ‖R_i R_j*‖^{1/2}) ‖v‖².
So under the hypotheses of the lemma,
Σ_{i,j} |(T_i v, T_j v)| ≤ AB‖v‖².
It follows that
‖Σ_{i=1}^{n} T_i v‖² ≤ AB‖v‖²,
and that
‖Σ_{j=m}^{n} T_j v‖² ≤ Σ_{i,j≥m} |(T_i v, T_j v)|.
Hence the partial sums
s_n = Σ_{j=1}^{n} T_j v
form a Cauchy sequence.
The sum is therefore absolutely convergent with limit satisfying the stated inequality.
To prove the inequality above, set
S = Σ_{i,j} a_{ij} R_j* R_i
with |a_{ij}| ≤ 1 chosen so that
(Sv, v) = Σ_{i,j} |(R_i v, R_j v)|.
Then
Σ_{i,j} |(R_i v, R_j v)| = (Sv, v) ≤ ‖S‖‖v‖².
Hence, writing ‖S‖^{2m} = ‖(S*S)^m‖, expanding (S*S)^m into a sum of products of the operators R_i and R_j*, and bounding each product by the geometric mean of its two natural groupings,
‖S‖^{2m} ≤ n [(max_i Σ_j ‖R_i* R_j‖^{1/2})(max_i Σ_j ‖R_i R_j*‖^{1/2})]^{2m}.
Taking 2mth roots and letting m tend to ∞ removes the factor n^{1/2m},
which immediately implies the inequality.
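For a finite family of matrices the lemma's bound can be checked numerically. The sketch below is illustrative (the function name is invented); it computes ‖Σ T_j‖ and √(AB), using NumPy, for a family of orthogonal projections:

```python
import numpy as np

def cotlar_stein_bound(ops):
    """Compare ||sum T_j|| with the Cotlar-Stein bound sqrt(A*B) for matrices.

    A = max_i sum_j ||T_i* T_j||^(1/2), B = max_i sum_j ||T_i T_j*||^(1/2),
    with ||.|| the operator (spectral) norm.
    """
    norm = lambda M: np.linalg.norm(M, 2)
    A = max(sum(np.sqrt(norm(Ti.conj().T @ Tj)) for Tj in ops) for Ti in ops)
    B = max(sum(np.sqrt(norm(Ti @ Tj.conj().T)) for Tj in ops) for Ti in ops)
    return norm(sum(ops)), float(np.sqrt(A * B))

# Projections onto distinct coordinate axes: T_i* T_j = T_i T_j* = 0 for i != j,
# so A = B = 1, while the sum of the projections is the identity.
ops = [np.diag([1.0 if i == k else 0.0 for i in range(4)]) for k in range(4)]
total, bound = cotlar_stein_bound(ops)
# total == 1.0 and bound == 1.0: the inequality holds with equality here.
```

This family is exactly orthogonal rather than merely almost orthogonal, so it saturates the bound; perturbing the projections slightly leaves the inequality strict.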
Generalization
There is a generalization of the Cotlar–Stein lemma with sums replaced by integrals. Let X be a locally compact space and μ a Borel measure on X. Let T(x) be a map from X into bounded operators from E to F which is uniformly bounded and continuous in the strong operator topology. If
A = sup_x ∫ ‖T(x)* T(y)‖^{1/2} dμ(y)  and  B = sup_x ∫ ‖T(x) T(y)*‖^{1/2} dμ(y)
are finite, then the function T(x)v is integrable for each v in E with
‖∫ T(x)v dμ(x)‖ ≤ √(AB) ‖v‖.
The result can be proved by replacing sums by integrals in the previous proof or by using Riemann sums to approximate the integrals.
Example
Here is an example of an orthogonal family of ope |
https://en.wikipedia.org/wiki/Bridge%20locus | In neuroscience the bridge locus for a particular sensory percept is a hypothetical set of neurons whose activity is the basis of that sensory percept. The term was introduced by D.N. Teller and E.Y. Pugh Jr. in 1983, and has been sparingly used. Activity in the bridge locus neurons is postulated to be necessary and sufficient for sensory perception: if the bridge locus neurons are not active, then the sensory perception does not occur, regardless of the actual sensory input. Conversely if the bridge locus neurons are active, then sensory perception occurs, regardless of the actual sensory input. It is the highest neural level of a sensory perception. So, for example, retinal neurons are not considered a bridge locus for visual perception because stimulating visual cortex can give rise to visual percepts.
Not all scholars believe in such a neural correlate of consciousness. Pessoa et al., for example, argue that there is no necessity for a bridge locus, basing their argument on the requirement of an isomorphism between neural states and conscious states. Thompson argues that there are good reasons to think that the notion of a bridge locus, which he calls a "localizationist approach", is misguided, questioning the premise that there has to be one particular neural stage whose activity forms the immediate substrate of perception. He argues, based upon work by Zeki & Shipp, DeYoe & Van Essen, and others, that brain regions are not independent stages or modules but have dense forward and backward projections that act reciprocally, and that visual processing is highly interactive and context-dependent. He also argues that cells in the visual cortex "are not mere 'feature detectors'", and that neuroscience has revealed that the brain in fact employs distributed networks, rather than centralized representations. He equates the notion of a bridge locus to a Cartesian theatre and suggests that as a notion it should be abandoned. |
https://en.wikipedia.org/wiki/TI-990 | The TI-990 was a series of 16-bit minicomputers sold by Texas Instruments (TI) in the 1970s and 1980s. The TI-990 was a replacement for TI's earlier minicomputer systems, the TI-960 and the TI-980. It had several unique features, and was easier to program than its predecessors.
Among its core concepts was the ability to easily support multiprogramming using a software-switchable set of processor registers that allowed it to perform rapid context switches between programs. This was enabled through the use of register values stored in main memory that could be swapped by changing a single pointer.
TI later implemented the TI-990 in a single-chip implementation, the TMS9900, among the first 16-bit microprocessors. Intended for use in low-end models of the TI-990, it retained the 990's memory model and main memory register system. This design was ultimately much more widely used in the TI-99/4A, where details of its minicomputer-style memory model presented significant disadvantages.
Features
Workspaces
On the TI-990, registers are stored in memory and referred to through a hard register called the Workspace Pointer. The concept behind the workspace was that main memory was based on the new semiconductor RAM chips that TI had developed, which ran at the same speed as the CPU. This meant that it did not matter whether the "registers" were real registers in the CPU or represented in memory. When the Workspace Pointer is loaded with a memory address, that address becomes the origin of the "registers".
There are three hard registers in the 990; the Workspace Pointer (WP), the Program Counter (PC) and the Status register (ST). A context switch entailed the saving and restoring of only the hard registers.
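The workspace scheme can be illustrated with a small simulation. The class and method names below are invented for the example, and real TI-990 instructions such as BLWP differ in detail:

```python
class TI990:
    """Toy model of the TI-990 workspace scheme (names invented for illustration)."""

    def __init__(self, memory_words):
        self.mem = [0] * memory_words          # word-addressed main memory (simplified)
        self.wp, self.pc, self.st = 0, 0, 0    # the three hard registers: WP, PC, ST

    def reg(self, n):
        # "Register" Rn is just the memory word at WP + n.
        return self.mem[self.wp + n]

    def set_reg(self, n, value):
        self.mem[self.wp + n] = value

    def context_switch(self, new_wp, new_pc):
        # BLWP-style switch: load the new workspace and program counter, then
        # save the old context in the new workspace's R13-R15. No register
        # file is copied; only the three hard registers change.
        old_wp, old_pc, old_st = self.wp, self.pc, self.st
        self.wp, self.pc = new_wp, new_pc
        self.set_reg(13, old_wp)
        self.set_reg(14, old_pc)
        self.set_reg(15, old_st)

cpu = TI990(256)
cpu.set_reg(0, 42)           # R0 of the workspace at address 0
cpu.context_switch(64, 512)  # switch to a workspace based at address 64
```

After the switch, the old task's registers are untouched in memory, so returning to it is just another three-word swap in the other direction.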
Extended operation
The TI-990 had a facility to allow extended operations through the use of plug-in hardware. If the hardware was not present, the CPU trapped to allow software to perform the function. The extended operation code (XOP) allowed for 15 attached devices on a system. Although, d |
https://en.wikipedia.org/wiki/Titratable%20acid | In chemistry, titratable acid generally refers to any acid that can lose one or more protons in an acid–base reaction.
The term is used slightly differently in other fields. For example, in renal physiology, titratable acid is a term to describe acids such as phosphoric acid and sulfuric acid, which are involved in renal physiology. It is used to explicitly exclude ammonium (NH4+) as a source of acid, and is part of the calculation for net acid excretion.
It gets its name from the use of NaOH in acid–base titration to estimate the quantity of titratable acid.
See also
Acids in wine |
https://en.wikipedia.org/wiki/DX10 | DX10 was a general purpose international, multitasking operating system designed to operate
with the Texas Instruments 990/10, 990/10A and 990/12 minicomputers using the memory mapping feature.
The Disk Executive Operating System (DX10)
DX10 was a versatile disk-based operating system capable of supporting a wide
range of commercial and industrial applications.
DX10 was also a multiterminal system capable of making each of several users
appear to have exclusive control of the system.
DX10 was an international operating system designed to meet the commercial
requirements of the United States, most European countries, and Japan.
DX10 supported several models of video display terminals (VDTs), most of which
permit users to enter, view, and process data in their own language.
DX10 Capabilities
DX10 required a basic hardware configuration, but allowed additional members of
an extensive group of peripherals to be included in the configuration.
During system generation, the user could configure DX10 to support peripheral devices
that were not members of the 990 family and devices that required realtime
support.
This capability required that the user also provide software control for these
devices.
The user communicated with DX10 easily through the System Command Interpreter (SCI).
SCI was designed to provide simple, convenient interaction between the user and
DX10 in a conversational format.
Through SCI the user had access to complete control of DX10.
SCI was flexible in its mode of communication.
While SCI was convenient for interactive communication through a data terminal,
it could also be accessed in batch mode.
DX10 was capable of extensive file management.
The built-in file structures included key indexed files, relative record files,
and sequential files.
A group of file control utilities existed for copying and modifying files, and
controlling file parameters.
DX10 Features
DX10 offered a number of features that provided convenient use of the minicomputers syst |
https://en.wikipedia.org/wiki/Potamkin%20Prize | The Potamkin Prize for Research in Pick's, Alzheimer's, and Related Diseases was established in 1988 and is sponsored by the American Academy of Neurology. The prize is funded through the philanthropy of the Potamkin Foundation. The prize is awarded for achievements on emerging areas of research in Pick's disease, Alzheimer's disease and other dementias.
The award includes a medallion, $100,000 prize, and a 20-minute lecture at the American Academy of Neurology annual meeting. The prize is named after Luba Potamkin (wife of Victor Potamkin) who, in 1978, was diagnosed with a form of dementia which was identified as Pick's disease, a form of frontotemporal dementia.
A website dedicated to the Potamkin Prize was launched in 2020 and included background on the prize, biographies of past winners, and information about applying or being nominated.
Awards
Source (to 2017): American Academy of Neurology
2021: Kenneth Kosik, Giovanna Mallucci
2020: J. Paul Taylor
2019: Randall J. Bateman
2018: David Bennett
2017: Claudia Kawas, Kristine Yaffe
2016: Rosa Rademakers, Bryan J. Traynor
2015: Peter Davies, Reisa Sperling
2014:
2013: , ,
2012:
2011: , , Eva-Maria Mandelkow
2010: ,
2009: , ,
2008: , William E. Klunk, and Chester A. Mathis
2007:
2006: Karen Ashe, Karen Duff, and Bradley Hyman
2005: , and
2004: ,
2003: David M. Holtzman,
2002: Christian Haass, Bart De Strooper
2001:
2000: Maria Grazia Spillantini,
1999: , ,
1998: Michel Goedert, Virginia M.-Y. Lee, John Q. Trojanowski
1997: , ,
1996: Rudolph Tanzi, Peter St. George-Hyslop
1995: , Khalid Iqbal,
1994: , Gerard D. Schellenberg
1993: , Alison Goate, John Hardy, Christine Van Broeckhoven
1992: Donald L. Price,
1991: Stanley B. Prusiner
1990: Colin L. Masters, Konrad Beyreuther
1989: Dennis Selkoe, George G. Glenner
1988:
See also
List of medicine awards
List of neuroscience awards |
https://en.wikipedia.org/wiki/The%20Einstein%20Theory%20of%20Relativity | The Einstein Theory of Relativity (1923) is a silent animated short film directed by Dave Fleischer and released by Fleischer Studios.
History
In August 1922, Scientific American published an article explaining their position that a silent film would be unsuccessful in presenting the theory of relativity to the general public, arguing that only as part of a broader educational package including lecture and text would such film be successful. Scientific American then went on to review frames from an unnamed German film reported to be financially successful.
Six months later, on February 8, 1923, the Fleischers released their relativity film, produced in collaboration with popular science journalist Garrett P. Serviss to accompany his book on the same topic. Two versions of the Fleischer film are reported to exist – a shorter two-reel (20 minute) edit intended for general theater audiences, and a longer five-reel (50 minute) version intended for educational use.
The Fleischers lifted footage from the German predecessor, Die Grundlagen der Einsteinschen Relativitäts-Theorie, directed by Hanns-Walter Kornblum, for inclusion in their film. Presented here are images from the Fleischer film and the German film. If actual footage was not recycled into The Einstein Theory of Relativity, these images and the text of the Scientific American article suggest that original visual elements from the German film were.
This film, like much of the Fleischers' work, has fallen into the public domain. Unlike Fleischer Studio's Superman or Betty Boop cartoons, The Einstein Theory of Relativity has very few existing prints and is available in 16mm from only a few specialized film preservation organizations. |
https://en.wikipedia.org/wiki/Veriato | Veriato, formerly known as SpectorSoft, is a software company that develops and sells user behavior analytics and employee monitoring software.
It is based in Palm Beach Gardens, located in Palm Beach County, Florida, in the United States.
History
Founded in 1998, the company was an early entrant in internet monitoring software.
In 2008, private equity firms Harbourvest Partners and WestView Capital Partners invested in the company, taking a majority ownership position. SpectorSoft originally serviced both consumer and business customers, but no longer sells software for consumer or home use.
In 2011, the company opened their West Palm Beach office and London office.
In 2012 SpectorSoft acquired the assets of Corner Bowl Software.
On March 31, 2015, the company completely exited the consumer market.
In 2016 the company was renamed Veriato, Inc.
On June 12, 2019, Veriato was acquired by Awareness Technologies.
Awards and honors
Spector Pro was given the PC Magazine editors' choice award in a 2002 review of six computer activity monitoring tools.
In 2004, version 5.0 of Spector Pro was again given the editors' choice award from a field of four programs.
In 2008 Spector 360 SR3 won the PC Magazine editors' choice award.
In 2004, SpectorSoft was listed for the first time as one of Inc. magazine's Top 500 fastest-growing private companies in America, at position 224.
The company achieved a position on the list once again in 2005 at number 497.
In 2009 SpectorSoft made the list of Inc. 5000 Companies at number 3340.
In 2010 and 2011 SpectorSoft again made the Inc. Magazine 500/5000 list.
Spectorsoft−Veriato products have been mentioned in ZDNET, PC/Computing, Time, CNN, NBC Nightly News, The New York Times and The Wall Street Journal.
In 2014, Spectorsoft won the SC Magazine Europe Best Fraud Prevention Award and the Best of Interop 2014 Security Award for their Spector 360 Recon product.
Patent Infringement case
Patent Infringement Case Helios Software |
https://en.wikipedia.org/wiki/5-HT3%20receptor | {{DISPLAYTITLE:5-HT3 receptor}}The 5-HT3 receptor belongs to the Cys-loop superfamily of ligand-gated ion channels (LGICs) and therefore differs structurally and functionally from all other 5-HT receptors (5-hydroxytryptamine, or serotonin receptors) which are G protein-coupled receptors. This ion channel is cation-selective and mediates neuronal depolarization and excitation within the central and peripheral nervous systems.
As with other ligand-gated ion channels, the 5-HT3 receptor consists of five subunits arranged around a central ion-conducting pore, which is permeable to sodium (Na+), potassium (K+), and calcium (Ca2+) ions. Binding of the neurotransmitter 5-hydroxytryptamine (serotonin) to the 5-HT3 receptor opens the channel, which, in turn, leads to an excitatory response in neurons. The rapidly activating, desensitizing, inward current is predominantly carried by sodium and potassium ions. 5-HT3 receptors have a negligible permeability to anions. They are most closely related by homology to the nicotinic acetylcholine receptor.
Structure
The 5-HT3 receptor differs markedly in structure and mechanism from the other 5-HT receptor subtypes, which are all G-protein-coupled. A functional channel may be composed of five identical 5-HT3A subunits (homopentameric) or a mixture of 5-HT3A and one of the other four 5-HT3B, 5-HT3C, 5-HT3D, or 5-HT3E subunits (heteropentameric). It appears that only the 5-HT3A subunits form functional homopentameric channels. All other subunit subtypes must heteropentamerize with 5-HT3A subunits to form functional channels. Additionally, there has not currently been any pharmacological difference found between the heteromeric 5-HT3AC, 5-HT3AD, 5-HT3AE, and the homomeric 5-HT3A receptor. N-terminal glycosylation of receptor subunits is critical for subunit assembly and plasma membrane trafficking. The subunits surround a central ion channel in a pseudo-symmetric manner (Fig.1). Each subunit comprises an extracellular N-terminal doma |
https://en.wikipedia.org/wiki/Biological%20response%20modifier | Biological response modifiers (BRMs) are substances that modify immune responses. They can be both endogenous (produced naturally within the body) and exogenous (as pharmaceutical drugs), and they can either enhance an immune response or suppress it. Some of these substances arouse the body's response to an infection, and others can keep the response from becoming excessive. Thus they serve as immunomodulators in immunotherapy (therapy that makes use of immune responses), which can be helpful in treating cancer (where targeted therapy often relies on the immune system being used to attack cancer cells) and in treating autoimmune diseases (in which the immune system attacks the self), such as some kinds of arthritis and dermatitis. Most BRMs are biopharmaceuticals (biologics), including monoclonal antibodies, interleukin 2, interferons, and various types of colony-stimulating factors (e.g., CSF, GM-CSF, G-CSF). "Immunotherapy makes use of BRMs to enhance the activity of the immune system to increase the body's natural defense mechanisms against cancer", whereas BRMs for rheumatoid arthritis aim to reduce inflammation.
Some of the side effects of BRMs include nausea and vomiting, diarrhea, loss of appetite, fever and chills, muscle aches, weakness, skin rash, an increased tendency to bleed, or swelling. For example, patients with systemic lupus erythematosus (SLE) who are treated with standard of care, including biologic response modifiers, experience a higher risk of mortality and opportunistic infection compared to the general population.
Abciximab
Mechanism of action: A monoclonal antibody that binds to the glycoprotein receptor IIb/IIIa on activated platelets, preventing aggregation.
Clinical use: Acute coronary syndromes, percutaneous transluminal coronary angioplasty.
Toxicity: Bleeding, thrombocytopenia.
Anakinra (Kineret)
Mechanism of action: A recombinant version of the Interleukin 1 receptor antagonist.
Clinical use: Rheumatoid arthritis.
Toxicity: Allergic re |
https://en.wikipedia.org/wiki/X%283872%29 | The X(3872) is an exotic meson candidate with a mass of 3871.68 MeV/c2 which does not fit into the quark model because of its quantum numbers. It was first discovered in 2003 by the Belle experiment in Japan and later confirmed by several other experimental collaborations. Several theories have been proposed for its nature, such as a mesonic molecule or a diquark-antidiquark pair (tetraquark).
The quantum numbers of X(3872) have been determined by the LHCb experiment at CERN in March 2013. The values for JPC are 1++.
The first evidence of X(3872) production in the quark–gluon plasma was reported by the CMS experiment at CERN in January 2022.
See also
Meson
XYZ particle
Y(4140)
Z(4430)
Zc(3900)
Notes |
https://en.wikipedia.org/wiki/Automated%20Lip%20Reading | Automated Lip Reading (ALR) is a software technology developed by speech recognition expert Frank Hubner. A video image of a person talking can be analysed by the software. The shapes made by the lips can be examined and then turned into sounds. The sounds are compared to a dictionary to create matches to the words being spoken.
The technology was used successfully to analyse silent home movie footage of Adolf Hitler taken by Eva Braun at their Bavarian retreat Berghof.
The video, with words, was included in a documentary titled "Hitler's Private World" (Revealed Studios, 2006).
Source: New Technology catches Hitler off guard
See also
Audio-visual speech recognition
Silent speech interface
Articulatory speech recognition
Computational linguistics
Facial motion capture
Lip reading
Speech recognition
Applications of computer vision |
https://en.wikipedia.org/wiki/Mesonic%20molecule | A mesonic molecule is a set of two or more mesons bound together by the strong force. Unlike baryonic molecules, which form the nuclei of all elements in nature save hydrogen-1, a mesonic molecule has yet to be definitively observed. The X(3872) discovered in 2003 and the Z(4430) discovered in 2007 by the Belle experiment are the best candidates for such an observation.
See also
Meson
Tetraquark
Pionium |
https://en.wikipedia.org/wiki/Syncword | In computer networks, a syncword, sync character, sync sequence or preamble is used to synchronize a data transmission by indicating the end of header information and the start of data. The syncword is a known sequence of data used to identify the start of a frame, and is also called reference signal or midamble in wireless communications.
Prefix codes allow unambiguous identification of synchronization sequences and may serve as self-synchronizing code.
Examples
In an audio receiver receiving a bit stream of data, an example of a syncword is 0x0B77 for an AC-3 encoded stream.
An Ethernet packet begins with the Ethernet preamble, 56 bits of alternating 1 and 0 bits that allow the receiver to synchronize its clock to the transmitter, followed by a one-octet start frame delimiter byte and then the header.
All USB packets begin with a sync field (8 bits long at low speed, 32 bits long at high speed) used to synchronize the receiver's clock to the transmitter's clock.
A receiver uses a physical layer preamble, also called a physical layer training sequence, to synchronize on the signal by estimating frequency and clock offsets.
Some documentation uses "preamble" to refer to a signal used to announce a transmission, to wake-up receivers in a low-power mode.
While some systems use exactly the same signal for both physical-layer training and wake-up functions, others use 2 different signals at 2 different times for these 2 functions, or have only one or the other of these signals.
The Bisync protocol of the 1960s used a minimum of two ASCII "SYN" characters (0x16…0x16) to achieve character synchronization in an undifferentiated bit stream, then other special characters to synchronize to the beginning of a frame of characters.
The syncwords can be seen as a kind of delimiter. Various techniques are used to avoid delimiter collision, or, in other words, to "disguise" bytes of data at the data link layer that might otherwise be incorrectly recognized as the syncword. For exam |
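As a minimal illustration of syncword scanning, the sketch below searches a byte stream for the AC-3 syncword 0x0B77 mentioned above. The function name is illustrative; a real decoder would also validate the frame header found at each offset:

```python
def find_syncword(stream: bytes, syncword: bytes = b"\x0b\x77"):
    """Return all offsets where the syncword occurs in a byte stream (sketch).

    Defaults to the AC-3 syncword 0x0B77. This only scans for the marker
    bytes; it does not check that a valid frame actually follows.
    """
    offsets, i = [], stream.find(syncword)
    while i != -1:
        offsets.append(i)
        i = stream.find(syncword, i + 1)
    return offsets

# Two frame starts separated by arbitrary payload bytes:
data = b"\x0b\x77" + b"\x00" * 10 + b"\x0b\x77" + b"\xff" * 4
# find_syncword(data) -> [0, 12]
```

Note that the payload here happens not to contain 0x0B77 itself; this is exactly the delimiter-collision problem the paragraph above describes, and why real formats validate the header rather than trusting the marker alone.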
https://en.wikipedia.org/wiki/Sunflower%20%28mathematics%29 | In the mathematical fields of set theory and extremal combinatorics, a sunflower or Δ-system is a collection of sets in which all possible distinct pairs of sets share the same intersection. This common intersection is called the kernel of the sunflower.
The naming arises from a visual similarity to the botanical sunflower, arising when a Venn diagram of a sunflower set is arranged in an intuitive way. Suppose the shared elements of a sunflower set are clumped together at the centre of the diagram, and the nonshared elements are distributed in a circular pattern around the shared elements. Then when the Venn diagram is completed, the lobe-shaped subsets, which encircle the common elements and one or more unique elements, take on the appearance of the petals of a flower.
The main research question arising in relation to sunflowers is: under what conditions does there exist a large sunflower (a sunflower with many sets) in a given collection of sets? The Δ-lemma, sunflower lemma, and the Erdős–Rado sunflower conjecture give successively weaker conditions which would imply the existence of a large sunflower in a given collection, with the latter being one of the most famous open problems of extremal combinatorics.
Formal definition
Suppose W is a set system over U, that is, a collection of subsets of a set U. The collection W is a sunflower (or Δ-system) if there is a subset S of U such that for each pair of distinct A and B in W, we have A ∩ B = S. In other words, a set system or collection of sets W is a sunflower if all sets in W share the same common subset of elements. An element of U either lies in the common subset S or else appears in at most one of the members of W; no element is shared by only some of the sets but not others. Note that this intersection, S, may be empty; a collection of pairwise disjoint subsets is also a sunflower. Similarly, a collection of sets each containing the same elements is also trivially a sunflower.
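The formal definition translates directly into a small check. The function below is illustrative Python (the name is invented); it returns the kernel when a collection of sets forms a sunflower, and None otherwise:

```python
from itertools import combinations

def is_sunflower(sets):
    """Return the kernel if `sets` is a sunflower (Delta-system), else None.

    A sunflower requires every pair of distinct sets to share the same
    intersection. A single set or an empty collection is trivially one.
    """
    sets = [frozenset(s) for s in sets]
    if len(sets) < 2:
        return frozenset.intersection(*sets) if sets else frozenset()
    kernel = sets[0] & sets[1]
    for a, b in combinations(sets, 2):
        if a & b != kernel:
            return None
    return kernel

petals = [{1, 2, 3}, {1, 2, 4}, {1, 2, 5}]   # sunflower with kernel {1, 2}
disjoint = [{1}, {2}, {3}]                    # pairwise disjoint: empty kernel
# is_sunflower(petals) -> frozenset({1, 2}); is_sunflower(disjoint) -> frozenset()
```

The `disjoint` example illustrates the remark above that a collection of pairwise disjoint sets is a sunflower with empty kernel.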
Sunflower lemma and conjecture
The study of sunfl |
https://en.wikipedia.org/wiki/Ramification%20%28botany%29 | In botany, ramification is the divergence of the stem and limbs of a plant into smaller ones, i.e., trunk into branches, branches into increasingly smaller branches, and so on. Gardeners stimulate the process of ramification through pruning, thereby making trees, shrubs, and other plants bushier and denser.
Short internodes (the section of stem between nodes, i.e., areas where leaves are produced) help increase ramification in those plants that form branches at these nodes. Long internodes (which may be the result of over-watering, the over-use of fertilizer, or a seasonal "growth spurt") decrease a gardener's ability to induce ramification in a plant.
A high degree of ramification is essential for the creation of topiary as it enables the topiary artist to carve a bush or hedge into a shape with an even surface. Ramification is also essential to practitioners of the art of bonsai as it helps re-create the form and habit of a full-size tree in a small tree grown in a container.
The pruning practices of coppicing and pollarding induce ramification by removing most of a tree's mass above the root. Fruit tree pruning increases the yield of orchards by inducing ramification and thereby creating many vigorous, fruitful branches in the place of a few less-fruitful ones.
External links
Annals of Botany 91: 559-569, 2003
Plant Hormones
Hormones, Light and Flowering
Leonardo DaVinci's Botany for painters
Plant physiology
Horticulture
Plant anatomy
Plant morphology |
https://en.wikipedia.org/wiki/IBM%20M44/44X | The IBM M44/44X was an experimental computer system from the mid-1960s, designed and operated at IBM's Thomas J. Watson Research Center at Yorktown Heights, New York. It was based on a modified IBM 7044 (the 'M44'), and simulated multiple 7044 virtual machines (the '44X'), using both hardware and software. Key team members were Dave Sayre and Rob Nelson. This was a groundbreaking machine, used to explore paging, the virtual machine concept, and computer performance measurement. It was purely a research system, and was cited in 1981 by Peter Denning as an outstanding example of experimental computer science.
The term virtual machine probably originated with the M44/44X project, from which it was later appropriated by the CP-40 team to replace their earlier term pseudo machine.
Unlike CP-40 and later CP/CMS control programs, M44/44X did not implement a complete simulation of the underlying hardware (i.e. full virtualization). CP-40 project leader Robert Creasy observed:
The M44/44X "was about as much of a virtual machine system as CTSS – which is to say that it was close enough to a virtual machine system to show that 'close enough' did not count. I never heard a more eloquent argument for virtual machines than from Dave Sayre."
M44/44X "implanted the idea that the virtual machine concept is not necessarily less efficient than more conventional approaches" – a core assumption in the CP/CMS architecture, and one that ultimately proved very successful. |
https://en.wikipedia.org/wiki/Stereo%20cameras | The stereo cameras approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. Stereo cameras is one of many approaches used in the broader fields of computer vision and machine vision.
Calculation
In this approach, two cameras with a known physical relationship (i.e. a common field of view the cameras can see, and how far apart their focal points sit in physical space) are correlated via software. By finding mappings of common pixel values, and calculating how far apart these common areas reside in pixel space, a rough depth map can be created. This is very similar to how the human brain uses stereoscopic information from the eyes to gain depth cue information, i.e. how far apart any given object in the scene is from the viewer.
The camera attributes must be known, focal length and distance apart etc., and a calibration done. Once this is completed, the systems can be used to sense the distances of objects by triangulation. Finding the same singular physical point in the two left and right images is known as the correspondence problem. Correctly locating the point gives the computer the capability to calculate the distance that the robot or camera is from the object. On the BH2 Lunar Rover the cameras use five steps: a bayer array filter, photometric consistency dense matching algorithm, a Laplace of Gaussian (LoG) edge detection algorithm, a stereo matching algorithm and finally uniqueness constraint.
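Once the correspondence problem is solved for a point in a rectified pair, the triangulation reduces to the standard pinhole relation depth = focal length × baseline ÷ disparity. A small sketch; the rig numbers are made up for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d, with f the focal length in pixels, B the distance
    between the two camera centres in metres, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity gives no finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 35 px disparity
print(depth_from_disparity(700.0, 0.12, 35.0))  # 2.4 (metres)
```

Closer objects produce larger disparities, so depth falls as disparity grows, which is the depth-cue behavior described above.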
Uses
This type of stereoscopic image processing technique is used in applications such as 3D reconstruction, robotic control and sensing, crowd dynamics monitoring and off-planet terrestrial rovers; for example, in mobile robot navigation, tracking, gesture recognition, targeting, 3D surface visualization, immersive and interactive gaming. Although the Xbox Kinect sensor is also able to create a depth map of an image, it uses an infrared camera for this pu |
https://en.wikipedia.org/wiki/Medical%20procedure | A medical procedure is a course of action intended to achieve a result in the delivery of healthcare.
A medical procedure with the intention of determining, measuring, or diagnosing a patient condition or parameter is also called a medical test. Other common kinds of procedures are therapeutic (i.e., intended to treat, cure, or restore function or structure), such as surgical and physical rehabilitation procedures.
Definition
"An activity directed at or performed on an individual with the object of improving health, treating disease or injury, or making a diagnosis." - International Dictionary of Medicine and Biology
"The act or conduct of diagnosis, treatment, or operation." - Stedman's Medical Dictionary by Thomas Lathrop Stedman
"A series of steps by which a desired result is accomplished." - Dorland's Medical Dictionary by William Alexander Newman Dorland
"The sequence of steps to be followed in establishing some course of action." - Mosby's Medical, Nursing, & Allied Health Dictionary
List of medical procedures
Propaedeutic
Auscultation
Medical inspection (body features)
Palpation
Percussion (medicine)
Vital signs measurement, such as blood pressure, body temperature, or pulse (or heart rate)
Diagnostic
Lab tests
Biopsy test
Blood test
Stool test
Urinalysis
Cardiac stress test
Electrocardiography
Electrocorticography
Electroencephalography
Electromyography
Electroneuronography
Electronystagmography
Electrooculography
Electroretinography
Endoluminal capsule monitoring
Endoscopy
Colonoscopy
Colposcopy
Cystoscopy
Gastroscopy
Laparoscopy
Laryngoscopy
Ophthalmoscopy
Otoscopy
Sigmoidoscopy
Esophageal motility study
Evoked potential
Magnetoencephalography
Medical imaging
Angiography
Aortography
Cerebral angiography
Coronary angiography
Lymphangiography
Pulmonary angiography
Ventriculography
Chest photofluorography
Computed tomography
Echocardiography
Electrical impedance tomography
Fluoroscopy
Magnetic resonance imag |
https://en.wikipedia.org/wiki/Geoboard | A geoboard is a mathematical manipulative used to explore basic concepts in plane geometry such as perimeter, area and the characteristics of triangles and other polygons. It consists of a physical board with a certain number of nails half driven in, around which are wrapped geo bands that are made of rubber. Normal rubber bands can also be used.
Geoboards were invented and popularized in the 1950s by Egyptian mathematician Caleb Gattegno (1911-1988).
Structure and use
A variety of geoboard designs are used. Originally made out of plywood and brass nails or pegs, geoboards are now usually made of plastic. They may have an upright square lattice of 9, 16, 25 or more nails, or a circle of nails around a central nail. Students are asked to place rubber bands around the nails to explore geometric concepts or to solve mathematical puzzles.
Geoboards may be used to learn about:
plane shapes;
translation;
rotation;
reflection;
similarity;
co-ordination;
counting;
right angles;
pattern;
classification;
scaling;
position;
congruence;
area;
perimeter.
Two-dimensional representations of the geoboard may be applied to ordinary paper using rubber stamps, or special "geoboard paper" printed with diagrams of geoboards may be used, to help capture a student's explanation of the concept they have discovered or illustrated on the geoboard. There are also a number of online virtual geoboards.
https://en.wikipedia.org/wiki/Eilenberg%E2%80%93Ganea%20conjecture | The Eilenberg–Ganea conjecture is a claim in algebraic topology. It was formulated by Samuel Eilenberg and Tudor Ganea in 1957, in a short but influential paper. It states that if a group G has cohomological dimension 2, then it has a 2-dimensional Eilenberg–MacLane space K(G, 1). For n different from 2, a group G of cohomological dimension n has an n-dimensional Eilenberg–MacLane space. It is also known that a group of cohomological dimension 2 has a 3-dimensional Eilenberg–MacLane space.
In 1997, Mladen Bestvina and Noel Brady constructed a group G so that either G is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, not both conjectures can be true. |
https://en.wikipedia.org/wiki/Fractional%20excretion%20of%20sodium | The fractional excretion of sodium (FENa) is the percentage of the sodium filtered by the kidney which is excreted in the urine. It is measured in terms of plasma and urine sodium, rather than by the interpretation of urinary sodium concentration alone, as urinary sodium concentrations can vary with water reabsorption. Therefore, the urinary and plasma concentrations of sodium must be compared to get an accurate picture of kidney clearance. In clinical use, the fractional excretion of sodium can be calculated as part of the evaluation of acute kidney failure in order to determine if hypovolemia or decreased effective circulating plasma volume is a contributor to the kidney failure.
Calculation
FENa is calculated in two parts—figuring out how much sodium is excreted in the urine, and then finding its ratio to the total amount of sodium that passed through (aka "filtered by") the kidney.
First, the actual amount of sodium excreted is calculated by multiplying the urine sodium concentration by the urinary flow rate. This is the numerator in the equation. The denominator is the total amount of sodium filtered by the kidneys. This is calculated by multiplying the plasma sodium concentration by the glomerular filtration rate calculated using creatinine filtration. This formula is represented mathematically as:
FENa = [(Sodium_urinary × Flow rate_urinary) ÷ (Sodium_plasma × ((Creatinine_urinary × Flow rate_urinary) ÷ Creatinine_plasma))] × 100
where sodium is measured in mmol/L and creatinine in mg/dL.
The flow rates cancel out in the above equation, simplifying to the standard equation:
FENa = [(Sodium_urinary × Creatinine_plasma) ÷ (Sodium_plasma × Creatinine_urinary)] × 100
For ease of recall, one can just remember the fractional excretion of sodium is the clearance of sodium divided by the glomerular filtration rate (i.e. the "fraction" excreted).
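Because the flow rates cancel, the calculation needs only the four concentrations. A sketch with hypothetical lab values; a result under 1% is classically read as consistent with a prerenal cause:

```python
def fena_percent(u_na, p_na, u_cr, p_cr):
    """Fractional excretion of sodium (%), flow-rate-free form:
    FENa = (U_Na * P_Cr) / (P_Na * U_Cr) * 100.
    Each analyte must use the same units in urine and plasma
    (e.g. sodium in mmol/L, creatinine in mg/dL) so the units cancel."""
    return (u_na * p_cr) / (p_na * u_cr) * 100

# Hypothetical values: U_Na = 20 mmol/L, P_Na = 140 mmol/L,
# U_Cr = 100 mg/dL, P_Cr = 1.0 mg/dL
print(round(fena_percent(20, 140, 100, 1.0), 3))  # 0.143
```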
Interpretation
FENa can be useful in the evaluation of acute kidney failure in the context of low urine output. Low fractional excretion indicates sodium retention by the kidney, suggesting pathophysiology extrinsic to the urinary system s |
https://en.wikipedia.org/wiki/Whitehead%20conjecture | The Whitehead conjecture (also known as the Whitehead asphericity conjecture) is a claim in algebraic topology. It was formulated by J. H. C. Whitehead in 1941. It states that every connected subcomplex of a two-dimensional aspherical CW complex is aspherical.
A group presentation is called aspherical if the two-dimensional CW complex associated with this presentation is aspherical or, equivalently, if the second homotopy group of that complex is trivial. The Whitehead conjecture is equivalent to the conjecture that every sub-presentation of an aspherical presentation is aspherical.
In 1997, Mladen Bestvina and Noel Brady constructed a group G so that either G is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, it is not possible for both conjectures to be true. |
https://en.wikipedia.org/wiki/Successive%20linear%20programming | Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. It is related to, but distinct from, quasi-Newton methods.
Starting at some estimate of the optimal solution, the method is based on solving a sequence of first-order approximations (i.e. linearizations) of the model. The linearizations are linear programming problems, which can be solved efficiently. As the linearizations need not be bounded, trust regions or similar techniques are needed to ensure convergence in theory.
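On a one-dimensional toy problem the whole scheme fits in a few lines: each "LP" minimizes the linearization over a trust-region interval (its optimum is always an endpoint), and the trust region shrinks whenever the linear step fails to improve the true objective. This is only an illustrative sketch, not a production SLP solver, and the objective function is made up:

```python
def slp_minimize(f, grad, x0, trust=1.0, iters=60):
    """Toy successive linear programming in one variable.
    Linearize f at xk: f(x) ~ f(xk) + f'(xk) * (x - xk). The LP minimum
    of a linear function over [xk - trust, xk + trust] is an endpoint.
    Accept the step if it improves f, otherwise shrink the region."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        candidate = x - trust if g > 0 else x + trust
        if f(candidate) < f(x):
            x = candidate
        else:
            trust *= 0.5  # linear model too coarse at this scale
    return x

# Hypothetical objective: f(x) = (x - 3)^2, minimized at x = 3
x_star = slp_minimize(lambda x: (x - 3) ** 2, lambda x: 2 * (x - 3), 0.0)
print(round(x_star, 6))  # 3.0
```

Without the shrinking trust region the iterates would oscillate around the minimizer forever, which is the unboundedness issue the paragraph above alludes to.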
SLP has been used widely in the petrochemical industry since the 1970s.
See also
Sequential quadratic programming
Sequential linear-quadratic programming
Augmented Lagrangian method |
https://en.wikipedia.org/wiki/Fern%20spike | In paleontology, a fern spike is the occurrence of unusually high spore abundance of ferns in the fossil record, usually immediately (in a geological sense) after an extinction event. The spikes are believed to represent a large, temporary increase in the number of ferns relative to other terrestrial plants after the extinction or thinning of the latter. Fern spikes are strongly associated with the Cretaceous–Paleogene extinction event, although they have been found in other points of time and space such as at the Triassic-Jurassic boundary. Outside the fossil record, fern spikes have been observed to occur in response to local extinction events, such as the 1980 Mount St. Helens eruption.
Causes
Extinction events have historically been caused by massive environmental disturbances, such as meteor strikes. Volcanic eruptions can also wipe out local ecosystems through pyroclastic flows and landslides, leaving the ground bare for new colonization. For a population to recover and thrive after such an event, it must be able to tolerate the conditions of the disturbed environment. Ferns have multiple characteristics which predispose them to grow in those environments.
Spore characteristics
Plants generally reproduce with spores or seeds, meaning those will be what germinates in a disaster's aftermath. But spores have advantages over seeds in the environmental conditions produced by a disaster. They are generally produced in higher numbers than seeds, and are smaller, aiding wind dispersal. While many wind-dispersed pollens of seed plants are smaller and farther dispersed than spores, pollen cannot germinate into a plant and must land in a receptive flower. Some seed plants also require animals to disperse their seeds, which may not be present after a disaster. These characteristics allow ferns to rapidly colonize an area with their spores.
Fern spores require light to germinate. Following major disturbances that clear or reduce plant life, the ground would receive |
https://en.wikipedia.org/wiki/Tower%20array | A tower array is an arrangement of multiple radio towers which are mast radiators in a phased array. They were originally developed as ground-based tracking radars. Tower arrays can consist of free-standing or guyed towers or a mix of them. Tower arrays are used to constitute a directional antenna of a mediumwave or longwave radio station.
The number of towers in a tower array can vary.
In many arrays all towers have the same height, but there are also arrays of towers of different height. The arrangement can vary. For directional antennas with fixed radiation pattern, linear arrangements are preferred, while for switchable directional patterns (usually for daytime groundwave versus nighttime skywave), square arrangements are chosen.
Examples
Tower arrays with guyed masts
Longwave transmitter Europe 1
Transmitter Weisskirchen
Beidweiler Longwave Transmitter
Transmitter Wachenbrunn
Transmitter Ismaning (VoA-Station)
Tower arrays with free standing towers
Junglinster Longwave Transmitter
Orfordness transmitting station
See also
Directional antenna
Directional array
Longwave
Medium wave |
https://en.wikipedia.org/wiki/Pollen-presenter | A pollen-presenter is an area on the tip of the style in flowers of plants of the family Proteaceae on which the anthers release their pollen prior to anthesis. To ensure pollination, the style grows during anthesis, sticking out the pollen-presenter prominently, and so ensuring that the pollen easily contacts the bodies of potential pollination vectors such as bees, birds and nectarivorous mammals. The systematic depositing of pollen on the tip of the style implies the plants have some strategy to avoid excessive self-pollination. |
https://en.wikipedia.org/wiki/Tanh-sinh%20quadrature | Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974.
It is especially applied where singularities or infinite derivatives exist at one or both endpoints.
The method uses hyperbolic functions in the change of variables
x = tanh(π/2 · sinh t)
to transform an integral on the interval x ∈ (−1, 1) to an integral on the entire real line t ∈ (−∞, ∞), the two integrals having the same value.
After this transformation, the integrand decays with a double exponential rate, and thus, this method is also known as the double exponential (DE) formula.
For a given step size h, the integral is approximated by the sum
Σ_k w_k f(x_k)
with the abscissas
x_k = tanh(π/2 · sinh(kh))
and the weights
w_k = (π/2 · h · cosh(kh)) / cosh²(π/2 · sinh(kh))
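A minimal sketch of the rule in Python, using the standard abscissas x_k = tanh(π/2 · sinh(kh)) and weights w_k = (π/2 · h · cosh(kh)) / cosh²(π/2 · sinh(kh)). The sum truncates k to a finite range and keeps h fixed, whereas production implementations halve h adaptively:

```python
import math

def tanh_sinh(f, h=0.1, n=40):
    """Approximate the integral of f over (-1, 1) by tanh-sinh quadrature:
    the trapezoidal rule applied on the transformed real line, with
    terms summed for k = -n..n."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * math.pi * h * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x)
    return total

# Integrand with infinite derivatives at both endpoints:
# the upper half of the unit circle, whose integral over (-1, 1) is pi/2
area = tanh_sinh(lambda x: math.sqrt(max(1.0 - x * x, 0.0)))
print(abs(area - math.pi / 2) < 1e-9)  # True
```

The `max(..., 0.0)` guard only absorbs floating-point rounding near the endpoints; the double-exponential decay of the weights makes the truncation at |k| = n harmless.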
Use
The Tanh-Sinh method is quite insensitive to endpoint behavior. Should singularities or infinite derivatives exist at one or both endpoints of the (−1, 1) interval, these are mapped to the (−∞,∞) endpoints of the transformed interval, forcing the endpoint singularities and infinite derivatives to vanish. This results in a great enhancement of the accuracy of the numerical integration procedure, which is typically performed by the Trapezoidal rule. In most cases, the transformed integrand displays a rapid roll-off (decay), enabling the numerical integrator to quickly achieve convergence.
Like Gaussian quadrature, Tanh-Sinh quadrature is well suited for arbitrary-precision integration, where an accuracy of hundreds or even thousands of digits is desired. The convergence is exponential (in the discretization sense) for sufficiently well-behaved integrands: doubling the number of evaluation points roughly doubles the number of correct digits.
However, Tanh-Sinh quadrature is not as efficient as Gaussian quadrature for smooth integrands; but unlike Gaussian quadrature, tends to work equally well with integrands having singularities or infinite derivatives at one or both endpoints of the integration interval as already noted. Furthermore, Tanh-Sinh quadrature can be implemented in a |
https://en.wikipedia.org/wiki/CT2 | CT2 is a cordless telephony standard that was used in the early 1990s to provide short-range proto-mobile phone service in some countries in Europe and in Hong Kong. It is considered the precursor to the more successful DECT system. CT2 was also referred to by its marketing name, Telepoint.
Overview
CT2 is a digital FDMA system that uses time-division duplexing technology to share carrier frequencies between handsets and base stations. Features of the system are:
Standardized on 864–868 MHz
500 frames/second (alternately base station and handset)
100 kHz carriers
32 kbit/s ADPCM voice channel compression
10 mW maximum power output
GFSK data encoding
Up to 100 metre (300 ft) range
Unlike DECT, CT2 was a voice-only system, though like any minimally-compressed voice system, users could deploy analog modems to transfer data; in the early 1990s, Apple Computer sold a CT2 modem called the PowerBop to make use of France's Bi-Bop CT2 network. Although CT2 is a microcellular system, fully capable of supporting handoff, unlike DECT it does not support "forward handoff", meaning that it has to drop its former radio link before establishing the subsequent one, leading to a sub-second dropout in the call during the handover.
Deployment and usage
CT2 was deployed in a number of countries, including Britain and France. In Britain, the Ferranti Zonephone system was the first public network to go live in 1989, and the much larger Rabbit network – backed by Hong Kong's Hutchison Telecommunications – operated from 1992 to 1993. In France, the Bi-Bop network ran from 1991 to 1997. In the Netherlands, Dutch incumbent PTT deployed a CT2-based network called Greenpoint from 1992 to 1999; in the first year it used the name and mascot Kermit but royalties proved prohibitively large and the mascot was dropped. The service continued under the brand name Greenhopper, with at one time over 60,000 subscribers. In Finland, the Pointer service was available for a short time in the 19 |
https://en.wikipedia.org/wiki/Computer-adaptive%20sequential%20testing | Computer-adaptive sequential testing (CAST) is another term for multistage testing. A CAST test is a type of computer-adaptive test or computerized classification test that uses pre-defined groups of items called testlets rather than operating at the level of individual items. CAST is a term introduced by psychometricians working for the National Board of Medical Examiners. In CAST, the testlets are referred to as panels. |
https://en.wikipedia.org/wiki/Stentato | Stentato or stentando (the past participle and gerund of the Italian verb stentare "to find it hard to do something, to have difficulty doing something") is a musical expression which means "labored, heavy, in a dragging manner, sluggish", or "strong and forced". It is abbreviated "sten." or "stent." and is, for example, the direction given for the last 17 bars of the Sanctus of Giuseppe Verdi's Requiem and also used by Ottorino Respighi in his composition Pini di Roma.
Sometimes the term Stentate is used as well (e.g. Marchesi Opus 15, No. 13). This has the same meaning as Stentato or stentando. |
https://en.wikipedia.org/wiki/Jay%20Hambidge | Jay Hambidge (1867–1924) was an American artist who formulated the theory of "dynamic symmetry", a system defining compositional rules, which was adopted by several notable American and Canadian artists in the early 20th century.
Early life and theory
He was a pupil at the Art Students' League in New York and of William Merritt Chase, and a thorough student of classical art. He conceived the idea that the study of arithmetic with the aid of geometrical designs was the foundation of the proportion and symmetry in Greek architecture, sculpture and ceramics. Careful examination and measurements of classical buildings in Greece, among them the Parthenon, the temple of Apollo at Bassæ, of Zeus at Olympia and Athenæ at Ægina, prompted him to formulate the theory of "dynamic symmetry" as demonstrated in his works Dynamic Symmetry: The Greek Vase (1920) and The Elements of Dynamic Symmetry (1926). It created a great deal of discussion. He found a disciple in Dr. Lacey D. Caskey, the author of Geometry of Greek Vases (1922).
In 1921, articles critical of Hambidge's theories were published by Edwin M. Blake in Art Bulletin, and by Rhys Carpenter in American Journal of Archaeology. Art historian Michael Quick says Blake and Carpenter "used different methods to expose the basic fallacy of Hambidge's use of his system on Greek art—that in its more complicated constructions, the system could describe any shape at all." In 1979 Lee Malone said Hambidge's theories were discredited, but that they had appealed to many American artists in the early 20th century because "he was teaching precisely the things that certain artists wanted to hear, especially those who had blazed so brief a trail in observing the American scene and now found themselves displaced by the force of contemporary European trends."
Dynamic symmetry
Dynamic symmetry is a proportioning system and natural design methodology described in Hambidge's books. The system uses dynamic rectangles, including root rectangle |
https://en.wikipedia.org/wiki/Brinkman%20number | The Brinkman number (Br) is a dimensionless number related to heat conduction from a wall to a flowing viscous fluid, commonly used in polymer processing. It is named after the Dutch mathematician and physicist Henri Brinkman. There are several definitions; one is
Br = μu² / (κ(Tw − T0)) = Pr · Ec
where
μ is the dynamic viscosity;
u is the flow velocity;
κ is the thermal conductivity;
T0 is the bulk fluid temperature;
Tw is the wall temperature;
Pr is the Prandtl number
Ec is the Eckert number
It is the ratio between heat produced by viscous dissipation and heat transported by molecular conduction, i.e., the ratio of viscous heat generation to external heating. The higher its value, the slower the conduction of heat produced by viscous dissipation and hence the larger the temperature rise.
In, for example, a screw extruder, the energy supplied to the polymer melt comes primarily from two sources:
viscous heat generated by shear between elements of the flowing liquid moving at different velocities;
direct heat conduction from the wall of the extruder.
The former is supplied by the motor turning the screw, the latter by heaters. The Brinkman number is a measure of the ratio of the two. |
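With the definition above, computing Br is a one-liner; the melt properties below are hypothetical round numbers, not data for any particular polymer:

```python
def brinkman_number(mu, u, kappa, t_wall, t_bulk):
    """Br = mu * u**2 / (kappa * (T_w - T_0)): viscous heat generation
    relative to heat conducted from the wall (SI units throughout)."""
    return mu * u ** 2 / (kappa * (t_wall - t_bulk))

# Hypothetical melt: mu = 1000 Pa·s, u = 0.1 m/s,
# kappa = 0.2 W/(m·K), wall at 250 °C, bulk at 200 °C
print(brinkman_number(1000.0, 0.1, 0.2, 250.0, 200.0))  # 1.0
```

Br = 1 here means viscous dissipation and wall conduction contribute equally, the balance point between the two heat sources discussed above.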
https://en.wikipedia.org/wiki/Email%20sender%20accreditation | Sender accreditation is a third-party process of verifying email senders and requiring them to adhere to certain accredited usage guidelines in exchange for being listed in a trusted listing that Internet Service Providers (ISPs) reference to allow certain emails to bypass email filters.
Overview
As email usage explodes, so does its abuse. In reaction to abuse such as spam and a more vicious, illegal variation known as phishing, most ISPs have enabled a block list feature to allow users to block specific email senders. Most ISPs have also partnered with spam filtering companies to improve email acceptance, handling, and delivery decisions. Ultimately, their goal is to block unwanted and suspicious types of emails that are either unrecognized or display characteristics of spam variants.
Accreditation Lists
These lists use similar technology as block lists to reinforce the original goal of spam filtering companies and ISPs - to improve the accuracy and relevance of email acceptance, handling, and delivery decisions. These lists are intended to help ensure email delivery from legitimate bulk and commercial email senders, and prevent them from being erroneously blocked as "spam".
See also
Certified email
Authenticated Email
Anti-spam techniques (email)
email filtering
Accreditation Resources
ISIPP
Email |
https://en.wikipedia.org/wiki/Verticillium%20wilt | Verticillium wilt is a wilt disease affecting over 350 species of eudicot plants. It is caused by six species of Verticillium fungi: V. dahliae, V. albo-atrum, V. longisporum, V. nubilum, V. theobromae and V. tricorpus. Many economically important plants are susceptible including cotton, tomatoes, potatoes, oilseed rape, eggplants, peppers and ornamentals, as well as others in natural vegetation communities. Many eudicot species and cultivars are resistant to the disease and all monocots, gymnosperms and ferns are immune.
Signs are superficially similar to Fusarium wilts. There are no fungicides characterized for the control of this disease, but soil fumigation with chloropicrin has been proven successful in dramatically reducing Verticillium wilt in diverse crops such as vegetables using plasticulture production methods, and in non-tarped potato production in North America. Additional strategies to manage the disease include crop rotation, the use of resistant varieties and deep plowing (to accelerate the decomposition of infected plant residue). In recent years, pre-plant soil fumigation with chloropicrin in non-tarped, raised beds has proven to be economically viable and beneficial for reducing wilt disease and increasing yield and quality of potato in North America. Soil fumigation is a specialized practice requiring special permits, equipment, and expertise, so qualified personnel must be employed.
Hosts and symptoms
Verticillium spp. attack a very large host range including more than 350 species of vegetables, fruit trees, flowers, field crops, and shade or forest trees. Most vegetable species have some susceptibility, so it has a very wide host range. A list of known hosts is at the bottom of this page.
The symptoms are similar to most wilts with a few specifics to Verticillium. Wilt itself is the most common symptom, with wilting of the stem and leaves occurring due to the blockage of the xylem vascular tissues and therefore reduced water and nutrient |
https://en.wikipedia.org/wiki/Ys%20II%3A%20Ancient%20Ys%20Vanished%20%E2%80%93%20The%20Final%20Chapter | is a 1988 action role-playing game developed by Nihon Falcom. It is a sequel to Ys I: Ancient Ys Vanished and takes place immediately following it. The game first released for the PC-8801 and PC-9801 and has seen several ports and remakes since.
Ys II was later adapted into the anime Ys II: Castle in the Heavens (1992). DotEmu has released the game on Android with the following localizations: English, French, German, Italian, Korean, Japanese and Chinese.
Versions
Like its predecessor, Ys II was ported to various other platforms following its first release, such as the FM-7, X1, MSX2, and Famicom.
It was released along with its predecessor as part of the enhanced compilation, Ys I & II, for the TurboGrafx-CD by Hudson Soft in 1989. For many years this was the only version of Ys II that received an official English release.
An MS-DOS remake called Ys II Special, developed by Mantra, was released exclusively for the South Korean market in 1994. It was a mash-up of Ys II with the anime Ys II: Castle in the Heavens (1992) along with a large amount of new content, including more secrets than any other version of the game. The game was a success in Korea, despite competition from the Korean RPG Astonishia Story that same year.
Years later, a third remake was released for Microsoft Windows-based PCs as Ys II Eternal, and later as Ys II Complete. Versions of the game have also been developed for mobile phone platforms.
Ys II was also remade in 2008 for the Nintendo DS. An English translation of this version was released by Atlus in North America, along with the DS version of Ys I on a single card, as Legacy of Ys: Books I & II in 2009. The Japanese version had been released as a single game when it came out in 2008.
Plot
Ys II picks up immediately where Ys I left off. Adol Christin is transported to the floating island of Ys, where he meets a young woman named Lilia. She takes Adol to her home, Lance Village. It is here that he will begin his quest to unravel the sec |
https://en.wikipedia.org/wiki/ADALINE | ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented this network. The network uses memistors. It was developed by professor Bernard Widrow and his doctoral student Ted Hoff at Stanford University in 1960. It is based on the perceptron. It consists of a weight, a bias and a summation function.
The difference between Adaline and the standard (McCulloch–Pitts) perceptron is in how they learn. Adaline unit weights are adjusted to match a teacher signal, before applying the Heaviside function (see figure), but the standard perceptron unit weights are adjusted to match the correct output, after applying the Heaviside function.
A multilayer network of ADALINE units is a MADALINE.
Definition
Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given the following variables:
x is the input vector
w is the weight vector
n is the number of inputs
θ is some constant
y is the output of the model,
then we find that the output is y = sum_{j=1}^{n} x_j w_j + θ. If we further assume that x_0 = 1 and w_0 = θ,
then the output further reduces to the dot product of x and w: y = sum_{j=0}^{n} x_j w_j.
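As an illustrative sketch (function names are not from the article), the weighted-sum output and the bias-folding assumption x_0 = 1, w_0 = θ can be written directly:

```python
# Sketch of the ADALINE output computation: a weighted sum of inputs plus a
# bias constant theta. The second variant folds the bias into the weight
# vector by prepending a constant input x_0 = 1 with weight w_0 = theta.

def adaline_output(x, w, theta):
    """Raw ADALINE output: dot product of inputs and weights, plus bias."""
    return sum(xj * wj for xj, wj in zip(x, w)) + theta

def adaline_output_folded(x, w, theta):
    """Same value, with the bias folded in as x_0 = 1, w_0 = theta."""
    x_aug = [1.0] + list(x)
    w_aug = [theta] + list(w)
    return sum(xj * wj for xj, wj in zip(x_aug, w_aug))

x = [0.5, -1.0, 2.0]
w = [0.2, 0.4, 0.1]
theta = 0.3
# Both formulations give the same output.
assert abs(adaline_output(x, w, theta) - adaline_output_folded(x, w, theta)) < 1e-12
```

The folded form is convenient because it lets the learning rule below treat the bias as just another weight.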
Learning rule
The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent.
Define the following notations:
η is the learning rate (some positive constant)
y is the output of the model
o is the target (desired) output
E = (o − y)² is the square of the error.
The LMS algorithm updates the weights by w ← w + η(o − y)x.
This update rule minimizes E, the square of the error, and is in fact the stochastic gradient descent update for linear regression.
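A minimal sketch of the LMS rule in Python (the toy learning task and all names are invented for illustration; the bias is handled by fixing one input at 1):

```python
# One LMS step: w <- w + eta * (target - y) * x, where y = w . x is the
# model output before any threshold is applied.

def lms_step(w, x, target, eta):
    y = sum(wj * xj for wj, xj in zip(w, x))
    err = target - y
    return [wj + eta * err * xj for wj, xj in zip(w, x)]

# Toy task: learn y = 2*x1 - x2 from noiseless samples. Each input vector
# starts with a constant 1 so that w[0] plays the role of the bias.
samples = [([1.0, a / 4.0, b / 4.0], 2 * (a / 4.0) - (b / 4.0))
           for a in range(-4, 5) for b in range(-4, 5)]
w = [0.0, 0.0, 0.0]
for _ in range(200):
    for x, t in samples:
        w = lms_step(w, x, t, eta=0.1)

# On noiseless, consistent data the weights converge to the exact solution.
assert abs(w[0]) < 1e-2 and abs(w[1] - 2.0) < 1e-2 and abs(w[2] + 1.0) < 1e-2
```

The gradient of E = (o − y)² with respect to w is −2(o − y)x, so the update is gradient descent on the squared error up to the constant factor absorbed into η.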
MADALINE
MADALINE (Many ADALINE) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture for classification that uses ADALINE units in its hidden and output layers, i.e. its activation function is the sign function. The three-layer network uses memistors. Three different train |
https://en.wikipedia.org/wiki/Mortality%20Medical%20Data%20System | The Mortality Medical Data System (MMDS) is used to automate the entry, classification, and retrieval of cause-of-death information reported on death certificates throughout the United States and in many other countries. The National Center for Health Statistics (NCHS) began the system's development in 1967.
The system has facilitated the standardization of mortality information within the United States, and ACME has become the de facto international standard for the automated selection of the underlying cause of death from multiple conditions listed on a death certificate.
System components
The MMDS system consists of the following components, and is itself part of the National Vital Statistics System.
MICAR
There are two Mortality Medical Indexing, Classification, and Retrieval components.
SuperMICAR automates the MICAR data entry process. Designed as an enhancement of the earlier PC-MICAR Data Entry program, SuperMICAR automatically encodes cause-of-death data into numeric entity reference numbers.
MICAR200 automates the multiple cause coding rules and assigns International Statistical Classification of Diseases and Related Health Problems (ICD) codes to each numeric entity reference number.
ACME
The Automated Classification of Medical Entities program automates the underlying cause-of-death coding rules. The input to ACME is the multiple cause-of-death codes (ICD) assigned to each entity (e.g., disease condition, accident, or injury) listed on cause-of-death certifications, preserving the location and order as reported by the certifier. ACME then applies the World Health Organization (WHO) rules to the ICD codes and selects an underlying cause of death. ACME has become the de facto international standard for the automated selection of the underlying cause of death.
TRANSAX
The TRANSlation of Axis program converts the ACME output data into fixed format and translates the data into a more desirable statistical form usin |
https://en.wikipedia.org/wiki/Xenomai | Xenomai is a real-time development software framework cooperating with the Linux kernel to provide pervasive, interface-agnostic, hard real-time computing support to user space application software seamlessly integrated into the Linux environment.
The Xenomai project was launched in August 2001. In 2003, it merged with the Real-Time Application Interface (RTAI) project to produce RTAI/fusion, a real-time free software platform for Linux on Xenomai's abstract real-time operating system (RTOS) core. Eventually, the RTAI/fusion effort became independent from RTAI in 2005 as the Xenomai project.
Xenomai is based on an abstract RTOS core, usable for building any kind of real-time interface, over a nucleus which exports a set of generic RTOS services. Any number of RTOS personalities called “skins” can then be built over the nucleus, providing their own specific interface to the applications, by using the services of a single generic core to implement it.
Xenomai vs. RTAI
Many differences exist between Xenomai and RTAI, though both projects share a few ideas and support the RTDM layer. The major differences derive from the goals the projects aim for, and from their respective implementation. While RTAI is focused on lowest technically feasible latencies, Xenomai also considers clean extensibility (RTOS skins), portability, and maintainability as very important goals. Xenomai's path towards Ingo Molnár's PREEMPT_RT support is another major difference compared to RTAI's objectives.
See also
Adaptive Domain Environment for Operating Systems (Adeos)
RTAI |
https://en.wikipedia.org/wiki/Suillus%20bovinus | Suillus bovinus, also known as the Jersey cow mushroom or bovine bolete, is a pored mushroom of the genus Suillus in the family Suillaceae. A common fungus native to Europe and Asia, it has been introduced to North America and Australia. It was initially described as Boletus bovinus by Carl Linnaeus in 1753, and given its current binomial name by Henri François Anne de Roussel in 1796. It is an edible mushroom, though not highly regarded.
The fungus grows in coniferous forests in its native range, and pine plantations in countries where it has become naturalised. It forms symbiotic ectomycorrhizal associations with living trees by enveloping the tree's underground roots with sheaths of fungal tissue, and is sometimes parasitised by the related mushroom Gomphidius roseus. Suillus bovinus produces spore-bearing fruit bodies, often in large numbers, above ground. The mushroom has a convex grey-yellow or ochre cap reaching up to in diameter, which flattens with age. Like other boletes, it has tubes extending downward from the underside of the cap, rather than gills; spores escape at maturity through the tube openings, or pores. The pore surface is yellow. The stipe, more slender than those of other Suillus boletes, lacks a ring.
Taxonomy and naming
Suillus bovinus was one of the many species first described in 1753 by the "father of taxonomy" Carl Linnaeus, who, in the second volume of his Species Plantarum, gave it the name Boletus bovinus. The specific epithet is derived from the Latin word bos, meaning "cattle". The fungus was reclassified in (and became the type species of) the genus Suillus by French naturalist Henri François Anne de Roussel in 1796. Suillus is an ancient term for fungi, and is derived from the word "swine". Lucien Quélet classified it as Viscipellis bovina in 1886.
In works published before 1987, the species was written fully as Suillus bovinus (L.:Fr.) Kuntze, as the description by Linnaeus had been name sanctioned in 1821 by the "father |
https://en.wikipedia.org/wiki/Dynamic/Dialup%20Users%20List | A Dial-up/Dynamic User List (DUL) is a type of DNSBL which contains the IP addresses an ISP assigns to its customer on a temporary basis, often using DHCP or similar protocols. Dynamically assigned IP addresses are contrasted with static IP addresses which do not change once they have been allocated by the service provider.
DULs serve several purposes. Their primary function is to assist an ISP in enforcing its Acceptable Use Policy, many of which prohibit customers from setting up an email server; customers are expected to use the email facilities of the service provider. This use of a DUL is especially helpful in curtailing abuse when a customer's computer has been converted into a zombie computer and is distributing email without the knowledge of the computer's owner. A second major use involves receivers who do not wish to accept email from computers with dynamically assigned IP addresses; they use DULs to enforce this policy. Receivers adopt such policies because computers at dynamically assigned IP addresses are so often a source of spam.
The first DUL was created by Gordon Fecyk in 1998. It quickly became quite popular because it addressed a specific tactic popular with spammers at the time. The DUL subsequently was absorbed by Mail Abuse Prevention System (MAPS) in 1999. When MAPS was no longer a free service, other DNSBLs such as Dynablock, Not Just Another Bogus List (NJABL), and Spam and Open Relay Blocking System (SORBS) began providing lists of dynamically assigned IP addresses. |
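Mechanically, a DUL is consulted like any other DNSBL: the client's IPv4 address is reversed octet-by-octet and prepended to the list's DNS zone, and an A record in the answer (conventionally in 127.0.0.0/8) indicates a listing. The sketch below only builds the query name; the zone "dul.example.org" is a placeholder, not a real list:

```python
# Build the DNS name a mail server would look up to check an IPv4 address
# against a DNSBL/DUL zone. No network traffic is performed here.

def dnsbl_query_name(ip, zone="dul.example.org"):
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError("not a dotted-quad IPv4 address: %r" % ip)
    # Reverse the octets and append the list's zone.
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.25"))  # 25.2.0.192.dul.example.org
```

A mail server would then resolve this name; NXDOMAIN means the address is not listed, while an answer means the connecting host is on a dynamic/dial-up range.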
https://en.wikipedia.org/wiki/NASU%20Institute%20of%20Cryobiology%20and%20Cryomedicine%20Issues | The Institute for Problems of Cryobiology and Cryomedicine in Kharkiv is one of the institutes of the National Academy of Science of Ukraine, and is the largest institute devoted to cryobiology research in the world.
Background
Established in 1972, the focus of the research is on cryoinjury, cryosurgery, cryopreservation, lyophilization and hypothermia. Since 1985 the Institute has published the open access peer-reviewed scientific journal Problems of Cryobiology and Cryomedicine.
See also
Cryobiology
National Academy of Science of Ukraine |
https://en.wikipedia.org/wiki/Internet%20safety | Internet safety, also known as online safety, cyber safety and electronic safety (e-safety), is the act of maximizing a user's awareness of personal safety and security risks to private information and property associated with using the Internet, and the self-protection from computer crime.
As the number of internet users continues to grow worldwide, governments and organizations have expressed concerns about the safety of children, teenagers, and the elderly using the Internet. Over 45% of internet users report having experienced some form of cyber-harassment. Safer Internet Day is celebrated worldwide in February to raise awareness about internet safety. In the UK, the Get Safe Online campaign has received sponsorship from the government agency Serious Organised Crime Agency (SOCA) and major Internet companies such as Microsoft and eBay.
Information security
Sensitive information, such as personal details, identity documents, and passwords, is often associated with personal property and privacy and may present security concerns if leaked. Unauthorized access to and use of private information may result in consequences such as identity theft and theft of property. Common causes of information security breaches include:
Phishing
Phishing is a type of scam in which scammers disguise themselves as a trustworthy source in an attempt to obtain private information such as passwords and credit card information through the internet. These fake websites are often designed to look identical to their legitimate counterparts to avoid suspicion from the user. Typically, attackers send emails that appear to come from a trusted third party, requesting personal information, and use any response as an entry point for a broader attack.
Malware
Malware, particularly spyware, is malicious software designed to collect and transmit private information, such as passwords, without the user's consent or knowledge. It is often distributed through e-mail, software, and files from unofficial locations. Malware
https://en.wikipedia.org/wiki/Andrographis%20paniculata | Andrographis paniculata, commonly known as creat or green chiretta, is an annual herbaceous plant in the family Acanthaceae, native to India and Sri Lanka.
It is widely cultivated in Southern and Southeastern Asia, where it has traditionally been used as a treatment for bacterial infections and some diseases. Mostly the leaves and roots are used for such purposes, though in some cases the whole plant is used.
Description
The plant grows as an erect herb to a height of in moist, shady places. The slender stem is dark green, square in cross-section with longitudinal furrows and wings along the angles. The lance-shaped leaves have hairless blades measuring up to long by . The small flowers are pink, solitary, arranged in lax spreading racemes or panicles. The fruit is a capsule around long and a few millimeters wide. It contains many yellow-brown seeds. The seeds are subquadrate, rugose and glabrous. The flowering time is September to December.
Distribution
The species is distributed in tropical Asian countries, often in isolated patches. It can be found in a variety of habitats, such as plains, hillsides, coastlines, and disturbed and cultivated areas such as roadsides and farms. Native populations of A. paniculata are spread throughout south India and Sri Lanka which perhaps represent the center of origin and diversity of the species. The herb is an introduced species in northern parts of India, Java, Malaysia, Indonesia, the West Indies, and elsewhere in the Americas. The species also occurs in the Philippines, Hong Kong, Thailand, Brunei, Singapore, and other parts of Asia where it may or may not be native. The plant is cultivated in many areas, as well.
Unlike other species of the genus, A. paniculata is of common occurrence in most places in India, including the plains and hilly areas up to , which accounts for its wide use.
In India the major source of plant is procured from its wild habitat. The plant is categorised as Low Risk or of Least Concern by the |
https://en.wikipedia.org/wiki/List%20of%20antioxidants%20in%20food | This is a list of antioxidants naturally occurring in food. Vitamin C and vitamin E – which are ubiquitous among raw plant foods – are confirmed as dietary antioxidants, whereas vitamin A becomes an antioxidant following metabolism of provitamin A beta-carotene and cryptoxanthin. Most food compounds listed as antioxidants – such as polyphenols common in colorful, edible plants – have antioxidant activity only in vitro, as their fate in vivo is to be rapidly metabolized and excreted, and the in vivo properties of their metabolites remain poorly understood. For antioxidants added to food to preserve them, see butylated hydroxyanisole and butylated hydroxytoluene.
Regulatory guidance
In the following discussion, the term "antioxidant" refers mainly to non-nutrient compounds in foods, such as polyphenols, which have antioxidant capacity in vitro and so provide an artificial index of antioxidant strength – the ORAC measurement. Other than for dietary antioxidant vitamins – vitamin A, vitamin C and vitamin E – no food compounds have been proved to be antioxidants in vivo. Accordingly, regulatory agencies like the Food and Drug Administration of the United States and the European Food Safety Authority (EFSA) have published guidance disallowing food product labels to claim an inferred antioxidant benefit when no such physiological evidence exists.
Physiological context
Despite the above discussion implying that ORAC-rich foods with polyphenols may provide antioxidant benefits when in the diet, there remains no physiological evidence that any polyphenols have such actions or that ORAC has any relevance in the human body.
On the contrary, research indicates that although polyphenols are antioxidants in vitro, antioxidant effects in vivo are probably negligible or absent. By non-antioxidant mechanisms still undefined, polyphenols may affect mechanisms of cardiovascular disease or cancer.
The increase in antioxidant capacity of blood seen after the consumption of polyphenol |
https://en.wikipedia.org/wiki/3-Amino-1%2C2%2C4-triazole | 3-Amino-1,2,4-triazole (3-AT) is a heterocyclic organic compound that consists of a 1,2,4-triazole substituted with an amino group.
3-AT is a competitive inhibitor of the product of the HIS3 gene, imidazoleglycerol-phosphate dehydratase, an enzyme catalyzing the sixth step of histidine production.
3-AT is also a nonselective systemic triazole herbicide used on nonfood croplands to control annual grasses and broadleaf and aquatic weeds. It is not used on food crops because of its carcinogenic properties. As an herbicide, it is known as aminotriazole, amitrole or amitrol.
Amitrol was included in a biocide ban proposed by the Swedish Chemicals Agency and approved by the European Parliament on January 13, 2009.
Applications in microbiology
By applying 3-AT to a yeast cell culture which is dependent upon a plasmid containing HIS3 to produce histidine (i.e. its own HIS3 analogue is not present or nonfunctional), an increased level of HIS3 expression is required in order for the yeast cell to survive. This has proved useful in various two-hybrid systems, where a high-affinity binding between two proteins (i.e., higher expression of the HIS3 gene) will allow the yeast cell to survive in media containing higher concentrations of 3-AT. This selection process is performed using selective media containing no histidine.
1959 cranberry contamination
On November 9, 1959, the secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, Ocean Spray recovered by expanding the market for cranberry juice, which, although widely available for sale, was before then not popular. This ensured cranberry growers would not have to rely mostly on Thanksgiving and Christmas for sales, which was the case until the notorious 1959 incident. |
https://en.wikipedia.org/wiki/HIS-selective%20medium | HIS-selective medium is a type of cell culture medium that lacks the amino acid histidine. It can be used with organisms reliant on the expression of a gene encoding proteins involved in histidine biosynthesis in order to survive. Only cells expressing such genes (such as hisB in Escherichia coli and HIS3 in Saccharomyces cerevisiae) will survive on these media.
https://en.wikipedia.org/wiki/Passport%20Carrier%20Release | Passport Carrier Release (PCR) is a version of the Passport Switch (now Multiservice Switch) software designed to run in telecommunications carrier environments. It was formerly developed by Nortel. After the sale of most of Nortel's assets in 2009, the Passport software is still used in several products of Alcatel-Lucent, Ericsson, and Kapsch.
The Passport products are now part of Ericsson's PPX product line under IP Networking.
Technologies
Internally, PCR is largely built of custom applications on top of a VxWorks kernel. A benefit of the software is that it is completely modular and can load components to Passport control processors (CPs) and function processors (FPs) on an as-needed basis. FPs (also known as line cards) each run their own instance of the operating system, and as such can be rebooted without taking the entire switch out of service, for example after a software failure. Entire cards can also be replaced while the system is hot, minimizing downtime due to hardware failure.
OAM and provisioning
PCR provides a custom OAM interface that is highly object oriented, reflecting the modular nature of the operating system. The core component when provisioning IP networks is the Virtual Router (although a Passport switch can act in more than IP environments). Interfaces are provisioned as "application" objects, which are in turn connected to Protocol Ports on a Virtual Router. The behaviour of the router is thus defined by which objects are instantiated and how they are related. Recent releases of PCR show a change in this paradigm: MPLS, for example, now resides on an object called simply a Router, with built-in IP features. This provisioning approach is more similar to that of Cisco or Juniper Networks.
Drawback |
https://en.wikipedia.org/wiki/Ys%20III%3A%20Wanderers%20from%20Ys | is a 1989 action role-playing game developed by Nihon Falcom. It is the third game in the Ys series.
Ys III was initially released for the PC-8801 and PC-9801 in 1989 under the title Wanderers from Ys, and versions for the MSX2 and X68000 soon followed. In 1991, a number of console ports were produced: versions for the TurboGrafx-CD, Famicom, Super NES, and Sega Genesis. A remake for the PlayStation 2 was released by Taito in 2005.
The TurboGrafx-CD, SNES, and Genesis ports, as well as the 2005 remake Ys: The Oath in Felghana, received releases in English. In addition, the Famicom and MSX2 ports have been fan-translated.
Plot
The opening scene informs the player that it has been three years since the events of Ys I and II. Adol Christin and his friend Dogi are on a journey. Passing through a town, they find a gypsy caravan and Dogi has his fortune told. The fortune teller's crystal ball explodes, and both Adol and Dogi decide to go to Dogi's hometown of Redmont. On the way to Redmont, the pair chance upon a wildcat attacking a young woman; driving it off, they save Dogi's childhood friend, Elena Stoddart. Upon arriving, they learn that the townspeople are being threatened by men from nearby Valestein Castle. Always ready for adventure, Adol decides to assume the task of helping them out.
Travelling to Tigray Quarry, Adol discovers that there has been a cave in, and monsters have taken over the quarry. Adol travels deep inside the quarry to rescue Edgar, the mayor of Redmont. After defeating the boss of the Quarry, Ellefale, and claiming the first statue, Adol finds Chester (brother of Elena) threatening Edgar. Adol pulls Edgar back out to the entrance of the quarry and goes back to Redmont. Meeting Elena (formally this time), Adol learns that some statues are being sought by Lord MacGuire, the King of Felghana province (where the game takes place). Another statue is being sought in the Illsburn Ruins, so Adol travels there to retrieve it. Another townsperson, Father Pierr |
https://en.wikipedia.org/wiki/Symmetrization | In mathematics, symmetrization is a process that converts any function in n variables to a symmetric function in n variables.
Similarly, antisymmetrization converts any function in n variables into an antisymmetric function.
Two variables
Let S be a set and A be an additive abelian group. A map α : S × S → A is called a symmetric map if α(s, t) = α(t, s) for all s, t ∈ S.
It is called an antisymmetric map if instead α(s, t) = −α(t, s) for all s, t ∈ S.
The symmetrization of a map α : S × S → A is the map (x, y) ↦ α(x, y) + α(y, x).
Similarly, the antisymmetrization or skew-symmetrization of a map α is the map (x, y) ↦ α(x, y) − α(y, x).
The sum of the symmetrization and the antisymmetrization of a map α is 2α.
Thus, away from 2 (meaning if 2 is invertible, as it is for the real numbers), one can divide by 2 and express every function as a sum of a symmetric function and an anti-symmetric function.
The symmetrization of a symmetric map is its double, while the symmetrization of an alternating map is zero; similarly, the antisymmetrization of a symmetric map is zero, while the antisymmetrization of an anti-symmetric map is its double.
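As a quick illustrative sketch (Python numbers stand in for an additive abelian group; the sample map is invented), the definitions and the doubling properties above can be checked directly:

```python
# Two-variable symmetrization and antisymmetrization of a map alpha.

def symmetrize(alpha):
    return lambda s, t: alpha(s, t) + alpha(t, s)

def antisymmetrize(alpha):
    return lambda s, t: alpha(s, t) - alpha(t, s)

alpha = lambda s, t: s * s * t          # neither symmetric nor antisymmetric
sym, asym = symmetrize(alpha), antisymmetrize(alpha)

# Check the defining identities and the decomposition sym + asym = 2*alpha.
for s, t in [(1, 2), (3, -1), (0, 5)]:
    assert sym(s, t) == sym(t, s)            # symmetric
    assert asym(s, t) == -asym(t, s)         # antisymmetric
    assert sym(s, t) + asym(s, t) == 2 * alpha(s, t)
```

Dividing the last identity by 2, when possible, recovers the decomposition of alpha into symmetric and antisymmetric parts.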
Bilinear forms
The symmetrization and antisymmetrization of a bilinear map are bilinear; thus away from 2, every bilinear form is a sum of a symmetric form and a skew-symmetric form, and there is no difference between a symmetric form and a quadratic form.
At 2, not every form can be decomposed into a symmetric form and a skew-symmetric form. For instance, over the integers, the associated symmetric form (over the rationals) may take half-integer values, while over Z/2Z, a function is skew-symmetric if and only if it is symmetric (as 1 = −1).
This leads to the notion of ε-quadratic forms and ε-symmetric forms.
Representation theory
In terms of representation theory:
exchanging variables gives a representation of the symmetric group on the space of functions in two variables,
the symmetric and antisymmetric functions are the subrepresentations corresponding to the trivial representation and the sign representation, and
symmetrization and antisymmetrization map a function into these subrepresentations – if one divides by 2, these yield projection maps.
As the symmetric grou |
https://en.wikipedia.org/wiki/Reports%20of%20Streptococcus%20mitis%20on%20the%20Moon | As part of the Apollo 12 mission, the camera from the Surveyor 3 probe was brought back from the Moon to Earth. On analyzing the camera it was found that the common bacterium Streptococcus mitis was alive on the camera. This was attributed by NASA to the camera not being sterilized on Earth prior to its launch two and a half years previously. However, later study showed that the scientists analysing the camera on return to Earth used procedures that were inadequate to prevent recontamination after return to Earth, for instance with their arms exposed, not covering their entire bodies as modern scientists would do. There may also have been possibilities for contamination during the return mission as the camera was returned in a porous bag rather than the airtight containers used for lunar sample return. As a result, the source of the contamination remains controversial.
History
Since the Apollo Program, there has been at least one independent investigation into the validity of the NASA claim. Leonard D. Jaffe, a Surveyor program scientist and custodian of the Surveyor 3 parts brought back from the Moon, stated in a letter to the Planetary Society that a member of his staff reported that a "breach of sterile procedure" took place at just the right time to produce a false positive result. One of the implements being used to scrape samples off the Surveyor parts was laid down on a non-sterile laboratory bench, and then was used to collect surface samples for culturing. Jaffe wrote, "It is, therefore, quite possible that the microorganisms were transferred to the camera after its return to Earth, and that they had never been to the Moon." In 2007, NASA funded an archival study that sought the film of the camera-body microbial sampling, to confirm the report of a breach in sterile technique.
The bacterial test is now non-repeatable because the parts were subsequently taken out of quarantine and fully re-exposed to terrestrial conditions (the Surveyor 3 camera is now |
https://en.wikipedia.org/wiki/GPS-aided%20GEO%20augmented%20navigation | The GPS-aided GEO augmented navigation (GAGAN) is an implementation of a regional satellite-based augmentation system (SBAS) by the Government of India. It is a system to improve the accuracy of a GNSS receiver by providing reference signals. The Airports Authority of India (AAI)'s efforts towards implementation of operational SBAS can be viewed as the first step towards introduction of modern communication, navigation and surveillance / air traffic management system over the Indian airspace.
The project has established fifteen Indian reference stations, three Indian navigation land uplink stations, and three Indian mission control centres, along with all associated software and communication links. It helps pilots navigate in Indian airspace with improved accuracy, which is particularly useful when landing aircraft in marginal weather and on difficult approaches such as those at Mangalore International and Kushok Bakula Rimpochee airports.
Implementation
The project was created in three phases through 2008 by the Airports Authority of India with the help of the Indian Space Research Organisation's (ISRO) technology and space support. The goal is to provide navigation system for all phases of flight over the Indian airspace and in the adjoining area. It is applicable to safety-to-life operations, and meets the performance requirements of international civil aviation regulatory bodies.
The space component became available after the launch of the GAGAN payload on the GSAT-8 communication satellite, which was successfully launched. This payload was also part of the GSAT-4 satellite that was lost when the Geosynchronous Satellite Launch Vehicle (GSLV) failed during launch in April 2010. A final system acceptance test was conducted during June 2012 followed by system certification during July 2013.
All aircraft being registered in India after 1 July 2021 are mandated to be outfitted with GAGAN equipment.
Technology
To begin implementing a satellite-based augmentati |
https://en.wikipedia.org/wiki/Robust%20optimization | Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. It is related to, but often distinguished from, probabilistic optimization methods such as chance-constrained optimization.
History
The origins of robust optimization date back to the establishment of modern decision theory in the 1950s and the use of worst case analysis and Wald's maximin model as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s with parallel developments in several scientific and technological fields. Over the years, it has been applied in statistics, but also in operations research, electrical engineering, control theory, finance, portfolio management, logistics, manufacturing engineering, chemical engineering, medicine, and computer science. In engineering problems, these formulations often take the name of "Robust Design Optimization" (RDO) or "Reliability Based Design Optimization" (RBDO).
Example 1
Consider the following linear programming problem: maximize 3x + 2y over (x, y), subject to x, y ≥ 0 and cx + dy ≤ 10 for all (c, d) ∈ P,
where P is a given subset of R².
What makes this a 'robust optimization' problem is the "for all (c, d) ∈ P" clause in the constraints. Its implication is that for a pair (x, y) to be admissible, the constraint cx + dy ≤ 10 must be satisfied by the worst (c, d) pertaining to P, namely the pair (c, d) ∈ P that maximizes the value of cx + dy for the given value of (x, y).
If the parameter space P is finite (consisting of finitely many elements), then this robust optimization problem itself is a linear programming problem: for each (c, d) ∈ P there is a linear constraint cx + dy ≤ 10.
If P is not a finite set, then this problem is a linear semi-infinite programming problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
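To make the finite case concrete, here is a small pure-Python sketch (the particular uncertainty set P is invented for illustration). With finitely many (c, d) pairs the robust problem is an ordinary two-variable LP, whose optimum lies at a vertex of the feasible region, so enumerating intersections of constraint boundaries suffices:

```python
# Robust LP with a finite uncertainty set P: maximize 3x + 2y subject to
# x, y >= 0 and c*x + d*y <= 10 for every (c, d) in P. Each element of P
# becomes one ordinary linear constraint; we then solve the tiny LP by
# enumerating intersections of pairs of constraint lines (vertex method).

from itertools import combinations

P = [(1.0, 1.0), (2.0, 1.0), (1.0, 3.0)]   # hypothetical uncertainty set
# Constraints as (a, b, rhs) meaning a*x + b*y <= rhs; the last two encode
# x >= 0 and y >= 0.
cons = [(c, d, 10.0) for c, d in P] + [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= r + eps for a, b, r in cons)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel lines: no unique intersection
    x = (r1 * b2 - r2 * b1) / det     # Cramer's rule
    y = (a1 * r2 - a2 * r1) / det
    if feasible(x, y):
        val = 3 * x + 2 * y
        if best is None or val > best[0]:
            best = (val, x, y)

print(best)  # (16.0, 4.0, 2.0): optimum 16 at the vertex (4, 2)
```

Here the binding constraints at the optimum come from the worst-case pairs (2, 1) and (1, 3); the remaining constraint is slack, illustrating how only the worst (c, d) matters for admissibility.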
Classification
There are a number of classification criteri |
https://en.wikipedia.org/wiki/Datapath | A data path is a collection of functional units that perform data processing operations, such as arithmetic logic units (ALUs) or multipliers, together with registers and buses. Along with the control unit, it composes the central processing unit (CPU). A larger data path can be made by joining more than one data path using multiplexers.
A data path is the ALU, the set of registers, and the CPU's internal bus(es) that allow data to flow between them.
The simplest design for a CPU uses one common internal bus.
Efficient addition requires a slightly more complicated three-internal-bus structure.
Many relatively simple CPUs have a 2-read, 1-write register file connected to the two inputs and one output of the ALU.
During the late 1990s, there was growing research in the area of reconfigurable data paths—data paths that may be re-purposed at run-time using programmable fabric—as such designs may allow for more efficient processing as well as substantial power savings.
Finite state machine with data path
A finite-state machine with data path (FSMD) is a mathematical abstraction which combines a finite-state machine, which controls the program flow, with a data path. It can be used to design digital logic or computer programs.
FSMDs are essentially sequential programs in which statements have been scheduled into states, thus resulting in more complex state diagrams. Here, a program is converted into a complex state diagram in which states and arcs may include arithmetic expressions, and those expressions may use external inputs and outputs as well as variables. The FSMD level of abstraction is often referred to as the register-transfer level.
FSMs do not use variables or arithmetic operations/conditions, thus FSMDs are more powerful than FSMs. An FSMD is equivalent to a Turing machine in expressiveness. |
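The FSMD idea can be illustrated with a toy example (not from the article): a subtractive GCD machine, a classic textbook FSMD. The control FSM decides state transitions from a status signal, while the data path holds the registers and performs the register-transfer operations the FSM schedules. Python stands in for pseudocode, and positive integer inputs are assumed:

```python
# Toy FSMD: Euclid's subtractive GCD. The control FSM ("CHECK"/"SUB"/"DONE")
# sees only status signals derived from the data path; the data path holds
# registers a and b and executes the scheduled subtractions.

def gcd_fsmd(x, y):
    a, b = x, y                  # data-path registers (positive integers)
    state = "CHECK"              # control FSM state
    while state != "DONE":
        if state == "CHECK":
            # FSM branches on the status signal (a != b), not on the data.
            state = "SUB" if a != b else "DONE"
        elif state == "SUB":
            # Register-transfer operations scheduled into this state.
            if a > b:
                a = a - b
            else:
                b = b - a
            state = "CHECK"
    return a

print(gcd_fsmd(48, 36))  # 12
```

Each FSM state corresponds to one step of the scheduled sequential program, which is exactly the register-transfer-level view described above.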
https://en.wikipedia.org/wiki/Loomis%E2%80%93Whitney%20inequality | In mathematics, the Loomis–Whitney inequality is a result in geometry which, in its simplest form, allows one to estimate the "size" of a d-dimensional set by the sizes of its (d − 1)-dimensional projections. The inequality has applications in incidence geometry, the study of so-called "lattice animals", and other areas.
The result is named after the American mathematicians Lynn Harold Loomis and Hassler Whitney, and was published in 1949.
Statement of the inequality
Fix a dimension d ≥ 2 and consider the projections
π_j : R^d → R^(d−1), π_j : x = (x_1, …, x_d) ↦ (x_1, …, x_(j−1), x_(j+1), …, x_d).
For each 1 ≤ j ≤ d, let g_j : R^(d−1) → [0, +∞) be a measurable function.
Then the Loomis–Whitney inequality holds:
∫_(R^d) ∏_(j=1)^d g_j(π_j(x))^(1/(d−1)) dx ≤ ∏_(j=1)^d ( ∫_(R^(d−1)) g_j )^(1/(d−1)).
Equivalently, taking f_j = g_j^(1/(d−1)), we have
∫_(R^d) ∏_(j=1)^d f_j(π_j(x)) dx ≤ ∏_(j=1)^d ‖f_j‖_(L^(d−1)(R^(d−1))),
implying, for a measurable set E ⊆ R^d (take each f_j to be the indicator function of π_j(E)), that |E|^(d−1) ≤ ∏_(j=1)^d |π_j(E)|.
A special case
The Loomis–Whitney inequality can be used to relate the Lebesgue measure of a subset of Euclidean space to its "average widths" in the coordinate directions. This is in fact the original version published by Loomis and Whitney in 1949 (the above is a generalization).
Let E be some measurable subset of R^d and let
f_j = 1_(π_j(E))
be the indicator function of the projection of E onto the jth coordinate hyperplane. It follows that for any point x in E,
∏_(j=1)^d f_j(π_j(x)) = 1.
Hence, by the Loomis–Whitney inequality,
|E| ≤ ∏_(j=1)^d |π_j(E)|^(1/(d−1)),
and hence
|E| ≥ ∏_(j=1)^d ( |E| / |π_j(E)| ).
The quantity
w_j = |E| / |π_j(E)|
can be thought of as the average width of E in the jth coordinate direction. This interpretation of the Loomis–Whitney inequality also holds if we consider a finite subset of Euclidean space and replace Lebesgue measure by counting measure.
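Under counting measure, the inequality can be checked directly on small finite sets. The following sketch (the set choices are illustrative) verifies the d = 3 case, |S|² ≤ |π₁(S)|·|π₂(S)|·|π₃(S)|, and shows that an axis-aligned box attains equality:

```python
from itertools import product

def lw_check(points):
    # Loomis–Whitney with counting measure in d = 3:
    # |S|^(d-1) <= product of the sizes of the (d-1)-dimensional projections.
    d = 3
    projections = [{p[:j] + p[j + 1:] for p in points} for j in range(d)]
    lhs = len(points) ** (d - 1)
    rhs = 1
    for pr in projections:
        rhs *= len(pr)
    return lhs, rhs

# An axis-aligned 2x3x4 box is an equality case: 24^2 = 12 * 8 * 6.
box = set(product(range(2), range(3), range(4)))
lhs, rhs = lw_check(box)
print(lhs, rhs)  # → 576 576
```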
Corollary. Since 2|π_j(E)| ≤ |∂E| for each j, the Loomis–Whitney inequality gives a loose isoperimetric inequality:
|E|^(d−1) ≤ ( |∂E| / 2 )^d.
Iterating the theorem yields, for each 1 ≤ k ≤ d − 1, the more general inequality
|E|^C(d−1, k−1) ≤ ∏_(|A|=k) |π_A(E)|,
where A enumerates over all k-element subsets of {1, …, d} and π_A is the projection of R^d to the corresponding k-dimensional coordinate subspace. |
Generalizations
The Loomis–Whitney inequality is a special case of the Brascamp–Lieb inequality, in which the projections πj above are replaced by more general linear maps, not necessarily all mapping onto spaces of the same dimension. |
https://en.wikipedia.org/wiki/Plantar%20venous%20arch | The four plantar metatarsal veins run backward in the metatarsal spaces, communicate, by means of perforating veins, with the veins on the dorsum of the foot, and unite to form the plantar venous arch (or deep plantar venous arch) which lies alongside the plantar arterial arch.
From the deep plantar venous arch the medial and lateral plantar veins run backward close to the corresponding arteries and, after communicating with the great and small saphenous veins, unite behind the medial malleolus to form the posterior tibial veins. |
https://en.wikipedia.org/wiki/Heterotopia%20%28medicine%29 | In medicine, heterotopia is the presence of a particular tissue type at a non-physiological site, but usually co-existing with original tissue in its correct anatomical location. In other words, it implies ectopic tissue, in addition to retention of the original tissue type.
Examples
In neuropathology, for example, gray matter heterotopia is the presence of gray matter within the cerebral white matter or ventricles. Heterotopia within the brain is often divided into three groups: subependymal heterotopia, focal cortical heterotopia and band heterotopia. Another example is a Meckel's diverticulum, which may contain heterotopic gastric or pancreatic tissue.
In biology specifically, heterotopy refers to an altered location of trait expression. In her book Developmental Plasticity and Evolution, Mary Jane West-Eberhard uses cover art of the sulphur-crested cockatoo and asks on the back cover: "Did long crest [head] feathers evolve by gradual modification of ancestral head feathers? Or are they descendants of wing feathers, developmentally transplanted onto the head?" This question sets the tone for the rest of her book, which goes into depth about developmental novelties and their relation to evolution. Heterotopy is a somewhat obscure but well-demonstrated example of how developmental change can lead to novel forms. The central concept is that a feature seen in one area of an organism has had its location changed in evolutionary lineages.
Heterotopy in molecular biology
Heterotopy in molecular biology is the name given to the expression or placement of a gene product from what is typically found in one area to another area. It can also be further expanded to a subtle form of exaptation where a gene product used for one underlying purpose in a diverse group of organisms can re-emerge repeatedly to produce seemingly paraphyletic distributions of traits. But actual phylogenetic analysis supports a monophyletic model as does evolutionary theory. Heterotopy is used to ex |
https://en.wikipedia.org/wiki/Procedural%20reasoning%20system | In artificial intelligence, a procedural reasoning system (PRS) is a framework for constructing real-time reasoning systems that can perform complex tasks in dynamic environments. It is based on the notion of a rational agent or intelligent agent using the belief–desire–intention software model.
A user application is defined, and provided to a PRS system, as a set of knowledge areas. Each knowledge area is a piece of procedural knowledge that specifies how to do something, e.g., how to navigate down a corridor, or how to plan a path (in contrast with robotic architectures where the programmer just provides a model of what the states of the world are and how the agent's primitive actions affect them). Such a program, together with a PRS interpreter, is used to control the agent.
The interpreter is responsible for maintaining beliefs about the world state, choosing which goals to attempt to achieve next, and choosing which knowledge area to apply in the current situation. How exactly these operations are performed might depend on domain-specific meta-level knowledge areas. Unlike traditional AI planning systems that generate a complete plan at the beginning, and replan if unexpected things happen, PRS interleaves planning and doing actions in the world. At any point, the system might only have a partially specified plan for the future.
PRS is based on the BDI or belief–desire–intention framework for intelligent agents. Beliefs consist of what the agent believes to be true about the current state of the world, desires consist of the agent's goals, and intentions consist of the agent's current plans for achieving those goals. Furthermore, each of these three components is typically explicitly represented somewhere within the memory of the PRS agent at runtime, which is in contrast to purely reactive systems, such as the subsumption architecture.
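The interpreter loop described above can be sketched in a few lines; this is a hypothetical illustration of the BDI-style cycle, not SRI's implementation, and the knowledge-area format, names, and selection rule are all assumptions:

```python
# Hypothetical sketch of a PRS-style interpreter: beliefs (a dict), a goal
# stack (intentions), and knowledge areas (KAs), each with an achieved goal,
# an applicability context, and a procedural body.

def prs_interpreter(beliefs, goals, knowledge_areas, max_steps=100):
    """Repeatedly pick the current goal and apply an applicable KA,
    interleaving 'planning' (KA selection) with acting (running a KA body)."""
    for _ in range(max_steps):
        if not goals:
            return beliefs
        goal = goals[-1]                       # current intention
        applicable = [ka for ka in knowledge_areas
                      if ka["achieves"] == goal and ka["context"](beliefs)]
        if not applicable:
            goals.pop()                        # drop an unachievable goal
            continue
        ka = applicable[0]                     # meta-level choice point
        ka["body"](beliefs)                    # act in the world
        goals.pop()                            # goal achieved
    return beliefs

kas = [{
    "achieves": "door_open",
    "context": lambda b: b.get("at_door", False),
    "body": lambda b: b.__setitem__("door_open", True),
}]
beliefs = prs_interpreter({"at_door": True}, ["door_open"], kas)
print(beliefs["door_open"])  # → True
```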
History
The PRS concept was developed by the Artificial Intelligence Center at SRI International duri |
https://en.wikipedia.org/wiki/Tree-graded%20space | A geodesic metric space is called a tree-graded space with respect to a collection of connected proper subsets called pieces, if any two distinct pieces intersect in at most one point, and every non-trivial simple geodesic triangle of is contained in one of the pieces.
If the pieces have bounded diameter, tree-graded spaces behave like real trees in their coarse geometry (in the sense of Gromov), while allowing non-tree-like behavior within the pieces.
Tree-graded spaces were introduced by Cornelia Druțu and Mark Sapir in their study of the asymptotic cones of relatively hyperbolic groups. |
https://en.wikipedia.org/wiki/Silk%20screen%20effect | The silk screen effect (SSE) is a visual phenomenon seen in rear-projection televisions. SSE is described by viewers as seeing the texture of the television screen in front of the image. SSE may be found on all rear-projection televisions including DLP and Liquid Crystal on Silicon (LCoS). The effect is most visible when viewing bright white or very light colored images. Viewers also report seeing "sparkles" when viewing very bright colored images.
SSE's nomenclature comes from the visual appearance of this effect, which is likened to viewing an image through a silk screen. SSE should not be confused with the screen door effect, another visual phenomenon seen in rear-projection televisions.
Cause of SSE
SSE is caused by textured screens used in most rear-projection televisions. Rear-projection television manufacturers use textured screens to increase the viewing angle of the television.
Reducing SSE
SSE can be reduced by properly calibrating the picture controls of the rear-projection television. The effect is most prominent when the contrast and brightness are set too high, so lowering these settings as part of a proper calibration reduces SSE.
See also
Digital Light Processing
Liquid Crystal on Silicon
Screen-door effect |
https://en.wikipedia.org/wiki/Gastric%20mucosa | The gastric mucosa is the mucous membrane layer of the stomach, which contains the glands and the gastric pits. In humans, it is about 1 mm thick, and its surface is smooth, soft, and velvety. It consists of simple columnar epithelium, lamina propria, and the muscularis mucosae.
Description
In its fresh state, it is of a pinkish tinge at the pyloric end and of a red or reddish-brown color over the rest of its surface. In infancy it is of a brighter hue, the vascular redness being more marked.
It is thin at the cardiac extremity, but thicker toward the pylorus. During the contracted state of the organ it is thrown into numerous plaits or rugae, which, for the most part, have a longitudinal direction, and are most marked toward the pyloric end of the stomach, and along the greater curvature. These folds are entirely obliterated when the organ becomes distended.
When examined with a lens, the inner surface of the mucous membrane presents a peculiar honeycomb appearance from being covered with funnel-like depressions or foveolae of a polygonal or hexagonal form, which vary from 0.12 to 0.25 mm. in diameter. These are the ducts of the gastric glands, and at the bottom of each may be seen one or more minute orifices, the openings of the gland tubes. Gastric glands are simple or branched tubular glands that emerge on the deeper part of the gastric foveola, inside the gastric areas and outlined by the folds of the mucosa.
Types of glands
There are three types of glands: cardiac glands (in the proximal part of the stomach), fundic (oxyntic) glands (the dominating type of gland), and pyloric glands.
The cardiac glands mainly contain mucus-producing cells called foveolar cells.
The bottom part of the oxyntic glands is dominated by zymogenic (chief) cells that produce pepsinogen (an inactive precursor of the pepsin enzyme). Parietal cells, which secrete hydrochloric acid (HCl) are scattered in the glands, with most of them in the middle part. The upper part of the glands c |
https://en.wikipedia.org/wiki/Scherk%20surface | In mathematics, a Scherk surface (named after Heinrich Scherk) is an example of a minimal surface. Scherk described two complete embedded minimal surfaces in 1834; his first surface is a doubly periodic surface, his second surface is singly periodic. They were the third non-trivial examples of minimal surfaces (the first two were the catenoid and helicoid). The two surfaces are conjugates of each other.
Scherk surfaces arise in the study of certain limiting minimal surface problems and in the study of harmonic diffeomorphisms of hyperbolic space.
Scherk's first surface
Scherk's first surface is asymptotic to two infinite families of parallel planes, orthogonal to each other, that meet near z = 0 in a checkerboard pattern of bridging arches. It contains an infinite number of straight vertical lines.
Construction of a simple Scherk surface
Consider the following minimal surface problem on a square in the Euclidean plane: for a natural number n, find a minimal surface Σ_n as the graph of some function
u_n : (−π/2, +π/2) × (−π/2, +π/2) → R
such that
u_n(x, y) → +n as y → ±π/2 and u_n(x, y) → −n as x → ±π/2.
That is, u_n satisfies the minimal surface equation
(1 + (∂u_n/∂y)²) ∂²u_n/∂x² − 2 (∂u_n/∂x)(∂u_n/∂y) ∂²u_n/∂x∂y + (1 + (∂u_n/∂x)²) ∂²u_n/∂y² = 0
and
Σ_n = { (x, y, u_n(x, y)) ∈ R³ : −π/2 < x, y < +π/2 }.
What, if anything, is the limiting surface as n tends to infinity? The answer was given by H. Scherk in 1834: the limiting surface Σ is the graph of
u(x, y) = log( cos(x) / cos(y) ).
That is, the Scherk surface over the square is
Σ = { (x, y, log(cos(x)/cos(y))) ∈ R³ : −π/2 < x, y < +π/2 }.
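The limiting function u(x, y) = log(cos x / cos y) can be verified symbolically. The following sketch (using the SymPy library) checks that it satisfies the minimal surface equation:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.log(sp.cos(x)) - sp.log(sp.cos(y))  # Scherk's first surface: log(cos x / cos y)

ux, uy = sp.diff(u, x), sp.diff(u, y)
uxx, uyy, uxy = sp.diff(u, x, 2), sp.diff(u, y, 2), sp.diff(u, x, y)

# Minimal surface equation: (1 + u_y^2) u_xx - 2 u_x u_y u_xy + (1 + u_x^2) u_yy = 0
residual = (1 + uy**2) * uxx - 2 * ux * uy * uxy + (1 + ux**2) * uyy
print(sp.simplify(residual))  # → 0
```

Here u_x = −tan x and u_y = tan y, so the mixed term vanishes and the remaining two terms cancel exactly.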
More general Scherk surfaces
One can consider similar minimal surface problems on other quadrilaterals in the Euclidean plane. One can also consider the same problem on quadrilaterals in the hyperbolic plane. In 2006, Harold Rosenberg and Pascal Collin used hyperbolic Scherk surfaces to construct a harmonic diffeomorphism from the complex plane onto the hyperbolic plane (the unit disc with the hyperbolic metric), thereby disproving the Schoen–Yau conjecture.
Scherk's second surface
Scherk's second surface looks globally like two orthogonal planes whose intersection consists of a sequence of tunnels in alternating directions. Its intersections with horizontal planes consist of alternating hyperbolas.
I |
https://en.wikipedia.org/wiki/Coccygeal%20glomus | The coccygeal glomus (coccygeal gland or body; Luschka’s gland) is a vestigial structure placed in front of, or immediately below, the tip of the coccyx.
Anatomy
It is about 2.5 mm. in diameter and is irregularly oval in shape; several smaller nodules are found around or near the main mass.
It consists of irregular masses of round or polyhedral epithelioid cells, which are grouped around a dilated sinusoidal capillary vessel.
Each cell contains a large round or oval nucleus; the surrounding protoplasm is clear and is not stained by chromic salts. Because it is not stained by chromic salts, the gland is not truly part of the chromaffin system, i.e. the system of cells stained by chromic salts, comprising the adrenal medulla, paraganglia, and para-aortic bodies.
It is situated near the ganglion impar in the pelvis, at the termination of the median sacral artery.
Clinical significance
It may appear similar to a glomus tumor. |
https://en.wikipedia.org/wiki/HP%20Integrity%20Virtual%20Machines | Integrity Virtual Machines is a hypervisor from Hewlett Packard Enterprise for HPE Integrity Servers running HP-UX. It is part of HP's Virtual Server Environment suite, and is optimized for server use.
History
Christophe de Dinechin initiated a skunkworks project to virtualize Itanium, with the help of Jean-Marc Chevrot and of a "virtual team" of experienced HP engineers. A prototype of Integrity Virtual Machines was then developed between 2000 and 2003 by Christophe de Dinechin, Todd Kjos and Jonathan Ross. It was then turned into a full-fledged product by a larger team of experienced OpenVMS, Tru64 Unix and HP-UX kernel engineers.
Version 1.0 and 1.2, released in 2005, ran HP-UX in virtual machines.
Version 2.0, released in November 2006, additionally supports Windows Server 2003, CD and DVD burners, tape drives and VLAN.
Version 3.0, released in June 2007, supports Red Hat Enterprise Linux.
Version 3.5, released in late 2007, supports SUSE Linux Enterprise Server, HP-UX 11i v3 guests, new service packs for Windows and Linux guests, and accelerated virtual I/O for HP-UX guests, enabling better I/O performance and a larger number of devices.
Version 4.0, released in September 2008, runs on HP-UX 11.31 (also known as 11i v3), supports 8 virtual CPUs, capped CPU allocation (in addition to CPU entitlement as in previous releases), additional support for accelerated virtual I/O (AVIO), and a new VM performance analysis tool. Version 4.0 also includes beta functionality such as on-line migration and support for OpenVMS guests.
Version 4.1, released in April 2009, supports Online VM Migration which allows customers to migrate active guests from one VM Host to another VM Host without service interruption. It also provides support for SSH third-party alternatives for secure communications, accelerated virtual I/O (AVIO) for networking on Windows and Linux guests, support for ignite and VxVM backing stores.
Version 4.2, released March 2010, supports encryptio |
https://en.wikipedia.org/wiki/Paraganglion | A paraganglion (pl. paraganglia) is a group of non-neuronal cells derived of the neural crest. They are named for being generally in close proximity to sympathetic ganglia. They are essentially of two types: (1) chromaffin or sympathetic paraganglia made of chromaffin cells and (2) nonchromaffin or parasympathetic paraganglia made of glomus cells. They are neuroendocrine cells, the former with primary endocrine functions and the latter with primary chemoreceptor functions.
Chromaffin paraganglia (also called chromaffin bodies) are connected with the ganglia of the sympathetic trunk and the ganglia of the celiac, renal, adrenal, aortic and hypogastric plexuses. They are concentrated near the adrenal glands and essentially function the same way as the adrenal medulla. They are sometimes found in connection with the ganglia of other sympathetic plexuses. None have been found with the sympathetic ganglia associated with the branches of the trigeminal nerve. The largest chromaffin paraganglion is the organ of Zuckerkandl, which is probably the largest source of circulating catecholamines in the fetus and in young infants, and which gradually atrophies to microscopic loci.
Nonchromaffin paraganglia include carotid bodies and aortic bodies, some are distributed in the ear, along the vagus nerve, in the larynx and at various other places.
Clinical significance
Tumors of the paraganglionic tissues are known as paragangliomas, though this term tends to imply the nonchromaffin type, and can occur at a number of sites throughout the body.
Chromaffin paragangliomas arise from chromaffin cells and are known as pheochromocytomas. Adrenal pheochromocytomas are usually benign, while extra-adrenal ones are more often malignant. They occur most of the time in the adrenals, and only rarely outside of the abdomen. They usually secrete hormones, and estimates of how often cases are familial vary.
Nonchromaffin paragangliomas are usually benign. They are generally present at the head and neck, most often at c |
https://en.wikipedia.org/wiki/AboutUs.com | AboutUs.com is a wiki Internet domain directory. It lists websites along with information about their content. As a wiki, AboutUs allows Internet users to add entries or modify information. AboutUs.com has since become a wiki for more than just websites. The site now allows pages to be created for people, places, and almost anything else.
Ray King, Jay Westerdal, and Paul Stahura founded AboutUs in 2005. In 2006, a small staff of five people in Portland, Oregon, United States, built out the site. The staff later expanded to more than thirty people across two continents, with an office in Lahore, Pakistan. From May 2007 to early 2011, Ward Cunningham, developer of the first wiki, was the chief technology officer of AboutUs.
AboutUs attracted at least 1.4 million U.S. visitors in July 2008. The site originally used the domain name "AboutUs.org" but moved to "AboutUs.com" in May 2010. Traffic and revenue then declined sharply, and in 2013 AboutUs.com and its assets were sold to Omkarasana LLC, a Colorado limited liability company located in Denver. The new company has since redesigned the website, migrated its infrastructure to Amazon Web Services, and increased visitors and revenue. Approximately four individuals now actively operate the site.
Website contents
There were more than ten million entries on AboutUs.com as of March 2008, and new pages were added at a rate of about 25,000 a day. Most entries were created by a web robot (bot); web searches by users direct the bot to create a page for a web domain. In many cases the content is simply a republication of the contents of the "About us", "About me", or similar page on the website. Such pages typically describe the entity that owns the site, and may include self-promotional information, which AboutUs.com does not restrict. In many other cases the content of an entry consists of the whois data for the website. As of February 2014, there were more than 20 million entries.
Data use
Som |
https://en.wikipedia.org/wiki/Sommerfeld%E2%80%93Kossel%20displacement%20law | The Sommerfeld–Kossel displacement law states that the first spark (singly ionized) spectrum of an element is similar in all details to the arc (neutral) spectrum of the element preceding it in the periodic table. Likewise, the second (doubly ionized) spark spectrum of an element is similar in all details to the first (singly ionized) spark spectrum of the element preceding it, or to the arc (neutral) spectrum of the element with atomic number two less, and so forth.
Hence, the spectra of C I (neutral carbon), N II (singly ionized nitrogen), and O III (doubly ionized oxygen) atoms are similar, apart from shifts of the spectra to shorter wavelengths. C I, N II, and O III all have the same number of electrons, six, and the same ground-state electron configuration:
1s² 2s² 2p².
The law was discovered by and named after Arnold Sommerfeld and Walther Kossel, who set it forth in a paper submitted to Verhandlungen der Deutschen Physikalischen Gesellschaft in early 1919. |
https://en.wikipedia.org/wiki/Cardinality%20%28data%20modeling%29 | Within data modelling, cardinality is the numerical relationship between rows of one table and rows in another. Common cardinalities include one-to-one, one-to-many, and many-to-many. Cardinality can be used to define data models as well as analyze entities within datasets.
Relationships
For example, consider a database of electronic health records. Such a database could contain tables like the following:
A doctor table with information about physicians.
A patient table for medical subjects undergoing treatment.
An encounter table with an entry for each hospital visit.
Natural relationships exist between these entities, such as an encounter involving many doctors. There is a many-to-many relationship between records in doctor and records in patient because doctors have many patients and patients can see many doctors. There is a one-to-many relationship between records in patient and records in encounter because patients can have many encounters and each encounter involves only one patient.
A "one-to-one" relationship is mostly used to split a table in two in order to provide information concisely and make it more understandable. In the hospital example, such a relationship could be used to keep apart doctors' own unique professional information from administrative details.
Modeling
In data modeling, collections of data elements are grouped into "data tables" which contain groups of data field names called "database attributes". Tables are linked by "key fields". A "primary key" is a field (or combination of fields) whose value uniquely identifies each row in its table; for example, a unique doctor identifier might be assigned as the primary key of the doctor table. A table can also have a foreign key, which indicates that the field is linked to the primary key of another table.
Types of Models
A complex data model can involve hundreds of related tables. Computer scientist Edgar F. Cod |
https://en.wikipedia.org/wiki/Conscious%20breathing | Conscious breathing is an umbrella term for methods that direct awareness to the breath. These methods may have the goal of improving breathing, or the primary goal can be to build mindfulness. Human respiration is controlled consciously or unconsciously.
Training methods
Pranayama is part of the Yoga tradition and mainly deals with breathing exercises, such as prolonging in- and outbreaths, holding pauses on the in- or outbreath or both, alternate nostril breathing, and breathing with the glottis slightly engaged.
The Buteyko method focuses on nasal breathing, relaxation and reduced breathing. These techniques provide the lungs with more nitric oxide (NO) and thus dilate the airways, and should prevent the excessive exhalation of carbon dioxide (CO2) and thus improve oxygen metabolism.
Coherent Breathing is a method that involves breathing at the rate of five breaths per minute with equal periods of inhalation and exhalation and conscious relaxation of anatomical zones.
Applications
Meditation
Conscious breathing in meditation usually does not change the depth or rhythm of breathing, but uses breathing as an anchor for concentration and awareness.
Mindfulness and Awareness Trainings use conscious breathing for training awareness and body consciousness.
Vipassana Meditation focuses on breathing in and around the nose to calm the mind (anapanasati).
Psychology and psycho-therapy
Many breathwork methods claim that breathing can be used to access nonverbal memories.
Rebirthing uses conscious breathing to purge repressed birth memories and traumatic childhood memories.
Holotropic Breathing was developed by Stanislav Grof and uses deepened breathing to allow access to non-ordinary states of consciousness.
Transformational Breath uses a full relaxed breath that originates in the lower abdomen and repeats inhalation and exhalation without pausing. It integrates other healing modalities and breath analysis. A key feature is intensive personal coaching and the use of 'bodymapping' (acupressure points).
I |
https://en.wikipedia.org/wiki/Fruit%20rot | Fruit rot disease may refer to:
Phomopsis leaf caused in grapes by Phomopsis viticola;
Kole-roga caused in coconut and betel nut by Phytophthora palmivora;
Botrytis bunch rot caused by Botrytis cinerea primarily in grapes;
Black mold caused by Aspergillus niger;
Leaf spot, and others, caused by Alternaria alternata;
Bitter rot caused by Glomerella cingulata;
Cladosporium rot or Soft rot caused by Cladosporium cladosporioides;
Kernel rot or Fusariosis on maize (corn) caused by Fusarium sporotrichioides;
Sour rot caused by Geotrichum candidum;
Penicillium rot or Blue-eye caused by Penicillium chrysogenum;
Soft rot or Blue mold caused by Penicillium expansum;
Brown rot caused by Monilinia fructicola;
Strawberry fruit rot caused by Pestalotia longisetula |
https://en.wikipedia.org/wiki/History%20of%20model%20organisms | The history of model organisms began with the idea that certain organisms can be studied and used to gain knowledge of other organisms or as a control (ideal) for other organisms of the same species. Model organisms offer standards that serve as the authorized basis for comparison of other organisms. Model organisms are made standard by limiting genetic variance, with the aim of giving findings broad applicability to other organisms.
The idea of the model organism first took root in the middle of the 19th century with the work of men like Charles Darwin and Gregor Mendel and their respective work on natural selection and the genetics of heredity. As the first model organisms were introduced into labs in the 20th century, these early efforts to identify standards to measure organisms against persisted. Beginning in the early 1900s Drosophila entered the research laboratories and opened up the doors for other model organisms like tobacco mosaic virus, E. coli, C57BL/6 (lab mice), etc. These organisms have led to many advances in the past century.
Preliminary works on model organisms
Some of the first work with what would be considered model organisms started because Gregor Johann Mendel felt that the views of Darwin were insufficient in describing the formation of a new species and he began his work with the pea plants that are so famously known today. In his experimentation to find a method by which Darwin's ideas could be explained he hybridized and cross-bred the peas and found that in so doing he could isolate phenotypic characteristics of the peas. These discoveries made in the 1860s lay dormant for nearly forty years until they were rediscovered in 1900. Mendel's work was then correlated with what was being called chromosomes within the nucleus of each cell. Mendel created a practical guide to breeding and this method has successfully been applied to select for some of the first model organisms of other genus and species such as Guinea pigs, Drosophila (fruit |
https://en.wikipedia.org/wiki/Bug%20compatibility | Computer hardware or software is said to be bug compatible if it exactly replicates even an undesirable feature of a previous version. The phrase is found in the Jargon File.
An aspect of maintaining backward compatibility with an older system is that such systems' client programs often depend not only on their specified interfaces but also on bugs and unintended behaviour, which must also be preserved by the newer replacement. Besides the significantly higher complexity that must be maintained during the natural evolution of the code or interface, this can sometimes cause performance or security issues, and inconsistencies in the behaviour of interfaces can lead to new bugs in the software using them, creating difficult-to-resolve multi-directional cross-dependencies between various pieces of code.
Examples
DOS
Examples can be found in MS-DOS/PC DOS:
When MS-DOS/PC DOS 3.1 and higher (including Windows 9x) and OS/2 detect certain FAT OEM labels, they do not trust some BIOS Parameter Block (BPB) values and recalculate them from other disk geometry parameters in order to work around several off-by-one calculation errors caused by some of their formatter software under earlier issues of these systems. While this undocumented behaviour allows them to cope with these incorrectly formatted volumes specifically, it limits the flexibility of disk geometries they can work with in general and can cause them to trash validly formatted volumes created by third-parties if they deviate from the defaults used by Microsoft and IBM.
When MS-DOS/PC DOS 5.0 and higher are running on 286 or higher processors, the resident executable loader contains code specially designed to detect and fix certain widespread applications and stub loaders (such as programs linked with older versions of Microsoft's EXEPACK or Rational Systems' 386 DOS extenders) by patching the loaded program image before executing it. Under certain conditions an underlying DOS also patches Windows (WINA |
https://en.wikipedia.org/wiki/Schoen%E2%80%93Yau%20conjecture | In mathematics, the Schoen–Yau conjecture is a disproved conjecture in hyperbolic geometry, named after the mathematicians Richard Schoen and Shing-Tung Yau.
It was inspired by a theorem of Erhard Heinz (1952). One method of disproof is the use of Scherk surfaces, as used by Harold Rosenberg and Pascal Collin (2006).
Setting and statement of the conjecture
Let C denote the complex plane considered as a Riemannian manifold with its usual (flat) Riemannian metric. Let H² denote the hyperbolic plane, i.e. the unit disc
D = { (x, y) ∈ R² : x² + y² < 1 }
endowed with the hyperbolic metric
ds² = 4 (dx² + dy²) / (1 − x² − y²)².
E. Heinz proved in 1952 that there can exist no harmonic diffeomorphism
φ : H² → C.
In light of this theorem, Schoen conjectured that there exists no harmonic diffeomorphism
ψ : C → H².
(It is not clear how Yau's name became associated with the conjecture: in unpublished correspondence with Harold Rosenberg, both Schoen and Yau identify Schoen as having postulated the conjecture). The Schoen(-Yau) conjecture has since been disproved.
Comments
The emphasis is on the existence or non-existence of a harmonic diffeomorphism, and on the fact that this property is a "one-way" property. In more detail: suppose that we consider two Riemannian manifolds M and N (with their respective metrics), and write
M ≅ N
if there exists a diffeomorphism from M onto N (in the usual terminology, M and N are diffeomorphic). Write
M ∼ N
if there exists a harmonic diffeomorphism from M onto N. It is not difficult to show that ≅ (being diffeomorphic) is an equivalence relation on the objects of the category of Riemannian manifolds. In particular, ≅ is a symmetric relation:
M ≅ N ⟺ N ≅ M.
It can be shown that the hyperbolic plane and the (flat) complex plane are indeed diffeomorphic:
H² ≅ C,
so the question is whether or not they are "harmonically diffeomorphic". However, as the truth of Heinz's theorem and the falsity of the Schoen–Yau conjecture demonstrate, ∼ is not a symmetric relation:
C ∼ H² and yet H² ≁ C.
Thus, being "harmonically diffeomorphic" is a much stronger property than simply being diffeomorphic, and can be a "one-way" r |
https://en.wikipedia.org/wiki/Gracenote | Gracenote, Inc. is a company and service that provides music, video and sports metadata and automatic content recognition (ACR) technologies to entertainment services and companies, worldwide. Formerly CDDB ("Compact Disc Data Base"), Gracenote maintains and licenses an Internet-accessible database containing information about the contents of audio compact discs and vinyl records. From 2008 to 2014, it was owned by Sony, later sold to Tribune Media, and has been owned since 2017 by Nielsen Holdings.
History
Gracenote began in 1993 as an open-source project involving a CD player program named xmcd and an associated database named CDDB. xmcd and CDDB were created by Ti Kan and Steve Scherf. Because CDs do not contain any digitally encoded information about their contents, Kan and Scherf devised a technology that identifies and looks up CDs based on TOC information stored at the beginning of each disc. A TOC, or Table of Contents, is a list of offsets corresponding to the start of each track on a CD. The database was originally created from voluntary contributions from users and continues to receive them. This led to a licensing controversy when Gracenote became commercialized.
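The lookup key derived from the TOC is, in the classic freedb/CDDB scheme, a 32-bit "disc ID" packing a checksum of track start times, the playing time, and the track count. A sketch under the assumption of the standard freedb algorithm (offsets are in CD frames, 75 per second, including the 2-second lead-in):

```python
def cddb_disc_id(track_offsets, leadout_offset):
    """Classic CDDB/freedb disc ID.

    track_offsets: start of each track in CD frames (75 frames/second),
    including the standard 2-second (150-frame) lead-in.
    leadout_offset: start of the lead-out area, in frames.
    """
    def digit_sum(n):
        return sum(int(d) for d in str(n))

    # Checksum: decimal digit sum of each track's start time in seconds.
    checksum = sum(digit_sum(off // 75) for off in track_offsets)
    total_seconds = leadout_offset // 75 - track_offsets[0] // 75
    # 32-bit ID: checksum byte | 16-bit playing time | track count byte.
    return (checksum % 0xFF) << 24 | total_seconds << 8 | len(track_offsets)
```

Because the ID depends only on track layout, two pressings with identical track lengths collide, which is why later systems added fuzzy matching and audio fingerprinting.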
On April 22, 2008, Sony announced that it would acquire Gracenote for $260 million. The acquisition was completed on June 2, 2008.
On September 9, 2010, Gracenote received its one-billionth piece of data, with a submission about the Compact Disc release of Swans' My Father Will Guide Me Up a Rope to the Sky.
On December 23, 2013, Sony announced it would sell Gracenote to Tribune Media for $170 million. The acquisition closed in February 2014: Gracenote was aligned with the Tribune Media Services division which focused on TV and Movie metadata and IDs.
On June 12, 2014, Tribune Media Services merged with Gracenote to form one company under the Gracenote name.
On July 9, 2014, Tribune Media Company purchased What's-ON, a provider of TV data and advanced search offerings covering India and the Midd |
https://en.wikipedia.org/wiki/Decomposition%20matrix | In mathematics, and in particular modular representation theory, a decomposition matrix is a matrix that results from writing the irreducible ordinary characters in terms of the irreducible modular characters, where the entries of the two sets of characters are taken to be over all conjugacy classes of elements of order coprime to the characteristic of the field. All such entries in the matrix are non-negative integers. The decomposition matrix, multiplied by its transpose, forms the Cartan matrix, listing the composition factors of the projective modules. |
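The relation between the decomposition matrix D and the Cartan matrix C = DᵀD can be checked on a small example. The matrix below is the standard textbook decomposition matrix for the symmetric group S₃ in characteristic 2 (rows indexed by the ordinary irreducibles of degrees 1, 1, 2; columns by the modular irreducibles of degrees 1, 2):

```python
import numpy as np

# Decomposition matrix D for S_3 at p = 2 (a standard textbook example).
D = np.array([
    [1, 0],   # trivial character
    [1, 0],   # sign character (congruent to trivial mod 2)
    [0, 1],   # 2-dimensional character (remains irreducible mod 2)
])

# The Cartan matrix C = D^T D lists composition-factor multiplicities
# of the projective indecomposable modules; entries are non-negative ints.
C = D.T @ D
print(C)   # [[2 0]
           #  [0 1]]
```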
https://en.wikipedia.org/wiki/EXCLAIM | The EXtensible Cross-Linguistic Automatic Information Machine (EXCLAIM) was an integrated tool for cross-language information retrieval (CLIR), created at the University of California, Santa Cruz in early 2006, with some support for more than a dozen languages. The lead developers were Justin Nuger and Jesse Saba Kirchner.
Early work on CLIR depended on manually constructed parallel corpora for each pair of languages. This method is labor-intensive compared to parallel corpora created automatically. A more efficient way of finding data to train a CLIR system is to use matching pages on the web which are written in different languages.
EXCLAIM capitalizes on the idea of latent parallel corpora on the web by automating the alignment of such corpora in various domains. The most significant of these is Wikipedia itself, which includes articles in 250 languages. The role of EXCLAIM is to use semantics and linguistic analytic tools to align the information in these Wikipedias so that they can be treated as parallel corpora. EXCLAIM is also extensible to incorporate information from many other sources, such as the Chinese Community Health Resource Center (CCHRC).
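The alignment idea can be illustrated with a toy sketch (this is not EXCLAIM's actual code; all names and data below are hypothetical): articles in two language editions are paired via their interlanguage links, yielding aligned document pairs that can be treated as a comparable corpus.

```python
def align_by_interlanguage_links(articles_en, articles_zh, links_en_to_zh):
    """Pair article texts across two Wikipedia language editions.

    articles_*: {title: text}; links_en_to_zh: {en_title: zh_title}.
    Returns (en_text, zh_text) pairs where both articles exist.
    """
    pairs = []
    for en_title, zh_title in links_en_to_zh.items():
        if en_title in articles_en and zh_title in articles_zh:
            pairs.append((articles_en[en_title], articles_zh[zh_title]))
    return pairs

en = {"Water": "Water is a transparent fluid ..."}
zh = {"水": "水是一种无色无味的液体 ..."}
links = {"Water": "水", "Fire": "火"}   # "Fire" has no article text here
print(len(align_by_interlanguage_links(en, zh, links)))  # 1
```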
One of the main goals of the EXCLAIM project is to provide the kind of computational tools and CLIR tools for minority languages and endangered languages which are often available only for powerful or prosperous majority languages.
Current status
In 2009, EXCLAIM was in a beta state, with varying degrees of functionality for different languages. Support for CLIR using the Wikipedia dataset and the most current version of EXCLAIM (v.0.5), including full UTF-8 support and Porter stemming for the English component, was available for the following twenty-three languages:
Support using the Wikipedia dataset and an earlier version of EXCLAIM (v.0.3) is available for the following languages:
Significant developments in the most recent version of EXCLAIM include support for Mandarin Chinese. By developing support f |
https://en.wikipedia.org/wiki/Asilomar%20Conference%20on%20Recombinant%20DNA | The Asilomar Conference on Recombinant DNA was an influential conference organized by Paul Berg, Maxine Singer, and colleagues to discuss the potential biohazards and regulation of biotechnology, held in February 1975 at a conference center at Asilomar State Beach, California. A group of about 140 professionals (primarily biologists, but also including lawyers and physicians) participated in the conference to draw up voluntary guidelines to ensure the safety of recombinant DNA technology. The conference also placed scientific research more into the public domain, and can be seen as applying a version of the precautionary principle.
The effects of these guidelines are still being felt through the biotechnology industry and the participation of the general public in scientific discourse. Due to potential safety hazards, scientists worldwide had halted experiments using recombinant DNA technology, which entailed combining DNAs from different organisms. After the establishment of the guidelines during the conference, scientists continued with their research, which increased fundamental knowledge about biology and the public's interest in biomedical research.
Background: recombinant DNA technology
Recombinant DNA technology arose as a result of advances in biology that began in the 1950s and '60s. During these decades, a tradition of merging the structural, biochemical and informational approaches to the central problems of classical genetics became more apparent. Two main underlying concepts of this tradition were that genes consisted of DNA and that DNA encoded information that determined the processes of replication and protein synthesis. These concepts were embodied in the model of DNA produced through the combined efforts of James Watson, Francis Crick, and Rosalind Franklin. Further research on the Watson-Crick model yielded theoretical advances that were reflected in new capacities to manipulate DNA. One of these capacities was recombinant DNA technology.
Exper |
https://en.wikipedia.org/wiki/TO-3 | In electronics, TO-3 is a designation for a standardized metal semiconductor package used for power semiconductors, including transistors, silicon controlled rectifiers, and integrated circuits. TO stands for "Transistor Outline" and relates to a series of technical drawings produced by JEDEC.
The TO-3 case has a flat surface which can be attached to a heatsink, normally via a thermally conductive but electrically insulating washer. The design originated at Motorola around 1955 from a group headed by Dr. Virgil E. Bottom, who was director of research of the Motorola Semiconductor Division. The first use of this design was for the germanium alloy-junction power transistor 2N176 – the first power transistor to be put into quantity production. The lead spacing was originally intended to allow plugging the device into a then-common tube socket.
Typical applications
The metal package can be attached to a heat sink, making it suitable for devices dissipating several watts of heat. Thermal compound is used to improve heat transfer between the device case and the heat sink. Since the device case is one of the electrical connections, an insulator may be required to electrically isolate the component from the heatsink. Insulating washers may be made of mica or other materials with good thermal conductivity.
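The usual back-of-the-envelope check for such a mounting is the series thermal-resistance model: junction temperature equals ambient plus dissipated power times the sum of junction-to-case, case-to-sink (washer plus compound), and sink-to-ambient resistances. The numbers below are illustrative orders of magnitude, not values from any specific datasheet:

```python
def junction_temp(p_watts, t_ambient, theta_jc, theta_cs, theta_sa):
    """Junction temperature (deg C) from power and the series thermal
    resistances (deg C per watt): junction-case, case-sink, sink-ambient."""
    return t_ambient + p_watts * (theta_jc + theta_cs + theta_sa)

# 30 W dissipated at 25 deg C ambient; 1.5 C/W junction-to-case,
# 0.5 C/W for an insulating washer with thermal compound, 2.0 C/W heat sink.
print(junction_temp(30, 25, 1.5, 0.5, 2.0))  # 145.0
```

A result like 145 °C is near the limit for many silicon devices, which is why a lower sink-to-ambient resistance (a larger heat sink or forced air) is often needed at these power levels.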
The case is used with high-power and high-current devices, on the order of a few tens of amperes current and up to a hundred watts of heat dissipation. The case surfaces are metal for good heat conductivity and durability. The metal-to-metal and metal-to-glass joints provide hermetic seals that protect the semiconductor from liquids and gases.
Compared with equivalent plastic packages, the TO-3 is more costly. The spacing and dimensions of the case leads make it unsuitable for higher frequency (radio frequency) devices.
Construction
The semiconductor die component is mounted on a raised platform on a metal plate, with the metal can welded on top of it; providin |
https://en.wikipedia.org/wiki/Line%E2%80%93line%20intersection | In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection.
In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection.
The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line.
Formulas
A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see .
Given two points on each line
First we consider the intersection of two lines and in two-dimensional space, with line being defined by two distinct points and , and line being defined by two distinct points and .
The intersection of line and can be defined using determinants.
The determinants can be written out as:
When the two lines are parallel or coincident, the denominator is zero.
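The determinant formula can be sketched directly in code: the denominator is the determinant built from the two direction vectors, and a zero denominator signals parallel or coincident lines.

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line through p1, p2 with the line
    through p3, p4, via the standard determinant formula. Returns None
    when the lines are parallel or coincident (zero denominator)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None  # parallel or coincident
    d1 = x1 * y2 - y1 * x2   # determinant of the first line's points
    d2 = x3 * y4 - y3 * x4   # determinant of the second line's points
    px = (d1 * (x3 - x4) - (x1 - x2) * d2) / denom
    py = (d1 * (y3 - y4) - (y1 - y2) * d2) / denom
    return (px, py)

print(line_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
```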
Given two points on each line segment
The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either o |
https://en.wikipedia.org/wiki/Leucotome | A leucotome or McKenzie leucotome is a surgical instrument used for performing leucotomies (also known as lobotomy) and other forms of psychosurgery.
Invented by Canadian neurosurgeon Dr. Kenneth G. McKenzie in the 1940s, the leucotome has a narrow shaft which is inserted into the brain through a hole in the skull, and then a plunger on the back of the leucotome is depressed to extend a wire loop or metal strip into the brain. The leucotome is then rotated, cutting a core of brain tissue. This type was used by the Nobel prize-winning Portuguese neurologist Egas Moniz.
Another, different, surgical instrument also called a leucotome was introduced by Walter Freeman for use in the transorbital lobotomy. Modeled after an ice-pick, it consisted simply of a pointed shaft. It was passed through the tear duct under the eyelid and against the top of the eyesocket. A mallet was used to drive the instrument through the thin layer of bone and into the brain along the plane of the bridge of the nose, to a depth of 5 cm. Due to incidents of breakage, a stronger but essentially identical instrument called an orbitoclast was later used.
Lobotomies were commonly performed from the 1930s to the 1960s, with a few as late as the 1980s in France.
See also
Orbitoclast
Lobotomy
Instruments used in general surgery
Notes
External links
A leucotome from the University of Manchester Medical School Museum
The Nobel Foundation page on prefrontal leukotomy |
https://en.wikipedia.org/wiki/Arrested%20development | The term "arrested development" has had multiple meanings for over 200 years. In the field of medicine, the term "arrested development" was first used, circa 1835–1836, to mean a stoppage of physical development; the term continues to be used in the same way. In literature, Ernest Hemingway used the term in The Sun Also Rises, published in 1926: On page 51, Harvey tells Cohn, "I misjudged you [...] You're not a moron. You're only a case of arrested development."
In contrast, the UK's Mental Health Act 1983 used the term "arrested development" to characterize a form of mental disorder comprising severe mental impairment, resulting in a lack of intelligence. However, some researchers have objected to the notion that mental development can be "arrested" or stopped, preferring to consider mental status as developing in other ways in psychological terminology. Consequently, the term "arrested development" is no longer used when referring to a developmental disorder in mental health.
In anthropology and archaeology, the term "arrested development" means that a plateau of development in some sphere has been reached. Often it is a technological plateau such as the development of high temperature ceramics, but without glaze because of a lack of materials, or copper smelting without development of bronze because of a lack of tin. Arrested development is also central to the idea of self-domestication in the evolution of the hominidae: an environment that favors reduced aggression, both interspecific and intraspecific, selects for social behavior and traits that benefit the group as a whole, including the elimination of bullies, i.e. individuals with an antisocial personality disorder. |
https://en.wikipedia.org/wiki/100%20Gigabit%20Ethernet | 40 Gigabit Ethernet (40GbE) and 100 Gigabit Ethernet (100GbE) are groups of computer networking technologies for transmitting Ethernet frames at rates of 40 and 100 gigabits per second (Gbit/s), respectively. These technologies offer significantly higher speeds than 10 Gigabit Ethernet. The technology was first defined by the IEEE 802.3ba-2010 standard and later by the 802.3bg-2011, 802.3bj-2014, 802.3bm-2015, and 802.3cd-2018 standards. The first succeeding Terabit Ethernet specifications were approved in 2017.
The standards define numerous port types with different optical and electrical interfaces and different numbers of optical fiber strands per port. Short distances (e.g. 7 m) over twinaxial cable are supported while standards for fiber reach up to 80 km.
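The practical effect of these line rates is easy to quantify with a raw serialization estimate (ignoring framing, preamble, and protocol overhead, so real throughput is somewhat lower):

```python
def transfer_seconds(payload_bytes, rate_gbits):
    """Raw serialization time for a bulk transfer at a nominal line rate."""
    return payload_bytes * 8 / (rate_gbits * 1e9)

terabyte = 1e12  # 1 TB payload
for rate in (10, 40, 100):
    print(f"{rate:>3} GbE: {transfer_seconds(terabyte, rate):.1f} s")
# 1 TB takes 800 s at 10 GbE, 200 s at 40 GbE, and 80 s at 100 GbE.
```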
Standards development
On July 18, 2006, a call for interest for a High Speed Study Group (HSSG) to investigate new standards for high speed Ethernet was held at the IEEE 802.3 plenary meeting in San Diego.
The first 802.3 HSSG study group meeting was held in September 2006. In June 2007, a trade group called "Road to 100G" was formed after the NXTcomm trade show in Chicago.
On December 5, 2007, the Project Authorization Request (PAR) for the P802.3ba 40 Gbit/s and 100 Gbit/s Ethernet Task Force was approved with the following project scope:
The purpose of this project is to extend the 802.3 protocol to operating speeds of 40 Gbit/s and 100 Gbit/s in order to provide a significant increase in bandwidth while maintaining maximum compatibility with the installed base of 802.3 interfaces, previous investment in research and development, and principles of network operation and management. The project is to provide for the interconnection of equipment satisfying the distance requirements of the intended applications.
The 802.3ba task force met for the first time in January 2008. This standard was approved at the June 2010 IEEE Standards Board meeting under the name IEEE Std 802.3ba-2010.
The first 40 Gbit/s |
https://en.wikipedia.org/wiki/Service%20system | A service system (or customer service system, CSS) is a configuration of technology and organizational networks designed to deliver services that satisfy the needs, wants, or aspirations of customers. "Service system" is a term used in the service management, service operations, services marketing, service engineering, and service design literature. While the term frequently appears, it is rarely defined.
One definition of a service system is a value coproduction configuration of people, technology, internal and external service systems connected via value propositions, and shared information (language, laws, measures, etc.). The smallest service system is a single person and the largest service system is the world economy. The external service system of the global economy is considered to be ecosystem services. Service systems can be characterized by the value that results from interaction between service systems, whether the interactions are between people, businesses, or nations. Most service system interactions aspire to be win-win, non-coercive, and non-intrusive. However, some service systems may perform coercive service activities. For example, agents of the state may use coercion in accordance with laws of the land.
Another definition for service system states that a service system consists of elements (e.g., people, facilities, tools, and computer programs) that have a structure (i.e., an organization), a behavior (possibly described as a business process), and a purpose (or goal). A service system worldview is a system of systems that interact via value propositions.
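The second definition above (elements with a structure, a behavior, and a purpose) can be sketched as a simple data model; the field names and the clinic example are illustrative, not a standard schema from the service-science literature:

```python
from dataclasses import dataclass

@dataclass
class ServiceSystem:
    elements: list    # people, facilities, tools, computer programs
    structure: str    # the organization
    behavior: list    # steps of the business process
    purpose: str      # the goal

clinic = ServiceSystem(
    elements=["physician", "scheduling software", "examination room"],
    structure="outpatient clinic",
    behavior=["book appointment", "examine patient", "bill insurer"],
    purpose="deliver primary health care",
)
print(clinic.purpose)  # deliver primary health care
```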
A much simpler and more limited definition is that a service system is a work system that produces services. A work system is a system in which human participants and/or machines perform work (processes and activities) using information, technology, and other resources to produce products/services for internal or external customers. Co-production occurs in work systems in which custome |
https://en.wikipedia.org/wiki/Tarjan%27s%20strongly%20connected%20components%20algorithm | Tarjan's strongly connected components algorithm is an algorithm in graph theory for finding the strongly connected components (SCCs) of a directed graph. It runs in linear time, matching the time bound for alternative methods including Kosaraju's algorithm and the path-based strong component algorithm. The algorithm is named for its inventor, Robert Tarjan.
Overview
The algorithm takes a directed graph as input, and produces a partition of the graph's vertices into the graph's strongly connected components. Each vertex of the graph appears in exactly one of the strongly connected components. Any vertex that is not on a directed cycle forms a strongly connected component all by itself: for example, a vertex whose in-degree or out-degree is 0, or any vertex of an acyclic graph.
The basic idea of the algorithm is this: a depth-first search (DFS) begins from an arbitrary start node (and subsequent depth-first searches are conducted on any nodes that have not yet been found). As usual with depth-first search, the search visits every node of the graph exactly once, declining to revisit any node that has already been visited. Thus, the collection of search trees is a spanning forest of the graph. The strongly connected components will be recovered as certain subtrees of this forest. The roots of these subtrees are called the "roots" of the strongly connected components. Any node of a strongly connected component might serve as a root, if it happens to be the first node of a component that is discovered by search.
Stack invariant
Nodes are placed on a stack in the order in which they are visited. When the depth-first search recursively visits a node v and its descendants, those nodes are not all necessarily popped from the stack when this recursive call returns. The crucial invariant property is that a node remains on the stack after it has been visited if and only if there exists a path in the input graph from it to some node earlier on the stack. In other w |
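The algorithm as described, including the stack invariant, can be sketched compactly in Python. This recursive version follows the standard presentation (an `on_stack` set implements the invariant check; each component is emitted when a root's lowlink equals its index):

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph given as an
    adjacency dict {vertex: iterable of successors}. Runs in linear time."""
    counter = [0]
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    sccs = []

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:            # tree edge: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:           # back/cross edge within the stack
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:        # v is the root of an SCC
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:                       # restart DFS from unvisited nodes
        if v not in index:
            strongconnect(v)
    return sccs
```

For deep graphs a production implementation would convert the recursion to an explicit stack to avoid Python's recursion limit.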
https://en.wikipedia.org/wiki/NHL%20Network%20%281975%20TV%20program%29 | The NHL Network was an American television syndication package that broadcast National Hockey League games from the through seasons. The NHL Network was distributed by the Hughes Television Network.
Conception
After being dropped by NBC after the season, the NHL had no national television contract in the United States. In response to this, the league put together a network of independent stations covering approximately 55% of the country.
Coverage summary
Games typically aired on Monday nights (beginning at 8 p.m. ET) or Saturday afternoons. The package was offered to local stations with no rights fee. Profits would be derived from the advertising, which was about evenly split between the network and the local station. The Monday night games were often billed as The NHL Game of the Week. Viewers in New York City, Buffalo, St. Louis, Pittsburgh, Detroit and Los Angeles got the Game of the Week on a different channel than their local team's games. Therefore, whenever a team had a “home” game, the NHL Network aired the home team's broadcast rather than their own.
Initially, the Monday night package was marketed to ABC affiliates, the idea being that ABC carried Monday-night NFL football in the fall and (starting in May ) Monday-night Major League Baseball in the spring and summer; as such, stations would want hockey to create a year-round Monday night sports block. But very few ABC stations picked up the package.
During the season, the NHL Network showed selected games from the NHL Super Series (the big one in that package was Red Army at Philadelphia, but the package did not include Red Army at Montreal on New Year's Eve 1975, which was seen only on CBC) as well as some playoff games. During the season, the NHL Network showed 12 regular season games on Monday nights plus the All-Star Game. By (the final season of the NHL Network's existence), there were 18 Monday night games and 12 Saturday afternoon games covered.
The 1979 Challenge Cup replaced the All-St |
https://en.wikipedia.org/wiki/NHL%20on%20SportsChannel%20America | The NHL on SportsChannel America was the presentation of National Hockey League broadcasts on the now defunct SportsChannel America cable television network.
Terms of the deal
Taking over for ESPN, SportsChannel's contract paid US$51 million ($17 million per year) over three years, more than double what ESPN had paid ($24 million) for the previous three years. SportsChannel America managed to get a fourth NHL season for just $5 million.
The SportsChannel America deal was, in a sense, a power play created by Charles Dolan and Bill Wirtz. Dolan was still several years away from getting control of Madison Square Garden, and Wirtz owned 25% of SportsChannel Chicago. NHL president John Ziegler convinced the board of governors that SportsChannel America was a better alternative than a proposed NHL Channel backed by Paramount and Viacom, which had interests in the MSG Network and NESN.
SportsChannel's availability
Unfortunately, SportsChannel America was only available in a few major markets (notably absent were Detroit, Pittsburgh, and St. Louis) and reached only a third of the households that ESPN did at the time. SportsChannel America was seen in fewer than 10 million households. In comparison, by the 1991–92 season, ESPN was available in 60.5 million homes whereas SportsChannel America was available in only 25 million. As a matter of fact, in the first year of the deal (), SportsChannel America was available in only 7 million homes when compared to ESPN's reach of 50 million. When the SportsChannel deal ended in 1992, the league returned to ESPN for another contract that would pay US$80 million over five years.
SportsChannel America took advantage of using their regional sports networks' feed of a game, graphics and all, instead of producing a show from the ground up, most of the time. Distribution of SportsChannel America across the country was limited to cities that had a SportsChannel regional sports network or affiliate. Very few cable systems in non-NHL terri |
https://en.wikipedia.org/wiki/Alpha-particle%20spectroscopy | Alpha spectrometry (also known as alpha(-particle) spectroscopy) is the quantitative study of the energy of alpha particles emitted by a radioactive nuclide that is an alpha emitter.
Because emitted alpha particles are mono-energetic (i.e. not emitted with a spectrum of energies, as in beta decay), with energies that are often characteristic of the particular decay, they can be used to identify the radionuclide they originated from.
Experimental methods
Counting with a source deposited onto a metal disk
It is common to place a drop of the test solution on a metal disk which is then dried out to give a uniform coating on the disk. This is then used as the test sample. If the thickness of the layer formed on the disk is too thick then the lines of the spectrum are broadened to lower energies. This is because some of the energy of the alpha particles is lost during their movement through the layer of active material.
Liquid scintillation
An alternative method is to use liquid scintillation counting (LSC), where the sample is directly mixed with a scintillation cocktail. When the individual light emission events are counted, the LSC instrument records the amount of light energy per radioactive decay event. The alpha spectra obtained by liquid scintillation counting are broadened because of two main intrinsic limitations of the LSC method: (1) random quenching reduces the number of photons emitted per radioactive decay, and (2) the emitted photons can be absorbed by cloudy or coloured samples (Lambert-Beer law). The liquid scintillation spectra are subject to Gaussian broadening, rather than to the distortion caused by the absorption of alpha particles by the sample when the layer of active material deposited onto a disk is too thick.
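The effect of Gaussian broadening on peak resolution can be simulated. The peak energies below are approximate literature values for the isotopes discussed later; the two resolution figures (0.01 MeV for a thin-source semiconductor measurement, 0.10 MeV for LSC) are illustrative assumptions, not measured numbers:

```python
import numpy as np

# Approximate alpha energies (MeV) for four common calibration isotopes.
peaks_mev = [4.88, 5.16, 5.30, 5.49]   # 209Po, 239Pu, 210Po, 241Am

def spectrum(energies_mev, sigma_mev, counts=1000, n_bins=600):
    """Sum of Gaussian peaks of width sigma on a 4.5-6.0 MeV energy axis."""
    e = np.linspace(4.5, 6.0, n_bins)
    y = np.zeros_like(e)
    for e0 in energies_mev:
        y += counts * np.exp(-((e - e0) ** 2) / (2 * sigma_mev ** 2))
    return e, y

e, sharp = spectrum(peaks_mev, sigma_mev=0.01)   # thin source, detector
e, lsc = spectrum(peaks_mev, sigma_mev=0.10)     # broadened LSC spectrum
# With 0.10 MeV broadening, the 210Po and 241Am peaks (0.19 MeV apart)
# are no longer cleanly resolved, while the sharp spectrum separates them.
```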
Alpha spectra
From left to right the peaks are due to 209Po, 239Pu, 210Po and 241Am. The fact that isotopes such as 239Pu and 241Am have more than one alpha line indicates that the (daughter) nucleus can be in different discrete energy lev |
https://en.wikipedia.org/wiki/Theaflavin | Theaflavin (TF) and its derivatives, known collectively as theaflavins, are antioxidant polyphenols that are formed from the condensation of flavan-3-ols in tea leaves during the enzymatic oxidation (sometimes erroneously referred to as fermentation) of black tea. Theaflavin-3-gallate, theaflavin-3'-gallate, and theaflavin-3-3'-digallate are the main theaflavins. Theaflavins are types of thearubigins, and are therefore reddish in color.
See also
Theaflavin 3-gallate |
https://en.wikipedia.org/wiki/Dortmund%20Data%20Bank | The Dortmund Data Bank (short DDB) is a factual data bank for thermodynamic and thermophysical data. Its main usage is the data supply for process simulation where experimental data are the basis for the design, analysis, synthesis, and optimization of chemical processes. The DDB is used for fitting parameters for thermodynamic models like NRTL or UNIQUAC and for many different equations describing pure component properties, e.g., the Antoine equation for vapor pressures. The DDB is also used for the development and revision of predictive methods like UNIFAC and PSRK.
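The Antoine equation mentioned above has the form log₁₀(P) = A − B / (C + T). As a sketch, the coefficients below are commonly quoted values for water with P in mmHg and T in degrees Celsius (valid roughly from 1 to 100 °C); treat them as illustrative, since the DDB fits its own parameter sets to the experimental data it stores:

```python
def antoine_pressure(t_celsius, a=8.07131, b=1730.63, c=233.426):
    """Vapor pressure in mmHg via the Antoine equation,
    log10(P) = A - B / (C + T). Defaults: common coefficients for water."""
    return 10 ** (a - b / (c + t_celsius))

print(round(antoine_pressure(100.0)))  # 760 mmHg: water boils at 1 atm
```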
Contents
Mixture properties
Phase equilibria data (vapor–liquid, liquid–liquid, solid–liquid), data on azeotropy and zeotropy
Mixing enthalpies
Gas solubilities
Activity coefficients at infinite dilution
Heat capacities and excess heat capacities
Volumes, densities, and excess volumes (volume effect of mixing)
Salt solubilities
Octanol-water partition coefficients
Critical data
The mixture data banks contain () approx. 308,000 data sets with 2,157,000 data points for 10,750 components building 84,870 different binary, ternary, and higher systems/combinations.
Pure component properties
Saturated vapor pressures
Saturated densities
Viscosities
Thermal conductivities
Critical data (Tc, Pc, Vc)
Triple points
Melting points
Heat capacities
Heats of fusion, sublimation and vaporization
Heats of formation and combustion
Heats and temperatures of transitions for solids
Speed of sound
P-v-T data including virial coefficients
Energy functions
Enthalpies and entropies
Surface tensions
The pure component properties data bank contains () approx. 157,000 data sets with 1,080,000 data points for 16,700 different components.
Data sources
The DDB is a collection of experimental data published by the original authors. All data are referenced and a quite large literature data bank is part of the DDB, currently containing more than 92,000 articles, books, private communications, deposit |
https://en.wikipedia.org/wiki/The%20Cham-Cham | "The Cham-Cham" is the 25th episode of Thunderbirds, a British Supermarionation television series created by Gerry and Sylvia Anderson and filmed by their production company AP Films (APF). The penultimate episode of Thunderbirds Series One, it was written and directed by Alan Pattillo and first broadcast on 24 March 1966 on ATV Midlands.
Set in the 2060s, Thunderbirds follows the exploits of International Rescue, an organisation that uses technologically advanced rescue vehicles to save human life. The main characters are ex-astronaut Jeff Tracy, founder of International Rescue, and his five adult sons, who pilot the organisation's main vehicles: the Thunderbird machines. "The Cham-Cham" opens with a United States Air Force plane being shot down during a radio broadcast of the instrumental "Dangerous Game" by popular musical group the Cass Carnaby Five. International Rescue suspect sabotage, and Lady Penelope, Tin-Tin and Parker travel to the Swiss Alps to investigate the band's current tour venue, the mountain resort Paradise Peaks. There, they discover that the attacks are being co-ordinated with the aid of an advanced computer called a "Cham-Cham".
Filmed in late 1965, "The Cham-Cham" has a show business theme and was written in the style of classic Hollywood musicals. It features several innovations in APF's use of marionette puppets. One scene features the Penelope character performing a slow dance, which was a challenge to film due to the difficulty in moving Supermarionation puppets convincingly. "The Cham-Cham" is also the first episode of any Supermarionation series to show characters skiing. "Dangerous Game", the focus of the episode's soundtrack, was devised as a Latin rhythm by series composer Barry Gray. Singer Ken Barrie recorded a lyrical version but this is not heard in the finished episode.
"The Cham-Cham" has been well received by commentators, drawing particular praise for its production design and soundtrack. Sylvia Anderson considered t |
https://en.wikipedia.org/wiki/Parametric%20array | A parametric array, in the field of acoustics, is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound, through the mixing and interaction of high frequency sound waves, effectively overcoming the diffraction limit (a kind of spatial 'uncertainty principle') associated with linear acoustics. The main side lobe-free beam of low frequency sound is created as a result of nonlinear mixing of two high frequency sound beams at their difference frequency. Parametric arrays can be formed in water, air, and earth materials/rock.
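The difference-frequency mechanism can be demonstrated numerically: passing two primary tones through a quadratic nonlinearity (a crude stand-in for the medium's nonlinear response) produces a component at f₁ − f₂ that was absent from the input. Frequencies here are scaled down for a simple sampled-signal demo; real parametric arrays use ultrasonic primaries:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                 # 1 second of samples
f1, f2 = 1200.0, 1000.0                # two primary tones
primaries = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
nonlinear = primaries ** 2             # quadratic nonlinearity

# With a 1 s window, FFT bin k corresponds to k Hz.
mag = np.abs(np.fft.rfft(nonlinear)) / len(t)
# Energy appears at the difference frequency f1 - f2 = 200 Hz
# (and at 2*f1, 2*f2, and f1 + f2), none of which were in the input.
print(mag[200] > 100 * mag[150])  # True: strong 200 Hz component
```

The trigonometric identity 2 sin(a) sin(b) = cos(a − b) − cos(a + b) is what makes the cross term of the squared signal carry the difference frequency.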
History
Priority for discovery and explanation of the parametric array owes to Peter J. Westervelt, winner of the Lord Rayleigh Medal (currently Professor Emeritus at Brown University), although important experimental work was contemporaneously underway in the former Soviet Union.
According to Muir and Albers, the concept for the parametric array occurred to Dr. Westervelt while he was stationed at the London, England, branch office of the Office of Naval Research in 1951.
According to Albers, he (Westervelt) there first observed an accidental generation of low frequency sound in air by Captain H.J. Round (British pioneer of the superheterodyne receiver) via the parametric array mechanism.
The phenomenon of the parametric array, seen first experimentally by Westervelt in the 1950s, was later explained theoretically in 1960, at a meeting of the Acoustical Society of America. A few years after this, a full paper was published as an extension of Westervelt's classic work on the nonlinear Scattering of Sound by Sound.
Foundations
The foundation for Westervelt's theory of sound generation and scattering in nonlinear acoustic media owes to an application of Lighthill's equation for fluid particle motion.
The application of Lighthill’s theory to the nonlinear acoustic realm yields the Westervelt–Lighthill Equation (WLE). Solutions to this equation have been developed using Green's functions and |
https://en.wikipedia.org/wiki/Shakuhachi%20musical%20notation | Shakuhachi musical notation is a traditional tablature-style method of transcribing shakuhachi music.
A number of systems exist for notating shakuhachi music, most of which are based on the rotsure (ロツレ) and the fuho-u (フホウ) systems.
Traditional solo shakuhachi music (honkyoku) is transmitted as a semi-oral tradition; notation is often used as a mnemonic device. However, the master-disciple relationship is given emphasis within the tradition, and written sources are considered of little value 'without experience of the living tradition of actual training within the school'. In contrast to Western staff notation, shakuhachi playing instructions commonly indicate multiple fingerings resulting in various timbres for a given pitch, and microtonal slides between semitones.
Solo Kinko school honkyoku ("original pieces") generally do not feature an explicit beat. In some notation systems, nominal rhythmic values are given; musical importance ascribed to rhythmic markings varies depending on the lineage and/or teacher.
Staff notation and graphic notation are sometimes used to notate music for shakuhachi, usually in modern music when shakuhachi is used in conjunction with Western musical instruments.
Some current publishers of traditional shakuhachi honkyoku notation include the Chikuyūsha, Chikumeisha, Chikuhoryū, and the Kokusai Shakuhachi Kenshūkan. |
https://en.wikipedia.org/wiki/Basophil%20cell | An anterior pituitary basophil is a type of cell in the anterior pituitary which manufactures hormones.
It is called a basophil because it is basophilic (readily stains with basic dyes), and typically stains a relatively deep blue or purple.
These basophils are further classified by the hormones they produce. (It is usually not possible to distinguish between these cell types using standard staining techniques.)
See also
Chromophobe cell
Melanotroph
Chromophil
Acidophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
Basophilic |
https://en.wikipedia.org/wiki/List%20of%204000-series%20integrated%20circuits | The following is a list of CMOS 4000-series digital logic integrated circuits. In 1968, the original 4000-series was introduced by RCA. Although more recent parts are considerably faster, the 4000 devices operate over a wide power supply range (3 V to 18 V recommended range for "B" series) and are well suited to unregulated battery-powered applications and interfacing with sensitive analogue electronics, where the slower operation may be an EMC advantage. The earlier datasheets included the internal schematics of the gate architectures, and a number of novel designs 'mis-used' this additional information to provide semi-analogue functions for timing skew and linear signal amplification. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 4000 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers, and not all devices are available from all sources or in all package sizes.
Overview
Non-exhaustive list of manufacturers that make or have made these kinds of ICs.
Current manufacturers of these ICs:
Nexperia (spinoff from NXP)
ON Semiconductor (acquired Motorola & Fairchild Semiconductor)
Texas Instruments (acquired National Semiconductor)
Former manufacturers of these ICs:
Hitachi
NXP (acquired Philips Semiconductors)
RCA (defunct; first introduced this 4000-series family in 1968)
Renesas Electronics (acquired Intersil)
STMicroelectronics
Toshiba Semiconductor
VEB Kombinat Mikroelektronik (defunct; was active in the 1980s)
Tesla Piešťany, s.p. (defunct; was active in the 1980s and 1990s)
various manufacturers in the former Soviet Union (e.g. Angstrem, Mikron Group, Exiton, Splav, NZPP in Russia; Mezon in Moldavia; Integral in Byelorussia; Oktyabr in Ukraine; Billur in Azerbaijan)
Logic gates
Since there are numerous 4000-series parts, this section groups related combinational logic pa |
https://en.wikipedia.org/wiki/Glioblast | A glioblast is a type of cell derived from neuroectoderm and with the ability to differentiate into several different types of neuroglia.
It comes from a precursor (spongioblast). However, the latter may also differentiate into an ependymoblast.
Glioblasts differentiate into astrocytes and oligodendrocytes. The corresponding tumor is called a glioblastoma, and it is the most common type of central nervous system malignancy.
See also
Glioblastoma multiforme
List of human cell types derived from the germ layers |
https://en.wikipedia.org/wiki/Herring%20bodies | Herring bodies or neurosecretory bodies are structures found in the posterior pituitary. They represent the terminal end of the axons from the hypothalamus, and hormones are temporarily stored in these locations. They are neurosecretory terminals.
Antidiuretic hormone (ADH) and oxytocin are both stored in Herring bodies, but are not stored simultaneously in the same Herring body.
In addition, each Herring body also contains ATP and a type of neurophysin. Neurophysins are binding proteins, of which there are two types: neurophysin I and neurophysin II, which bind to oxytocin and ADH, respectively. Neurophysin and its hormone form a complex that is considered a single protein and is stored in the neurohypophysis. Upon stimulation by the hypothalamus, secretory granules release the stored hormones into the bloodstream. Fibers from the supraoptic nuclei are concerned with ADH secretion; those from the paraventricular nuclei with oxytocin.
This anatomical structure was first described by Percy Theodore Herring in 1908. |