source | text |
|---|---|
https://en.wikipedia.org/wiki/Catch%20connective%20tissue | Catch connective tissue (also called mutable collagenous tissue) is a kind of connective tissue found in echinoderms (such as starfish and sea cucumbers) which can change its mechanical properties in a few seconds or minutes through nervous control rather than by muscular means.
Connective tissue, including dermis, tendons and ligaments, is one of four main animal tissues. Ordinary connective tissue does not change its stiffness except in the slow process of aging. Catch connective tissue, however, shows rapid, large and reversible stiffness changes in response to stimulation under nervous control. This connective tissue is specific to echinoderms in which it works in posture maintenance and mechanical defense with low energy expenditure, and in body fission and autotomy. The stiffness changes of this tissue are due to the changes in the stiffness of extracellular materials. The small number of muscle cells sometimes found scattered in this tissue has little influence on the stiffness-change mechanisms.
Tissue distribution
Catch connective tissue is found in all the extant classes of echinoderms.
Sea lilies and feather stars: ligaments connecting ossicles of arms, stalks and cirri.
Starfish: body-wall dermis; walls of tube feet.
Brittle stars: intervertebral ligaments; autotomy tendons of arm muscles.
Sea urchins: ligaments or catch apparatus, connecting spines to tests of sea urchins; tooth ligaments; compass depressor "muscles", which are in fact mostly made of connective tissues.
Sea cucumbers: body-wall dermis.
Early echinoderms were sessile organisms that fed on suspended particles carried by water currents. Their bodies were covered with small imbricate skeletal plates. The arrangement of the plates suggests that they worked as sliding joints that allowed the animals to change their body shape: they could possibly take an extended feeding posture and a flat "hiding" posture. The body plates might have been connected with catch connective tissue that allowed early |
https://en.wikipedia.org/wiki/Topcolor | Topcolor is a model of dynamical electroweak symmetry breaking in theoretical physics, in which the top quark and anti-top quark form a composite Higgs boson, bound by a new force arising from massive "top gluons". The solution to composite Higgs models was actually anticipated in 1981, and found to be the infrared fixed point for the top quark mass.
Analogy with known physics
The composite Higgs boson made from a bound pair of top-anti-top quarks is analogous to the phenomenon of superconductivity, where Cooper pairs are formed by the exchange of phonons. The pairing dynamics and its solution were treated in the Bardeen-Hill-Lindner model.
The original topcolor naturally involved an extension of the standard model color gauge group to a product group SU(3)×SU(3)×SU(3)×... One of the gauge groups contains
the top and bottom quarks, and has a sufficiently large coupling constant to cause the condensate to form. The topcolor model anticipates the idea of dimensional deconstruction and extra space dimensions, as well as the large mass of the top quark.
In 2019 this was revisited in a scheme called "scalar democracy", in which many composite Higgs bosons may form at very high energies, composed of the known quarks and leptons, perhaps bound by a universal force (e.g., gravity, or an extension of topcolor). The standard model Higgs boson is then a top-anti-top bound state. The theory predicts many new Higgs doublets, starting at the TeV mass scale, with couplings to the known fermions that may explain their masses and mixing angles. The first sequential new Higgs bosons should be accessible to the LHC.
See also
Fermion condensate
Technicolor (physics)
Hierarchy problem
Top quark condensate |
https://en.wikipedia.org/wiki/Doubling%20space | In mathematics, a metric space X with metric d is said to be doubling if there is some doubling constant M > 0 such that for any x in X and r > 0, it is possible to cover the ball B(x, r) with the union of at most M balls of radius r/2. The base-2 logarithm of M is called the doubling dimension of X. Euclidean spaces ℝ^n equipped with the usual Euclidean metric are examples of doubling spaces where the doubling constant M depends on the dimension n. For example, in one dimension, M = 2; and in two dimensions, M = 7. In general, Euclidean space ℝ^n has doubling dimension proportional to n.
Assouad's embedding theorem
An important question in metric space geometry is to characterize those metric spaces that can be embedded in some Euclidean space by a bi-Lipschitz function. This means that one can essentially think of the metric space as a subset of Euclidean space. Not all metric spaces may be embedded in Euclidean space. Doubling metric spaces, on the other hand, would seem like they have more of a chance, since the doubling condition says, in a way, that the metric space is not infinite dimensional. However, this is still not the case in general. The Heisenberg group with its Carnot-Caratheodory metric is an example of a doubling metric space which cannot be embedded in any Euclidean space.
Assouad's theorem states that, for an M-doubling metric space X, if we give it the metric d(x, y)^ε for some 0 < ε < 1, then there is an L-bi-Lipschitz map f : X → ℝ^N, where N and L depend on M and ε.
Doubling Measures
Definition
A nontrivial measure μ on a metric space X is said to be doubling if the measure of any ball is finite and approximately the measure of its double, or more precisely, if there is a constant C > 0 such that
μ(B(x, 2r)) ≤ C μ(B(x, r))
for all x in X and r > 0. In this case, we say μ is C-doubling. In fact, it can be proved that, necessarily, C ≥ 2.
A metric measure space that supports a doubling measure is necessarily a doubling metric space, where the doubling constant depends on the constant C. Conversely, every complete doubling metric space |
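To make the excerpt above concrete, the doubling condition and the doubling-measure condition can be written in symbols. This is a standard textbook formulation rather than text from the article; the point names x_1, …, x_M are the centres of the covering half-radius balls.

```latex
% Doubling metric space: some M >= 1 works for every centre x and radius r,
% with suitable points x_1, ..., x_M in X.
\exists\, M \ge 1 \;\; \forall x \in X,\; r > 0:\qquad
  B(x, r) \subseteq \bigcup_{i=1}^{M} B\!\left(x_i, \tfrac{r}{2}\right),
  \qquad \dim_{\mathrm{doubling}}(X) = \log_2 M .

% Doubling measure: doubling the radius inflates the measure by at most C.
\exists\, C > 0 \;\; \forall x \in X,\; r > 0:\qquad
  \mu\!\left(B(x, 2r)\right) \le C\, \mu\!\left(B(x, r)\right).
```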
https://en.wikipedia.org/wiki/Fornix%20%28neuroanatomy%29 | The fornix (from Latin fornix, 'arch'; plural: fornices) is a C-shaped bundle of nerve fibers in the brain that acts as the major output tract of the hippocampus. The fornix also carries some afferent fibers to the hippocampus from structures in the diencephalon and basal forebrain. The fornix is part of the limbic system. While its exact function and importance in the physiology of the brain are still not entirely clear, it has been demonstrated in humans that surgical transection—the cutting of the fornix along its body—can cause memory loss. There is some debate over what type of memory is affected by this damage, but it has been found to most closely correlate with recall memory rather than recognition memory. This means that damage to the fornix can cause difficulty in recalling long-term information such as details of past events, but it has little effect on the ability to recognize objects or familiar situations.
Structure
The fibers begin in the hippocampus on each side of the brain as fimbriae; the separate left and right sides are each called the crus of the fornix (plural crura). The bundles of fibers come together in the midline of the brain, forming the body of the fornix. The lower edge of the septum pellucidum (the membrane that separates the lateral ventricles) is attached to the upper face of the fornix body.
The body of the fornix travels anteriorly and divides again near the anterior commissure. The left and right parts separate, but there is also an anterior/posterior divergence.
The posterior fibers (called the postcommissural fornix) of each side continue through the hypothalamus to the mammillary bodies; then to the anterior nuclei of the thalamus.
The anterior fibers (precommissural fornix) end at the septal nuclei of the basal forebrain and nucleus accumbens of each half of the brain.
Commissure
The lateral portions of the body of the fornix are joined by a thin triangular lamina, named the psalterium (lyra). This lamina contains some commissural fibers that conn |
https://en.wikipedia.org/wiki/Lattice%20constant | A lattice constant or lattice parameter is one of the physical dimensions and angles that determine the geometry of the unit cells in a crystal lattice, and is proportional to the distance between atoms in the crystal. A simple cubic crystal has only one lattice constant, the distance between atoms, but in general lattices in three dimensions have six lattice constants: the lengths a, b, and c of the three cell edges meeting at a vertex, and the angles α, β, and γ between those edges.
The crystal lattice parameters a, b, and c have the dimension of length. The three numbers represent the size of the unit cell, that is, the distance from a given atom to an identical atom in the same position and orientation in a neighboring cell (except for very simple crystal structures, this will not necessarily be distance to the nearest neighbor). Their SI unit is the meter, and they are traditionally specified in angstroms (Å); an angstrom being 0.1 nanometer (nm), or 100 picometres (pm). Typical values start at a few angstroms. The angles α, β, and γ are usually specified in degrees.
Introduction
A chemical substance in the solid state may form crystals in which the atoms, molecules, or ions are arranged in space according to one of a small finite number of possible crystal systems (lattice types), each with a fairly well-defined set of lattice parameters that are characteristic of the substance. These parameters typically depend on the temperature, pressure (or, more generally, the local state of mechanical stress within the crystal), electric and magnetic fields, and the isotopic composition of the crystal. The lattice is usually distorted near impurities, crystal defects, and the crystal's surface. Parameter values quoted in manuals should specify those environment variables, and are usually averages affected by measurement errors.
Depending on the crystal system, some or all of the lengths may be equal, and some of the angles may have fixed values. In those systems, only some of t |
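The relation between the lattice constant and interatomic distances mentioned above is simple for the cubic systems. The sketch below is illustrative (the function and dictionary names are not from the article); the geometric factors for the three cubic Bravais lattices are standard crystallography.

```python
import math

# Nearest-neighbour distance as a fraction of the cubic lattice constant a.
NEAREST_NEIGHBOUR_FACTOR = {
    "simple_cubic": 1.0,            # neighbours sit along the cell edge
    "bcc": math.sqrt(3) / 2,        # neighbours sit along the body diagonal
    "fcc": 1 / math.sqrt(2),        # neighbours sit along the face diagonal
}

def nearest_neighbour_distance(lattice: str, a_angstrom: float) -> float:
    """Return the nearest-neighbour distance (in angstroms) for a cubic lattice."""
    return NEAREST_NEIGHBOUR_FACTOR[lattice] * a_angstrom

# Example: copper is FCC with a ≈ 3.615 Å, giving a Cu–Cu distance of ≈ 2.556 Å.
print(nearest_neighbour_distance("fcc", 3.615))
```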
https://en.wikipedia.org/wiki/Universal%20differential%20equation | A universal differential equation (UDE) is a non-trivial differential algebraic equation with the property that its solutions can approximate any continuous function on any interval of the real line to any desired level of accuracy.
Precisely, a (possibly implicit) differential equation P(y′, y″, …, y^(n)) = 0 is a UDE if for any continuous real-valued function f and for any positive continuous function ε there exists a smooth (C^∞) solution y of the equation with |y(x) − f(x)| < ε(x) for all real x.
The existence of a UDE was initially regarded as an analogue of the universal Turing machine for analog computers, because of a result of Shannon that identifies the outputs of the general purpose analog computer with the solutions of algebraic differential equations. However, in contrast to universal Turing machines, UDEs do not dictate the evolution of a system, but rather set out certain conditions that any evolution must fulfill.
Examples
Rubel found the first known UDE in 1981. It is an implicit differential equation of fourth order.
Duffin obtained a family of UDEs whose solutions are of class C^n for n > 3.
Briggs proposed another family of UDEs, for n > 3, whose construction is based on Jacobi elliptic functions.
Bournez and Pouly proved the existence of a fixed polynomial vector field p such that for any f and ε there exists some initial condition of the differential equation y' = p(y) that yields a unique and analytic solution satisfying |y(x) − f(x)| < ε(x) for all x in R.
See also
Zeta function universality
Hölder's theorem |
https://en.wikipedia.org/wiki/AICA%20ribonucleotide | 5-Aminoimidazole-4-carboxamide ribonucleotide (AICAR) is an intermediate in the generation of inosine monophosphate. AICAR is an analog of adenosine monophosphate (AMP) that is capable of stimulating AMP-dependent protein kinase (AMPK) activity. The drug has also been shown as a potential treatment for diabetes by increasing the metabolic activity of tissues by changing the physical composition of muscle.
Mechanism of action
The nucleoside form of AICAR, acadesine, is an analog of adenosine that enters cardiac cells to inhibit adenosine kinase and adenosine deaminase. It enhances the rate of nucleotide re-synthesis increasing adenosine generation from adenosine monophosphate only during conditions of myocardial ischemia. In cardiac myocytes, acadesine is phosphorylated to AICAR to activate AMPK without changing the levels of the nucleotides. AICAR is able to enter the de novo synthesis pathway for adenosine synthesis to inhibit adenosine deaminase causing an increase in ATP levels and adenosine levels.
Use as a performance-enhancing drug
In 2009, the French Anti-Doping Agency suspected that AICAR had been used in the 2009 Tour de France for its supposed performance-enhancing properties. Although a detection method was reportedly given to the World Anti-Doping Agency, it was unknown if this method was implemented. As of January 2011, AICAR was officially a banned substance in the World Anti-Doping Code, and the standard levels in elite athletes have been determined in order to interpret test results.
See also
Inosine monophosphate synthase
Nicotinamide riboside |
https://en.wikipedia.org/wiki/Detached%20eddy%20simulation | Detached eddy simulation (DES) is a modification of a Reynolds-averaged Navier–Stokes (RANS) model in which the model switches to a subgrid-scale formulation in regions fine enough for large eddy simulation (LES) calculations.
Details
Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution is not as demanding as pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart-Allmaras model, it can be implemented with other RANS models (Strelets, 2001), by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while DES based on the Spalart-Allmaras model acts as an LES with a wall model, DES based on other models (like two-equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solution. |
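The length-scale modification described above can be sketched in a few lines. This follows the commonly cited Spalart-Allmaras DES97 formulation, in which the wall distance is replaced by min(d, C_DES·Δ) with C_DES ≈ 0.65 and Δ the largest cell edge; the function and variable names are illustrative, not production CFD code.

```python
C_DES = 0.65  # commonly quoted calibration constant for Spalart-Allmaras DES97

def des_length_scale(wall_distance: float, dx: float, dy: float, dz: float) -> float:
    """DES length scale: RANS behaviour near walls, LES-like where the grid is fine.

    The Spalart-Allmaras wall distance d is replaced by
        d_tilde = min(d, C_DES * Delta),  with  Delta = max(dx, dy, dz),
    so that close to a wall (d much smaller than Delta) the model reduces to
    plain RANS, while far from walls on a fine grid it acts as a subgrid model.
    """
    delta = max(dx, dy, dz)
    return min(wall_distance, C_DES * delta)

# Example: a near-wall cell vs. a cell deep inside a well-resolved separated region.
print(des_length_scale(1e-4, 0.01, 0.01, 0.01))  # -> 1e-4 (RANS branch)
print(des_length_scale(0.5, 0.01, 0.01, 0.01))   # -> 0.0065 (LES branch)
```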
https://en.wikipedia.org/wiki/American%20Society%20of%20Plant%20Taxonomists | The American Society of Plant Taxonomists (ASPT) is a botanical organization formed in 1935 to "foster, encourage, and promote education and research in the field of plant taxonomy, to include those areas and fields of study that contribute to and bear upon taxonomy and herbaria", according to its bylaws. It is incorporated in the state of Wyoming, and its office is at the University of Wyoming, Department of Botany.
The ASPT publishes a quarterly botanical journal, Systematic Botany, and the irregular series Systematic Botany Monographs. The society gives annual awards for excellence in Botany. The Society gives the Asa Gray Award for "outstanding accomplishments pertinent to the goals of the Society," and the Peter Raven Award to a botanist who has "made exceptional efforts at outreach to non-scientists."
Asa Gray Awardees
2021:Elizabeth Kellogg
2020:Jeff Doyle
2019:Lucinda McDade
2018:Vicki Funk
2017:Michael Donoghue
2016:Peter F. Stevens
2015:Warren Lambert Wagner
2014:Alan Smith
2013:Bruce Baldwin
2012:Noel and Patricia Holmgren
2011:Walter S. Judd
2010:Harold E. Robinson
2009:Alan Graham
2008:William R. Anderson
2007:Scott A. Mori
2006:Douglas E. Soltis and Pamela S. Soltis
2005:Grady Webster
2004:John Beaman
2003:Beryl B. Simpson
2002:Natalie Uhl
2001:Robert F. Thorne
2000:William T. Stearn
1999:Tod Stuessey
1998:Ghillean Prance
1997:Daniel J. Crawford
1996:Peter Raven
1995:Jerzy Rzedowski
1994:Hugh H. Iltis
1993:Sherwin Carlquist
1992:Albert Charles Smith
1991:Billie L. Turner
1990:Warren H. Wagner
1989:Rupert C. Barneby
1988:Charles B. Heiser
1987:Reed C. Rollins
1986:Lincoln Constance
1985:Arthur Cronquist
1984:Rogers McVaugh
Peter Raven Awardees
2021:Tanisha Williams
2020:Susan Pell
2019:Lena Struwe
2018:Chris Martine
2017:Hans Walter Lack
2016:Lynn G. Clark
2015:John Weirsema
2014:Ken Cameron
2013
2012
2011:Robbin C. Moran
2010:Barney Lipscomb
2009:Sandra Knapp
2008:W. Hardy Eshbaugh
2007:John T. Mickel
2006:Art Kruckeberg
2005:Alan W. Meerow
2004:Da |
https://en.wikipedia.org/wiki/Outline%20of%20life%20forms | The following outline is provided as an overview of and topical guide to life forms:
A life form (also spelled life-form or lifeform) is an entity that is living, such as plants (flora), animals (fauna), and fungi (funga). It is estimated that more than 99% of all species that ever existed on Earth, amounting to over five billion species, are extinct.
Earth is the only celestial body known to harbor life forms. No form of extraterrestrial life has been discovered yet.
Archaea
Archaea – a domain of single-celled microorganisms, morphologically similar to bacteria, but possessing genes and several metabolic pathways that are more closely related to those of eukaryotes, notably the enzymes involved in transcription and translation. Many archaea are extremophiles, meaning they live in harsh environments such as hot springs and salt lakes, but archaea have since been found in a broad range of habitats.
Thermoproteota – a phylum of the Archaea kingdom. Initially
Thermoprotei
Sulfolobales – grow in terrestrial volcanic hot springs with optimum growth occurring
Euryarchaeota – In the taxonomy of microorganisms
Haloarchaea
Halobacteriales – in taxonomy, the Halobacteriales are an order of the Halobacteria, found in water saturated or nearly saturated with salt.
Methanobacteria
Methanobacteriales – an order of methanogenic archaea in the class Methanobacteria.
Methanococci
Methanococcales – an order of methanogenic archaea that includes Methanocaldococcus jannaschii, a thermophilic methanogen, meaning that it thrives at high temperatures and produces methane
Methanomicrobia
Methanosarcinales – In taxonomy, the Methanosarcinales are an order of the Methanomicrobia
Methanopyri
Methanopyrales – In taxonomy, the Methanopyrales are an order of the methanopyri.
Thermococci
Thermococcales
Thermoplasmata
Thermoplasmatales – An order of aerobic, thermophilic archaea, in the kingdom
Halophiles – organisms that thrive in high salt concentrations
Ko |
https://en.wikipedia.org/wiki/Genetic%20viability | Genetic viability is the ability of the genes present to allow a cell, organism or population to survive and reproduce. The term is generally used to mean the chance or ability of a population to avoid the problems of inbreeding. Less commonly genetic viability can also be used in respect to a single cell or on an individual level.
Inbreeding depletes heterozygosity of the genome, meaning there is a greater chance of identical alleles at a locus. When these alleles are non-beneficial, homozygosity could cause problems for genetic viability. These problems could include effects on the individual fitness (higher mortality, slower growth, more frequent developmental defects, reduced mating ability, lower fecundity, greater susceptibility to disease, lowered ability to withstand stress, reduced intra- and inter-specific competitive ability) or effects on the entire population fitness (depressed population growth rate, reduced regrowth ability, reduced ability to adapt to environmental change). See Inbreeding depression. When a population of plants or animals loses their genetic viability, their chance of going extinct increases.
Necessary conditions
To be genetically viable, a population of plants or animals requires a certain amount of genetic diversity and a certain population size. For long-term genetic viability, the population size should consist of enough breeding pairs to maintain genetic diversity. The precise effective population size can be calculated using a minimum viable population analysis. Higher genetic diversity and a larger population size will decrease the negative effects of genetic drift and inbreeding in a population. When adequate measures have been met, the genetic viability of a population will increase.
Causes for decrease
The main cause of a decrease in genetic viability is loss of habitat. This loss can occur because of, for example urbanization or deforestation causing habitat fragmentation. Natural events like earthquakes, floods |
https://en.wikipedia.org/wiki/Frank%20Wanlass | Frank Marion Wanlass (May 17, 1933 in Thatcher, AZ – September 9, 2010 in Santa Clara, California) was an American electrical engineer. He is best known for inventing CMOS (complementary MOS) logic with Chih-Tang Sah in 1963. CMOS has since become the standard semiconductor device fabrication process for MOSFETs (metal–oxide–semiconductor field-effect transistors).
Biography
He obtained his PhD from the University of Utah. Wanlass invented CMOS (complementary MOS) logic circuits with Chih-Tang Sah in 1963, while working at Fairchild Semiconductor. Wanlass was given U.S. patent #3,356,858 for "Low Stand-By Power Complementary Field Effect Circuitry" in 1967.
In 1963, while studying MOSFET (metal–oxide–semiconductor field-effect transistor) structures, he noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology.
In 1964, Wanlass moved to General Microelectronics (GMe), where he made the first commercial MOS integrated circuits, and a year later to General Instrument Microelectronics Division in New York,
where he developed four-phase logic.
He was also remembered for his contribution to solving the problem of threshold-voltage instability in MOS transistors caused by sodium ion drift.
In 1991, Wanlass was awarded the IEEE Solid-State Circuits Award.
In 2009, on the 50th anniversary of both the MOSFET and the integrated circuit, Frank Wanlass was inducted into the National Inventors Hall of Fame for his invention of CMOS logic. He was part of the 2009 class celebrating semiconductor pioneers, along with inventors of semiconductor technologies such as the MOSFET (Mohamed Atalla and Dawon Kahng), planar process (Jean Hoerni), EPROM (Dov Frohman) and molecular beam epitaxy (Alfred Y. Cho).
Wanlass died on 9 September 2010. |
https://en.wikipedia.org/wiki/Serum%20protein%20electrophoresis | Serum protein electrophoresis (SPEP or SPE) is a laboratory test that examines specific proteins in the blood called globulins. The most common indications for a serum protein electrophoresis test are to diagnose or monitor multiple myeloma, a monoclonal gammopathy of uncertain significance (MGUS), or further investigate a discrepancy between a low albumin and a relatively high total protein. Unexplained bone pain, anemia, proteinuria, chronic kidney disease, and hypercalcemia are also signs of multiple myeloma, and indications for SPE. Blood must first be collected, usually into an airtight vial or syringe. Electrophoresis is a laboratory technique in which the blood serum (the fluid portion of the blood after the blood has clotted) is applied to either an acetate membrane soaked in a liquid buffer, or to a buffered agarose gel matrix, or into liquid in a capillary tube, and exposed to an electric current to separate the serum protein components into five major fractions by size and electrical charge: serum albumin, alpha-1 globulins, alpha-2 globulins, beta 1 and 2 globulins, and gamma globulins.
Acetate or gel electrophoresis
Proteins are separated by both electrical forces and electroendosmotic forces. The net charge on a protein is based on the sum charge of its amino acids, and the pH of the buffer. Proteins are applied to a solid matrix such as an agarose gel, or a cellulose acetate membrane in a liquid buffer, and electric current is applied. Proteins with a negative charge will migrate towards the positively charged anode. Albumin has the most negative charge, and will migrate furthest towards the anode. Endosmotic flow is the movement of liquid towards the cathode, which causes proteins with a weaker charge to move backwards from the application site. Gamma proteins are primarily separated by endosmotic forces. The drawing of the electrophoretic bands provided by the laboratory may be difficult to remember, and medical students, residents, nur |
https://en.wikipedia.org/wiki/Loop%20quantum%20cosmology | Loop quantum cosmology (LQC) is a finite, symmetry-reduced model of loop quantum gravity (LQG) that predicts a "quantum bridge" between contracting and expanding cosmological branches.
The distinguishing feature of LQC is the prominent role played by the quantum geometry effects of loop quantum gravity (LQG). In particular, quantum geometry creates a brand new repulsive force which is totally negligible at low space-time curvature but rises very rapidly in the Planck regime, overwhelming the classical gravitational attraction and thereby resolving singularities of general relativity. Once singularities are resolved, the conceptual paradigm of cosmology changes and one has to revisit many of the standard issues—e.g., the "horizon problem"—from a new perspective.
Since LQG is based on a specific quantum theory of Riemannian geometry, geometric observables display a fundamental discreteness that play a key role in quantum dynamics: While predictions of LQC are very close to those of quantum geometrodynamics (QGD) away from the Planck regime, there is a dramatic difference once densities and curvatures enter the Planck scale. In LQC the Big Bang is replaced by a quantum bounce.
Study of LQC has led to many successes, including the emergence of a possible mechanism for cosmic inflation, resolution of gravitational singularities, as well as the development of effective semi-classical Hamiltonians.
This subfield originated in 1999 with the work of Martin Bojowald, and was further developed in particular by Abhay Ashtekar and Jerzy Lewandowski, as well as Tomasz Pawłowski and Parampreet Singh, et al. As of late 2012, LQC represented a very active field in physics, with about three hundred papers on the subject published in the literature. There has also recently been work by Carlo Rovelli, et al. on relating LQC to spinfoam cosmology.
However, the results obtained in LQC are subject to the usual restriction that a truncated classical theory, then quantized, might not display the true behavi |
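The quantum bounce mentioned in the excerpt above is often summarized by an effective, quantum-corrected Friedmann equation. The display below is the commonly quoted effective-dynamics form, included here for illustration rather than taken from the article.

```latex
% Effective LQC-corrected Friedmann equation (commonly quoted form):
%   H is the Hubble rate, \rho the matter energy density, \rho_c a critical
%   density of the order of the Planck density.
H^{2} = \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_{c}}\right)
% At low densities the correction term is negligible and classical cosmology is
% recovered; when \rho reaches \rho_c the Hubble rate vanishes and the
% contracting branch bounces into an expanding one, replacing the classical
% Big Bang singularity.
```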
https://en.wikipedia.org/wiki/Mechatronics | Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, and product engineering.
As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics, electrical and electronics, hence the name being a portmanteau of the words "mechanics" and "electronics"; however, as the complexity of technical systems continued to evolve, the definition had been broadened to include more technical areas.
The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word mechatronics was registered as a trademark by the company in Japan with the registration number "46-32714" in 1971. The company later released the right to use the word to the public, and the word began being used globally. Currently the word is translated into many languages and is considered an essential term for advanced automated industry.
Many people treat mechatronics as a modern buzzword synonymous with automation, robotics and electromechanical engineering.
French standard NF E 01-010 gives the following definition: "approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality".
History
The word mechatronics was registered as a trademark by the company in Japan with the registration number "46-32714" in 1971. The company later released the right to use the word t |
https://en.wikipedia.org/wiki/MetaCASE%20tool | A metaCASE tool is a type of application software that provides the possibility to create one or more modeling methods, languages or notations for use within the process of software development. Often the result is a modeling tool for that language. MetaCASE tools are thus a kind of language workbench, generally considered as being focused on graphical modeling languages.
Another definition: MetaCASE tools are software tools that support the design and generation of CASE tools.
In general, metaCASE tools should provide generic CASE tool components that can be customised and instantiated into particular CASE tools.
The intent of metaCASE tools is to capture the specification of the required CASE tool and then generate the tool from the specification.
Overview
Quick CASE tools overview
Building large-scale software applications is a very complicated process that is not easy to handle. Software companies must have a good system of cooperation across development teams, and good discipline is highly required.
Nevertheless, using CASE tools is a modern way to speed up software development and ensure a higher level of application design. However, there are other issues which have to be kept in mind. First of all, usage of these tools doesn't guarantee good results, because they are usually large, complex and extremely costly to produce and adopt.
CASE tools can be classified as either front-end or back-end tools depending on the phase of software development they are intended to support: for example, "front-end" analysis and design tools versus "back-end" implementation tools. For software engineers working on a particular application project, the choice of CASE tool would typically be
determined by factors such as the size of the project, the methodology used, availability of tools, project budget, and the number of people involved. For some applications, a suitable tool may not be available or the project may be too small to benefit from one.
CASE tools support a fixed number o |
https://en.wikipedia.org/wiki/Coxeter%20graph | In the mathematical field of graph theory, the Coxeter graph is a 3-regular graph with 28 vertices and 42 edges. It is one of the 13 known cubic distance-regular graphs. It is named after Harold Scott MacDonald Coxeter.
Properties
The Coxeter graph has chromatic number 3, chromatic index 3, radius 4, diameter 4 and girth 7. It is also a 3-vertex-connected graph and a 3-edge-connected graph. It has book thickness 3 and queue number 2.
The Coxeter graph is hypohamiltonian: it does not itself have a Hamiltonian cycle but every graph formed by removing a single vertex from it is Hamiltonian. It has rectilinear crossing number 11, and is the smallest cubic graph with that crossing number.
Construction
The simplest construction of the Coxeter graph is from the Fano plane. Take the 7C3 = 35 possible 3-combinations on 7 objects. Discard the 7 triplets that correspond to the lines of the Fano plane, leaving 28 triplets. Link two triplets if they are disjoint. The result is the Coxeter graph. This construction exhibits the Coxeter graph as an induced subgraph of the odd graph O4, also known as the Kneser graph K(7, 3).
The Coxeter graph may also be constructed from the smaller distance-regular Heawood graph by constructing a vertex for each 6-cycle in the Heawood graph and an edge for each disjoint pair of 6-cycles.
The Coxeter graph may be derived from the Hoffman-Singleton graph. Take any vertex v in the Hoffman-Singleton graph. There is an independent set of size 15 that includes v. Delete the 7 neighbors of v, and the whole independent set including v, leaving behind the Coxeter graph.
Algebraic properties
The automorphism group of the Coxeter graph is a group of order 336. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Coxeter graph is a symmetric graph. It has automorphisms that take any vertex to any other vertex and any edge to any other edge. According to the Foster census, the Coxeter graph, referenced as F28 |
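The Fano-plane construction described in the excerpt is short enough to carry out directly. The sketch below is illustrative (the particular labelling of the seven Fano lines is one standard choice, not taken from the article); it builds the graph and checks the expected 28 vertices, 42 edges and 3-regularity.

```python
from itertools import combinations

# The seven lines of one standard labelling of the Fano plane on points 1..7.
FANO_LINES = {frozenset(line) for line in
              [(1, 2, 3), (1, 4, 5), (1, 6, 7),
               (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]}

# Vertices: the 35 triples of {1,...,7} minus the 7 Fano lines -> 28 vertices.
vertices = [frozenset(t) for t in combinations(range(1, 8), 3)
            if frozenset(t) not in FANO_LINES]

# Edges: join two triples whenever they are disjoint.
edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]

print(len(vertices), len(edges))                               # 28 42
print(all(sum(v in e for e in edges) == 3 for v in vertices))  # True: 3-regular
```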
https://en.wikipedia.org/wiki/Oded%20Schramm | Oded Schramm (; December 10, 1961 – September 1, 2008) was an Israeli-American mathematician known for the invention of the Schramm–Loewner evolution (SLE) and for working at the intersection of conformal field theory and probability theory.
Biography
Schramm was born in Jerusalem. His father, Michael Schramm, was a biochemistry professor at the Hebrew University of Jerusalem.
He attended Hebrew University, where he received his bachelor's degree in mathematics and computer science in 1986 and his master's degree in 1987, under the supervision of Gil Kalai. He then received his PhD from Princeton University in 1990 under the supervision of William Thurston.
After receiving his doctorate, he worked for two years at the University of California, San Diego, and then had a permanent position at the Weizmann Institute from 1992 to 1999. In 1999 he moved to the Theory Group at Microsoft Research in Redmond, Washington, where he remained for the rest of his life.
He and his wife had two children, Tselil and Pele. Tselil is an assistant professor of statistics at Stanford University.
On September 1, 2008, Schramm fell to his death while scrambling Guye Peak, north of Snoqualmie Pass in Washington.
Research
A constant theme in Schramm's research was the exploration of relations between discrete models and their continuous scaling limits, which for a number of models turn out to be conformally invariant.
Schramm's most significant contribution was the invention of Schramm–Loewner evolution, a tool which has paved the way for mathematical proofs of conjectured scaling limit relations on models from statistical mechanics such as self-avoiding random walk and percolation. This technique has had a profound impact on the field. It has been recognized by many awards to Schramm and others, including a Fields Medal to Wendelin Werner, who was one of Schramm's principal collaborators, along with Gregory Lawler. The New York Times wrote in his obituary:
Schramm's doctorate |
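The excerpt above credits Schramm with the invention of the Schramm–Loewner evolution; for illustration, the standard chordal definition can be stated compactly. This is a textbook formulation added here, not text from the article.

```latex
% Chordal Loewner equation with a Brownian driving function (standard definition):
\frac{\partial g_{t}(z)}{\partial t} = \frac{2}{g_{t}(z) - W_{t}},
\qquad g_{0}(z) = z, \qquad W_{t} = \sqrt{\kappa}\, B_{t},
% where B_t is a standard one-dimensional Brownian motion. The random curve
% traced out by the singular points of the maps g_t is SLE_kappa; particular
% values of kappa arise as scaling limits of lattice models, e.g. kappa = 6
% for critical percolation interfaces.
```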
https://en.wikipedia.org/wiki/Curvilinear%20perspective | Curvilinear perspective, also five-point perspective, is a graphical projection used to draw 3D objects on 2D surfaces. It was formally codified in 1968 by the artists and art historians André Barre and Albert Flocon in the book La Perspective curviligne, which was translated into English in 1987 as Curvilinear Perspective: From Visual Space to the Constructed Image and published by the University of California Press.
Curvilinear perspective is sometimes colloquially called fisheye perspective, by analogy to a fisheye lens. In computer animation and motion graphics, it may also be called tiny planet.
History
An early example of approximated five-point curvilinear perspective is within the Arnolfini Portrait (1434) by the Flemish Primitive Jan van Eyck. Later examples may be found in the mannerist painter Parmigianino's Self-portrait in a Convex Mirror (c. 1524) and A View of Delft (1652) by the Dutch Golden Age painter Carel Fabritius.
In 1959, Flocon had acquired a copy of Grafiek en tekeningen by M. C. Escher who strongly impressed him with his use of bent and curved perspective, which influenced the theory Flocon and Barre were developing. They started a long correspondence, in which Escher called Flocon a "kindred spirit".
Horizon and vanishing points
The system uses both curved perspective lines and an array of straight converging ones to approximate the image on the retina of the eye, which is itself spherical, more accurately than the traditional linear perspective, which only uses straight lines but is very distorted at the edges.
It uses either four, five or more vanishing points:
In five-point (fisheye) perspective, four vanishing points, named N, W, S, and E, are placed around a circle, plus one vanishing point in the center of the circle.
Four, or infinite-point perspective is the one that (arguably) most approximates the perspective of the human eye, while at the same time being effective for making impossible spaces, while five point is the |
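A computational analogue of the fisheye image that five-point perspective approximates by hand is the standard equidistant fisheye camera model. The sketch below uses that model purely for illustration; it is an assumption of this example, not the Barre–Flocon construction, and the function name is hypothetical.

```python
import math

def equidistant_fisheye(x: float, y: float, z: float, f: float = 1.0):
    """Project a 3D point onto a fisheye image plane (equidistant model, r = f*theta).

    theta is the angle between the viewing direction (+z) and the point; straight
    lines in the scene map to curves on the image, which is the visual effect that
    curvilinear (five-point) perspective approximates by hand.
    """
    r3 = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r3)    # angle away from the optical axis
    phi = math.atan2(y, x)       # direction around the axis
    r = f * theta                # equidistant mapping
    return r * math.cos(phi), r * math.sin(phi)

# Points straight ahead land at the image centre; points 90 degrees off-axis land
# on a circle of radius f*pi/2, which loosely corresponds to the circle carrying
# the N/W/S/E vanishing points in the five-point construction.
print(equidistant_fisheye(0.0, 0.0, 1.0))   # (0.0, 0.0)
print(equidistant_fisheye(1.0, 0.0, 0.0))   # (~1.5708, 0.0)
```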
https://en.wikipedia.org/wiki/Ibrahim%20ibn%20Sinan | Ibrahim ibn Sinan (Arabic: Ibrāhīm ibn Sinān ibn Thābit ibn Qurra; born 295–296 AH / 908–909 AD in Baghdad, died 334–335 AH / 946 AD in Baghdad, aged about 38) was a mathematician and astronomer who belonged to a family of scholars originally from Harran in northern Mesopotamia. He was the son of Sinan ibn Thabit (died 943) and the grandson of Thābit ibn Qurra (died 901). Like his grandfather, he belonged to a religious sect of star worshippers known as the Sabians of Harran.
Ibrahim ibn Sinan studied geometry, in particular tangents to circles. He made advances in the quadrature of the parabola and the theory of integration, generalizing the work of Archimedes, which was unavailable at the time. Ibrahim ibn Sinan is often considered to be one of the most important mathematicians of his time. |
https://en.wikipedia.org/wiki/LED%20tattoo | A light-emitting diode tattoo is a type of body modification similar to a tattoo, but specifically involves implantation of technologically based materials versus traditional ink injection into the layers of the skin. LED tattoos are accomplished by a combination of silicon-silk technology and a miniature lighting device known as a light-emitting diode. While there is potential for many applications in the medical, commercial and personal domains, the technology is still in the development stage.
Technological limitations
Current medical devices are limited by their isolation from the body and their placement on rigid silicon. Current devices also contain gold and titanium, which are required for electrical connections. Both gold and titanium are bio-compatible, which means that they will not be rejected by the body as a foreign substance. However, biocompatibility is not as preferable as biodegradability, because the latter does not leave behind any unnecessary materials; so researchers are working on biodegradable contacts to eliminate all remnants but the silicon. The current form of the LED tattoo has been implanted on mice without harm. Current research on silicon-silk technology is being conducted at the University of Pennsylvania's Engineering Department. Additionally, the Royal Philips Electronics of the Netherlands has shown commercial interest in the research of silicon silk technology, specifically LED tattoos as a means to extend the digital experience, or interactivity with the digital product.
Development
Future LED tattoos may use silicon chips that are around the length of a small grain of rice, about 1 millimeter long and just 250 nanometers thick. The chips are placed on thin films of silk, which cause the electronics to conform to biological tissue. This process is aided when saline solution is added, helping the silicon mold to the shape of the skin. Silk dissolves away over time, which can occur immediately after the operati |
https://en.wikipedia.org/wiki/Synthetic%20lethality | Synthetic lethality is defined as a type of genetic interaction where the combination of two genetic events results in cell death or death of an organism. Although the foregoing explanation is wider than this, it is common when referring to synthetic lethality to mean the situation arising by virtue of a combination of deficiencies of two or more genes leading to cell death (whether by means of apoptosis or otherwise), whereas a deficiency of only one of these genes does not. In a synthetic lethal genetic screen, it is necessary to begin with a mutation that does not result in cell death, although the effect of that mutation could result in a differing phenotype (slow growth for example), and then systematically test other mutations at additional loci to determine which, in combination with the first mutation, causes cell death arising by way of deficiency or abolition of expression.
Synthetic lethality has utility for purposes of molecular targeted cancer therapy. The first example of a molecular targeted therapeutic agent, which exploited a synthetic lethal approach, arose by means of an inactivated tumor suppressor gene (BRCA1 and 2), a treatment which received FDA approval in 2016 (PARP inhibitor). A sub-case of synthetic lethality, where vulnerabilities are exposed by the deletion of passenger genes rather than tumor suppressor is the so-called "collateral lethality".
Background
The phenomenon of synthetic lethality was first described by Calvin Bridges in 1922, who noticed that some combinations of mutations in the model organism Drosophila melanogaster (the common fruit fly) confer lethality. Theodore Dobzhansky coined the term "synthetic lethality" in 1946 to describe the same type of genetic interaction in wildtype populations of Drosophila. If the combination of genetic events results in a non-lethal reduction in fitness, the interaction is called synthetic sickness. Although in classical genetics the term synthetic lethality refers to the interaction b |
https://en.wikipedia.org/wiki/Rectum | The rectum (plural: rectums or recta) is the final straight portion of the large intestine in humans and some other mammals, and the gut in others. The adult human rectum is about 12 centimetres (4.7 in) long, and begins at the rectosigmoid junction (the end of the sigmoid colon) at the level of the third sacral vertebra or the sacral promontory depending upon what definition is used. Its diameter is similar to that of the sigmoid colon at its commencement, but it is dilated near its termination, forming the rectal ampulla. It terminates at the level of the anorectal ring (the level of the puborectalis sling) or the dentate line, again depending upon which definition is used. In humans, the rectum is followed by the anal canal, which is about 4 centimetres (1.6 in) long, before the gastrointestinal tract terminates at the anal verge. The word rectum comes from the Latin rectum intestinum, meaning straight intestine.
Structure
The rectum is a part of the lower gastrointestinal tract. The rectum is a continuation of the sigmoid colon, and connects to the anus. The rectum follows the shape of the sacrum and ends in an expanded section called an ampulla where feces is stored before its release via the anal canal. An ampulla is a cavity, or the dilated end of a duct, shaped like a Roman ampulla. The rectum joins with the sigmoid colon at the level of S3, and joins with the anal canal as it passes through the pelvic floor muscles.
Unlike other portions of the colon, the rectum does not have distinct taeniae coli. The taeniae blend with one another in the sigmoid colon five centimeters above the rectum, becoming a singular longitudinal muscle that surrounds the rectum on all sides for its entire length.
Blood supply and drainage
The blood supply of the rectum changes between the top and bottom portions. The top two thirds is supplied by the superior rectal artery. The lower third is supplied by the middle and inferior rectal arteries.
The superior rectal artery is a single artery that is a continuation of the inf |
https://en.wikipedia.org/wiki/Adaptive%20algorithm | An adaptive algorithm is an algorithm that changes its behavior at the time it is run, based on information available and on an a priori defined reward mechanism (or criterion). Such information could be the history of recently received data, information on the available computational resources, or other run-time acquired (or a priori known) information related to the environment in which it operates.
Among the most used adaptive algorithms is the Widrow-Hoff’s least mean squares (LMS), which represents a class of stochastic gradient-descent algorithms used in adaptive filtering and machine learning. In adaptive filtering the LMS is used to mimic a desired filter by finding the filter coefficients that relate to producing the least mean square of the error signal (difference between the desired and the actual signal).
For example, stable partition using no additional memory is O(n lg n), but given O(n) memory, it can be O(n) in time. As implemented by the C++ Standard Library, stable_partition is adaptive and so it acquires as much memory as it can get (up to what it would need at most) and applies the algorithm using that available memory. Another example is adaptive sort, whose behavior changes depending on the presortedness of its input.
An example of an adaptive algorithm in radar systems is the constant false alarm rate (CFAR) detector.
In machine learning and optimization, many algorithms are adaptive or have adaptive variants, which usually means that the algorithm parameters such as learning rate are automatically adjusted according to statistics about the optimisation thus far (e.g. the rate of convergence). Examples include adaptive simulated annealing, adaptive coordinate descent, adaptive quadrature, AdaBoost, Adagrad, Adadelta, RMSprop, and Adam.
In data compression, adaptive coding algorithms such as Adaptive Huffman coding or Prediction by partial matching can take a stream of data as input, and adapt their compression technique based on the symbols that th |
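Below is a minimal sketch of the LMS adaptive filter described in the excerpt above, used here in a system-identification setting. The setup, signal names and parameter values (filter length, step size, noise level) are illustrative assumptions, not taken from the article.

```python
import numpy as np

def lms_filter(x, d, num_taps=4, mu=0.01):
    """Minimal LMS adaptive filter (Widrow-Hoff).

    x : input signal, d : desired signal, mu : step size (learning rate).
    At each step the filter output is compared with the desired sample and the
    coefficients are nudged along the negative gradient of the squared error,
    so the algorithm adapts its behaviour to the data it has seen so far.
    """
    w = np.zeros(num_taps)                       # coefficients, adapted online
    y, e = np.zeros(len(x)), np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]      # x[n], x[n-1], ..., most recent first
        y[n] = w @ u                             # filter output
        e[n] = d[n] - y[n]                       # error against the desired signal
        w += 2 * mu * e[n] * u                   # LMS coefficient update
    return y, e, w

# Example: identify an unknown FIR system h from its noisy output.
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
_, _, w = lms_filter(x, d, num_taps=4, mu=0.01)
print(np.round(w, 2))                            # should be close to h
```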
https://en.wikipedia.org/wiki/1964%20PRL%20symmetry%20breaking%20papers | The 1964 PRL symmetry breaking papers were written by three teams who proposed related but different approaches to explain how mass could arise in local gauge theories. These three papers were written by: Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, C. Richard Hagen, and Tom Kibble (GHK). They are credited with the theory of the Higgs mechanism and the prediction of the Higgs field and Higgs boson. Together, these provide a theoretical means by which Goldstone's theorem (a problematic limitation affecting early modern particle physics theories) can be avoided. They show how gauge bosons can acquire non-zero masses as a result of spontaneous symmetry breaking within gauge invariant models of the universe.
As such, these form the key element of the electroweak theory that forms part of the Standard Model of particle physics, and of many models, such as the Grand Unified Theory, that go beyond it. The papers that introduce this mechanism were published in Physical Review Letters (PRL) and were each recognized as milestone papers by PRL's 50th anniversary celebration. All six physicists were awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work, and in 2013 Englert and Higgs received the Nobel Prize in Physics.
On 4 July 2012, the two main experiments at the LHC (ATLAS and CMS) both reported independently the confirmed existence of a previously unknown particle with a mass of about 125 GeV/c² (about 133 proton masses, on the order of 10⁻²⁵ kg), which is "consistent with the Higgs boson" and widely believed to be the Higgs boson.
Introduction
A gauge theory of elementary particles is a very attractive potential framework for constructing the ultimate theory. Such a theory has the very desirable property of being potentially renormalizable—shorthand for saying that all calculational infinities encountered can be consistently absorbed into a few parameters of the theory. However, as soon as one gives mass to the gauge |
https://en.wikipedia.org/wiki/Per%20capita | Per capita is a Latin phrase literally meaning "by heads" or "for each head", and idiomatically used to mean "per person". The term is used in a wide variety of social sciences and statistical research contexts, including government statistics, economic indicators, and built environment studies.
It is commonly used in the field of statistics in place of saying "per person" (although per caput is the Latin for "per head").
It is also used in wills to indicate that each of the named beneficiaries should receive, by devise or bequest, equal shares of the estate. This is in contrast to a per stirpes division, in which each branch (Latin stirps, plural stirpes) of the inheriting family inherits an equal share of the estate. This is often used with the '2-0 rule', a statistical principle that determines which group is larger per capita. Under the 2-0 rule, a group is the largest per capita if it has both the biggest total size and size of the group of the objects in question, therefore resulting in a 2-0 score.
See also
Per capita income |
https://en.wikipedia.org/wiki/CUBIC | CUBIC (abbreviation for “clear, unobstructed brain/body imaging cocktails and computational analysis”) is a histology method that allows tissues to be made transparent (a process called “tissue clearing”). As a result, it makes investigation of large biological samples with microscopy easier and faster.
The method was published in 2014 by Etsuo A. Susaki and Hiroki R. Ueda, primarily for use in neurobiology research of brains from model organisms like rodents or small primates. In the following years, other works were published using the CUBIC method on other tissues such as lymph nodes or mammary glands. CUBIC can also be combined with CLARITY-based tissue clearing methods.
Used chemicals and performing of method
The opacity of brain tissue is caused mainly by light scattering at interfaces between environments with different refractive indexes, mainly between lipids and other tissue compounds. Therefore, partial delipidation and refractive index matching of the tissue and the surrounding medium is a straightforward way to make the tissue less opaque and therefore transparent – cleared.
Development of the CUBIC pipeline was inspired by a previously published clearing protocol named Scale (a mixture of glycerol, urea and detergent), because of its simplicity and optimal compatibility with fluorescent proteins. The authors of CUBIC screened 40 chemicals corresponding to those used in Scale with the aim of conserving compatibility with fluorescent reporters while achieving better and faster clearing of the tissue. They found that basic amino alcohols are ideally suited for this purpose, probably because amino groups effectively solvate phospholipids and basicity helps to preserve the fluorescence signal. Amino alcohols also have a beneficial effect when used for clearing other tissues, which are mostly highly vascularized and whose opacity is caused by absorption of light by hemoglobin on top of light scattering. Amino alcohols reduce the pigmentation of those tissues very effectively by eluting the heme from hemoglobin |
https://en.wikipedia.org/wiki/RAB7L1 | RAB7, member RAS oncogene family-like 1 is a protein that in humans is encoded by the RAB7L1 gene. The gene is also known as RAB7L. RAB7L1 encodes a small GTP-binding protein and is a member of the Ras superfamily.
Model organisms
Model organisms have been used in the study of RAB7L1 function. A conditional knockout mouse line, called Rab7l1tm1a(EUCOMM)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists — at the Wellcome Trust Sanger Institute.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty one tests were carried out on mutant mice, but no significant abnormalities were observed. |
https://en.wikipedia.org/wiki/James%20Hutchison%20Stirling | James Hutchison Stirling (22 June 1820 – 19 March 1909) was a Scottish idealist philosopher and physician. His work The Secret of Hegel (1st edition, 1865, in 2 vols.; revised edition, 1898, in 1 vol.) gave great impetus to the study of Hegelian philosophy both in Britain and in the United States, and it was also accepted as an authoritative work on Hegel's philosophy in Germany and Italy. The book helped to create the philosophical movement known as British idealism.
Biography
James Hutchison Stirling was born in Glasgow, Scotland, the fifth son (and the youngest of six children) of William Stirling (died 14 March 1851) and Elizabeth Christie (d. 1828). William was a wealthy textile manufacturer who was a partner in the Glasgow firm of James Hutchison & Co., which manufactured muslin (lightweight cotton cloth in a plain weave, used for making sheets and for a variety of other purposes). William was known for his deeply-held religious views, many of which strongly influenced his son James.
Stirling studied at Young's Academy in Glasgow, followed by nine years of education (1833–1842) at the University of Glasgow, where he studied medicine, history, and classics. He became a Licentiate (1842, medical diploma) and Fellow (1860) of the Royal College of Surgeons of Edinburgh.
Stirling married Jane Hunter Mair (died 5 July 1903), an old family friend, on 28 April 1847 in Irvine, North Ayrshire, Scotland. The couple had seven children (five daughters and two sons), as follows: Jessie Jane Stirling (born 26 June 1850) (who married Rev. Robert Armstrong of Glasgow), Elizabeth Margaret Stirling (11 February 1852 – 1871), Amelia Hutchison Stirling, Florence Hutchison Stirling (1858 – 6 May 1948), Lucy Stirling, William Stirling, and David Stirling. Stirling's daughter Amelia wrote many books on historical subjects, and she was the joint-translator, with William Hale White (1831–1913), of Spinoza's Ethics (1883). She also wrote a biography of her father James titled James Hut |
https://en.wikipedia.org/wiki/Aloy | Aloy is a fictional character and protagonist of the 2017 video game Horizon Zero Dawn and its sequel Horizon Forbidden West. In the games' post-apocalyptic tribal setting, she is born in 3021, raised as an outcast, and trains as a warrior in order to win a ritual competition to discover her mother's identity. After narrowly evading an assassination attempt, she embarks on a journey to stop a cult that worships an artificial intelligence bent on the world's destruction, while also hunting machines that have grown hostile to humans. She has been critically praised for her design and characterization. She is voiced by American voice actress Ashly Burch and modeled after Dutch actress Hannah Hoekstra.
Character development
Aloy was created as a character who could provide many tactical options in battle. As a "hunter at heart", she has little compassion for machines beyond a "hunter's respect". She also has a gritty personality, disliking "comforts or ease", and tends to be "very forthright and blunt, sometimes even confrontational" in how she addresses issues. Guerrilla Games always envisioned the game as starring a strong female character, with game director Mathijs de Jonge citing Sarah Connor from Terminator, Ellen Ripley from Alien, and Ygritte from Game of Thrones as influences. Sony decided to do rigorous market testing, believing that adding a female lead might be a "risk", but approved her as the lead role. The developers were aware of the strong female character cliche and tried to make a more human character with a more interesting personality. Aloy's physical likeness was based on the Dutch actress Hannah Hoekstra.
Appearances
Horizon Zero Dawn
Aloy is first introduced as an infant who has been entrusted to the care of Rost, a man who has been cast out by the Nora tribe. Six years later, while exploring on her own, she falls into a forbidden bunker that was created by ancient humans. While trying to escape from the destroyed facility, she discovers an augment |
https://en.wikipedia.org/wiki/Clone%20%28cell%20biology%29 | A clone is a group of identical cells that share a common ancestry, meaning they are derived from the same cell.
Clonality implies the state of a cell or a substance being derived from one source or the other. Thus there are terms like polyclonal—derived from many clones; oligoclonal—derived from a few clones; and monoclonal—derived from one clone. These terms are most commonly used in context of antibodies or immunocytes.
Contexts
This concept of clone assumes importance as all the cells that form a clone share common ancestry, which has a very significant consequence: shared genotype.
One of the most prominent usage is in describing a clone of B cells. The B cells in the body have two important phenotypes (functional forms)—the antibody secreting, terminally differentiated (that is, they cannot divide further) plasma cells, and the memory and the naive cells—both of which retain their proliferative potential.
Another important area where one can talk of "clones" of cells is neoplasms. Many of the tumors derive from one (sufficiently) mutated cell, so they are technically a single clone of cells. However, during the course of cell division, one of the cells can get mutated further and acquire new characteristics to diverge as a new clone. This view of cancer onset has, however, been challenged in recent years, and many tumors have been argued to have a polyclonal origin, i.e. derived from two or more cells or clones, including malignant mesothelioma.
All the granulosa cells in a Graafian follicle are in fact clones.
Paroxysmal nocturnal hemoglobinuria is a disorder of bone marrow cells resulting in shortened life of red blood cells, which is also a result of clonal expansion, i.e., all the altered cells are originally derived from a single cell, which also somewhat compromises the functioning of other "normal" bone marrow cells.
Basis of clonal proliferation
Most other cells cannot divide indefinitely as after a few cycles of cell division the cells stop express |
https://en.wikipedia.org/wiki/Dolby%20Voice | Dolby Voice is an audio communication technology developed by Dolby Laboratories since at least 2012.
This solution is aimed at improving audio quality in virtual environments such as enterprise-level videoconferencing.
It is implemented using commercially available hardware and/or software and uses the proprietary Dolby Voice Codec (DVC) audio codec.
Features
This technology was created to improve audio quality compared to other similar technologies, through various audio processing features:
dynamic audio leveling to focus on the human voice and to equalize the audio levels of participants, easing listening
spatialization of audio to improve voice clarity and reduce fatigue by preventing speech from overlapping when multiple participants are talking at the same time
noise reduction to limit unwanted background sounds in noisy environments
echo reduction to limit audio reinjection when input and output devices are placed close together
while, at the same time, avoiding any sacrifice of bandwidth efficiency or network resilience, through the use of heavy compression.
Products
Dolby.io platform
Dolby Conference Phone
Dolby Voice Room
BlueJeans application
Laptops |
https://en.wikipedia.org/wiki/Corresponding%20sides%20and%20corresponding%20angles | In geometry, the tests for congruence and similarity involve comparing corresponding sides and corresponding angles of polygons. In these tests, each side and each angle in one polygon is paired with a side or angle in the second polygon, taking care to preserve the order of adjacency.
For example, if one polygon has sequential sides a, b, c, d, and e and the other has sequential sides v, w, x, y, and z, and if b and w are corresponding sides, then side a (adjacent to b) must correspond to either v or x (both adjacent to w). If a and v correspond to each other, then c corresponds to x, d corresponds to y, and e corresponds to z; hence the i-th element of the sequence a, b, c, d, e corresponds to the i-th element of the sequence v, w, x, y, z for i = 1, ..., 5. On the other hand, if in addition to b corresponding to w we have a corresponding to x, then the i-th element of the sequence a, b, c, d, e corresponds to the i-th element of the reverse sequence x, w, v, z, y.
Congruence tests look for all pairs of corresponding sides to be equal in length, though except in the case of the triangle this is not sufficient to establish congruence (as exemplified by a square and a rhombus that have the same side length). Similarity tests look at whether the ratios of the lengths of each pair of corresponding sides are equal, though again this is not sufficient. In either case equality of corresponding angles is also necessary; equality (or proportionality) of corresponding sides combined with equality of corresponding angles is necessary and sufficient for congruence (or similarity). The corresponding angles as well as the corresponding sides are defined as appearing in the same sequence, so for example if in a polygon with the side sequence a, b, c, d, e and another with the corresponding side sequence v, w, x, y, z we have vertex angle α appearing between sides a and b, then the corresponding vertex angle must appear between sides v and w. |
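As an illustrative sketch of this pairing (a hypothetical helper, not drawn from the article), the following Python function tries every starting vertex and both traversal directions so that adjacency order is preserved; as the final example notes, equal corresponding side lengths alone do not establish congruence.

```python
def sides_correspond(sides_a, sides_b, tol=1e-9):
    """Return True if the two side-length sequences can be paired so that
    corresponding sides are equal in length while preserving the order of
    adjacency (any starting vertex, either traversal direction)."""
    n = len(sides_a)
    if n != len(sides_b):
        return False
    doubled = list(sides_b) + list(sides_b)
    for start in range(n):
        rotation = doubled[start:start + n]
        for candidate in (rotation, rotation[::-1]):   # forward or reversed adjacency
            if all(abs(x - y) <= tol for x, y in zip(sides_a, candidate)):
                return True
    return False

# Equal corresponding sides are necessary but, beyond triangles, not sufficient:
print(sides_correspond([2, 1, 2, 1], [1, 2, 1, 2]))  # True (same rectangle, relabelled)
print(sides_correspond([1, 1, 1, 1], [1, 1, 1, 1]))  # True for a square and a rhombus alike
```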
https://en.wikipedia.org/wiki/Telepen | Telepen is the name of a barcode symbology designed to express all 128 ASCII characters without using shift characters for code switching, and using only two different widths for bars and spaces (unlike Code 128, which uses shifts and four different element widths). The symbology was devised by George Sims of SB Electronic Systems Ltd. Telepen was originally designed in the UK in 1972.
Unlike most linear barcodes, Telepen does not define independent encodings for each character, but instead operates on a stream of bits. It is able to represent any bit stream containing an even number of 0 bits, and is applied to ASCII bytes with even parity, which satisfies that rule. Bytes are encoded in little-endian bit order.
The string of bits is divided into 1 bits and blocks of the form 01*0, that is, blocks beginning and ending with a 0 bit, with any number of 1 bits in between.
These are then encoded as follows:
"1" is encoded as a narrow bar-narrow space
"00" is encoded as a wide bar-narrow space
"010" is encoded as wide bar-wide space
Otherwise, the leading "01" and trailing "10" are both encoded as narrow bar-wide space, with the additional 1 bits in between coded as described above.
Wide elements are 3 times the width of narrow elements, so every bit occupies 2 narrow elements of space.
Barcodes always start with ASCII _ (underscore). This has code 0x5F, so the (lsbit-first) bit stream is 11111010. Thus, it is represented as 5 narrow bar/narrow space pairs, followed by a wide bar/wide space.
Barcodes always end with ASCII z. This has (including parity) code 0xFA, so the (lsbit-first) bit stream is 01011111. This is encoded as a wide bar/wide space, followed by 5 narrow bar/narrow space pairs. Each end of the bar code consists of repeated narrow elements terminated by a pair of wide elements, but the start has a wide bar first, while if the code is read in reverse, the wide space will be encountered first.
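The bit-level rules above can be turned into a small encoder. The sketch below is illustrative only (telepen_elements and even_parity are hypothetical helper names, and the check character used in complete Telepen symbols is omitted): it frames the data with the start and stop characters, applies even parity, emits bits least-significant first, and maps "1" bits and 0...0 blocks to element widths, with 1 standing for a narrow element and 3 for a wide one.

```python
def even_parity(byte):
    """Set bit 7 so that the 8-bit value has an even number of 1 bits."""
    return byte | (0x80 if bin(byte).count("1") % 2 else 0)

def telepen_elements(data):
    """Encode an ASCII string as alternating bar/space widths (bar first)."""
    bits = ""
    for ch in "_" + data + "z":                 # start character, data, stop character
        bits += "".join(str((even_parity(ord(ch)) >> i) & 1) for i in range(8))  # lsb first

    elements, i = [], 0
    while i < len(bits):
        if bits[i] == "1":                      # "1"   -> narrow bar, narrow space
            elements += [1, 1]; i += 1
            continue
        j = i + 1                               # scan the block 0 1...1 0
        while bits[j] == "1":
            j += 1
        ones = j - i - 1
        if ones == 0:                           # "00"  -> wide bar, narrow space
            elements += [3, 1]
        elif ones == 1:                         # "010" -> wide bar, wide space
            elements += [3, 3]
        else:                                   # "01" + inner 1s + "10"
            elements += [1, 3]                  # leading "01"  -> narrow bar, wide space
            elements += [1, 1] * (ones - 2)     # inner 1 bits  -> narrow bar, narrow space
            elements += [1, 3]                  # trailing "10" -> narrow bar, wide space
        i = j + 1
    return elements

# Start/stop only: five narrow pairs, a wide pair, a wide pair, five narrow pairs,
# matching the descriptions of "_" and "z" above.
print(telepen_elements(""))
```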
In addition to per-character parity bits, a Telepen |
https://en.wikipedia.org/wiki/Payment%20card%20industry | The payment card industry (PCI) denotes the debit, credit, prepaid, e-purse, ATM, and POS cards and associated businesses.
Overview
The payment card industry consists of all the organizations which store, process and transmit cardholder data, most notably for debit cards and credit cards. Security standards for the industry are developed by the Payment Card Industry Security Standards Council, which maintains the Payment Card Industry Data Security Standard (PCI DSS) used throughout the industry. Individual card brands establish compliance requirements that are used by service providers and have their own compliance programs. Major card brands include American Express, Discover Financial Services, Japan Credit Bureau, Mastercard, RuPay, UnionPay and Visa. Most companies use member banks that connect and accept transactions from the card brands. Not all card brands use member banks; American Express, for example, acts as its own bank.
The United States uses a magnetic stripe on a card to process transactions, and its security relies on the holder's signature and visual inspection of the card to check for features such as a hologram. This system will be outmoded and replaced by EMV in 2015. EMV is a global standard for inter-operation of integrated circuit cards (IC cards or "chip cards") and IC card capable point of sale (POS) terminals and automated teller machines (ATMs), for authenticating credit and debit card transactions. It has enhanced security features, but is still susceptible to fraud.
Payment Card Industry Security Standards Council
On 7 September 2006, American Express, Discover Financial Services, Japan Credit Bureau, Mastercard and Visa International formed the Payment Card Industry Security Standards Council (PCI SSC) with the goal of managing the ongoing evolution of the Payment Card Industry Data Security Standard. The council itself claims to be independent of the various card vendors that make up the council. As of 1 August 2014, the PCI SSC |
https://en.wikipedia.org/wiki/Sesame%20allergy | A food allergy to sesame (Sesamum indicum) seeds has prevalence estimates in the range of 0.1–0.2% of the general population, with higher rates in the Middle East and other countries where sesame seeds are used in traditional foods. Reporting of sesame seed allergy has increased in the 21st century, either due to a true increase from exposure to more sesame foods or due to an increase in awareness. Increasing sesame allergy rates have induced more countries to regulate food labels to identify sesame ingredients in products and the potential for allergy. In the United States, sesame became the ninth food allergen with mandatory labeling, effective 1 January 2023.
The allergic reaction is an immune hypersensitivity to proteins and lipophilic proteins in sesame seeds and foods made with sesame seeds, including food-grade sesame oil. Symptoms can be either rapid or gradual in onset, occurring over minutes to days. Rapid allergic reaction may include anaphylaxis, a potentially life-threatening condition requiring treatment with epinephrine. Other, slower presentations may include atopic dermatitis or inflammation of the esophagus. For food labeling requirements established in many countries, sesame labeling is required in addition to the eight most common food allergens, responsible for 90% of allergic reactions to foods: cow's milk, eggs, wheat, shellfish, peanuts, tree nuts, fish, and soy beans.
In addition to water-soluble allergenic proteins, sesame seeds share with peanuts and hazelnuts a class of allergenic proteins known as oleosins. Commercially prepared sesame extracts lack these lipophilic proteins, and so can be the reason for false negative skin prick test results even though the oleosins can be responsible for a range of allergic reactions, including anaphylactic shock. Unlike early childhood allergic reactions to milk and eggs, which often lessen as children age, sesame allergy persists into older childhood and adulthood; an estimated 20–30% of affected peop |
https://en.wikipedia.org/wiki/Step%20potential | In quantum mechanics and scattering theory, the one-dimensional step potential is an idealized system used to model incident, reflected and transmitted matter waves. The problem consists of solving the time-independent Schrödinger equation for a particle with a step-like potential in one dimension. Typically, the potential is modeled as a Heaviside step function.
Calculation
Schrödinger equation and potential function
The time-independent Schrödinger equation for the wave function ψ(x) is Ĥψ(x) = [-(ħ²/2m)·d²/dx² + V(x)]·ψ(x) = E·ψ(x),
where Ĥ is the Hamiltonian, ħ is the reduced Planck constant, m is the mass, and E is the energy of the particle. The step potential is simply the product of V0, the height of the barrier, and the Heaviside step function: V(x) = V0·Θ(x).
The barrier is positioned at x = 0, though any position x0 may be chosen without changing the results, simply by shifting position of the step by −x0.
The first term in the Hamiltonian, -(ħ²/2m)·d²/dx², is the kinetic energy of the particle.
Solution
The step divides space in two parts: x < 0 and x > 0. In any of these parts the potential is constant, meaning the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left and right moving waves (see free particle): ψ1(x) = A→·e^(i k1 x) + A←·e^(-i k1 x) for x < 0, and ψ2(x) = B→·e^(i k2 x) + B←·e^(-i k2 x) for x > 0,
where subscripts 1 and 2 denote the regions x < 0 and x > 0 respectively, and the subscripts (→) and (←) on the amplitudes A and B denote the direction of the particle's velocity vector: right and left respectively.
The wave vectors in the respective regions are k1 = √(2mE)/ħ and k2 = √(2m(E - V0))/ħ,
both of which have the same form as the De Broglie relation (in one dimension), p = ħk.
Boundary conditions
The coefficients A, B have to be found from the boundary conditions of the wave function at x = 0. The wave function and its derivative have to be continuous everywhere, so: ψ1(0) = ψ2(0) and ψ1′(0) = ψ2′(0).
Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients: A→ + A← = B→ + B← and k1·(A→ - A←) = k2·(B→ - B←).
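A small numerical sketch of the standard solution of these matching conditions for a wave incident from the left with E > V0 (illustrative values, units chosen so that ħ = m = 1; the variable names mirror the amplitudes above):

```python
import numpy as np

hbar, m = 1.0, 1.0          # natural-style units for illustration
V0, E = 1.0, 2.0            # step height and particle energy, E > V0

k1 = np.sqrt(2 * m * E) / hbar            # wave vector for x < 0
k2 = np.sqrt(2 * m * (E - V0)) / hbar     # wave vector for x > 0 (real since E > V0)

# Incident wave from the left: A_right = 1, B_left = 0.  Solving the two
# continuity conditions at x = 0 for the remaining amplitudes gives:
r = (k1 - k2) / (k1 + k2)   # reflected amplitude  A_left
t = 2 * k1 / (k1 + k2)      # transmitted amplitude B_right

R = abs(r) ** 2              # reflection probability
T = (k2 / k1) * abs(t) ** 2  # transmission probability (ratio of probability currents)

print(f"R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")   # probabilities sum to 1
```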
Transmission and reflection
It is useful to compare the situation to the classical case. In both cases, the particle behaves as a fr |
https://en.wikipedia.org/wiki/Sergey%20Kitaev | Sergey Kitaev (Russian: Сергей Владимирович Китаев; born 1 January 1975 in Ulan-Ude) is a Professor of Mathematics at the University of Strathclyde, Glasgow, Scotland.
He obtained his Ph.D. in mathematics from the University of Gothenburg in 2003 under the supervision of Einar Steingrímsson.
Kitaev's research interests concern aspects of combinatorics and graph theory.
Contributions
Kitaev is best known for his book Patterns in permutations and words (2011), an introduction to the field of permutation patterns.
He is also the author (with Vadim Lozin) of Words and graphs (2015) on the theory of word-representable graphs which he pioneered.
Kitaev has written over 120 research articles in mathematics.
Of particular note is his work generalizing vincular patterns to having partially ordered entries, a classification (with Anders Claesson) of bijections between 321- and 132-avoiding permutations, and a solution (with Steve Seif) of the word problem for the Perkins semigroup, as well as his work on word-representable graphs.
Selected publications
External links
Sergey Kitaev's page at the University of Strathclyde |
https://en.wikipedia.org/wiki/Cryptocurrency%20wallet | A cryptocurrency wallet is a device, physical medium, program or a service which stores the public and/or private keys for cryptocurrency transactions. In addition to this basic function of storing the keys, a cryptocurrency wallet more often offers the functionality of encrypting and/or signing information. Signing can for example result in executing a smart contract, a cryptocurrency transaction (see "bitcoin transaction" image), identification, or legally signing a 'document' (see "application form" image).
History
In 2008 bitcoin was introduced as the first cryptocurrency following the principle outlined by Satoshi Nakamoto in the paper “Bitcoin: A Peer-to-Peer Electronic Cash System.” The project was described as an electronic payment system using cryptographic proof instead of trust. It also mentioned using cryptographic proof to verify and record transactions on a blockchain.
Technology
Private and public key generation
A cryptocurrency wallet works by generating a theoretical or random number, with a length that depends on the algorithm size of the cryptocurrency's technology requirements. The number is converted to a private key according to the requirements of the cryptocurrency's cryptographic algorithm. A public key is then generated from the private key using whichever cryptographic algorithm is required. The private key is used by the owner to access and send cryptocurrency and is private to the owner, whereas the public key is to be shared with any third party in order to receive cryptocurrency.
Up to this stage no computer or electronic device is required and all key pairs can be mathematically derived and written down by hand. The private key and public key pair (known as an address) are not known by the blockchain or anyone else. The blockchain will only record the transaction of the public address when cryptocurrency is sent to it, thus recording in the blockchain ledger the transaction of the public address.
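As an illustration of this key-pair step (a sketch only, not a production wallet), the code below draws a random private key and derives the matching public key by scalar multiplication on the secp256k1 curve, the curve used by Bitcoin among others; encoding the public key into an address, which real wallets do by further hashing, is omitted here.

```python
import secrets

# secp256k1 domain parameters (curve y^2 = x^3 + 7 over GF(P)).
P  = 2**256 - 2**32 - 977
N  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141   # group order
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def ec_add(a, b):
    """Add two curve points; None represents the point at infinity."""
    if a is None: return b
    if b is None: return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if a == b:
        lam = (3 * x1 * x1) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def ec_mul(k, point):
    """Scalar multiplication by repeated doubling and adding."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

private_key = 1 + secrets.randbelow(N - 1)   # uniform random integer in [1, N-1]
public_key  = ec_mul(private_key, (Gx, Gy))  # public key = private_key * G

print("private:", hex(private_key))
print("public :", hex(public_key[0]), hex(public_key[1]))
```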
Duplicate private |
https://en.wikipedia.org/wiki/Sue%20Whitesides | Sue Hays Whitesides is a Canadian mathematician and computer scientist, a professor emeritus of computer science and the chair of the computer science department at the University of Victoria in British Columbia, Canada. Her research specializations include computational geometry and graph drawing.
Education and career
Whitesides received her Ph.D. in mathematics in 1975 from the University of Wisconsin–Madison, under the supervision of Richard Bruck. Before joining the University of Victoria faculty, she taught at Dartmouth College and McGill University; at McGill, she was director of the School of Computer Science from 2005 to 2008.
Service
Whitesides was the program chair for the 1998 International Symposium on Graph Drawing and program co-chair for the 2012 Symposium on Computational Geometry. |
https://en.wikipedia.org/wiki/89th%20meridian%20west | The Meridian 89° West of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, North America, the Gulf of Mexico, Central America, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 89th meridian west forms a great circle with the 91st meridian east.
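The pairing follows from simple arithmetic: two meridians lie on one great circle when their longitudes sum to 180°. A one-line check (illustrative only):

```python
def antipodal_meridian(longitude_west_deg):
    """East longitude that completes a great circle with the given west longitude."""
    return 180 - longitude_west_deg

print(antipodal_meridian(89))   # 91, i.e. the 91st meridian east
```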
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 89th meridian west passes through:
Arctic Ocean
Nunavut — Ellesmere Island
Nansen Sound
Nunavut — Axel Heiberg Island
Norwegian Bay
Nunavut — Ellesmere Island
Jones Sound
Nunavut — Devon Island
Lancaster Sound
Prince Regent Inlet
Nunavut — Baffin Island
Gulf of Boothia
Nunavut — mainland
Hudson Bay
Manitoba — for about 1 km; Ontario — from |
https://en.wikipedia.org/wiki/Hager%20Group | Hager Group is a manufacturer of electrical installations in residential, commercial and industrial buildings based in Blieskastel, Germany. The company has been family-run and owned ever since its foundation in 1955.
Hager Group provides products and services ranging from energy distribution and cable management to intelligent building automation and security systems, under the brand Hager. Hager Group also owns the brands Berker, Bocchiotti, Daitem, Diagral, Elcom and E3/DC. In 2018, Hager Group was the world market leader in electrical installation systems. In August 2019, the group was ranked number 128 in the top 500 family-owned businesses in Germany according to the magazine Die Deutsche Wirtschaft.
History
In 1955, Hager oHG, elektrotechnische Fabrik was founded by brothers Oswald and Hermann Hager, together with their father Peter Hager in Ensheim in the Saarland region of Germany. Since 1945, Saarland had been under the economic control of France and had no access to the German market. However, Hager wanted to gain a foothold in both markets. In 1959, the Hager brothers founded their first foreign subsidiary, Hager Electro S. A., in Obernai, Alsace, in north-eastern France.
In 1966, Hager began systematical training of its electricians, whose expertise has created a culture of customer loyalty, something that continues to this day. Hager’s modular rotary fuse carrier was patented in Germany in 1968 and in France in 1970. At the same time, the first mass-produced distribution board, the Hager-Rapid-System, was launched on the French market. In 1973, Hager achieved sales of 43 million Deutsche Marks in Germany and in 1974 the company reached a turnover of 22 million francs in France.
In 1976, Hager launched the mini Gamma enclosure, in 1982 the company started producing the first Residual-current circuit breakers (RCCB) in Germany. A new production facility with a high-bay warehouse was opened in Blieskastel.
Hager Group began to market itself as a |
https://en.wikipedia.org/wiki/Tail%20call | In computer science, a tail call is a subroutine call performed as the final action of a procedure. If the target of a tail call is the same subroutine, the subroutine is said to be tail recursive, which is a special case of direct recursion. Tail recursion (or tail-end recursion) is particularly useful, and is often easy to optimize in implementations.
Tail calls can be implemented without adding a new stack frame to the call stack. Most of the frame of the current procedure is no longer needed, and can be replaced by the frame of the tail call, modified as appropriate (similar to overlay for processes, but for function calls). The program can then jump to the called subroutine. Producing such code instead of a standard call sequence is called tail-call elimination or tail-call optimization. Tail-call elimination allows procedure calls in tail position to be implemented as efficiently as goto statements, thus allowing efficient structured programming. In the words of Guy L. Steele, "in general, procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly coded as [machine code] JUMP instructions."
Not all programming languages require tail-call elimination. However, in functional programming languages, tail-call elimination is often guaranteed by the language standard, allowing tail recursion to use a similar amount of memory as an equivalent loop. The special case of tail-recursive calls, when a function calls itself, may be more amenable to call elimination than general tail calls. When the language semantics do not explicitly support general tail calls, a compiler can often still optimize sibling calls, or tail calls to functions which take and return the same types as the caller.
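A sketch of the transformation described above, using the Euclidean algorithm (hypothetical function names): the first version ends in a tail call, and the second is the loop that a tail-call-eliminating implementation effectively reduces it to. Python is used here only for illustration; CPython does not perform this elimination itself.

```python
import sys

def gcd_recursive(a, b):
    """Tail-recursive form: the recursive call is the final action, so the
    current frame could in principle be reused instead of growing the stack."""
    if b == 0:
        return a
    return gcd_recursive(b, a % b)       # tail call: nothing remains to do afterwards

def gcd_iterative(a, b):
    """The same computation after tail-call elimination: the call becomes a
    jump (a loop) that rebinds the parameters in place."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_recursive(1071, 462), gcd_iterative(1071, 462))   # 21 21
# CPython keeps one frame per recursive call, so the recursive form is bounded
# by the interpreter's recursion limit, while the loop runs in constant stack space.
print(sys.getrecursionlimit())
```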
Description
When a function is called, the computer must "remember" the place it was called from, the return address, so that it can return to that location with the result once the call is complete. Typically, this information is saved |
https://en.wikipedia.org/wiki/Blackett%20Laboratory | The Blackett Laboratory is part of the Imperial College Faculty of Natural Sciences and has housed the Department of Physics at Imperial College London since its completion in 1961. Named after experimental physicist Patrick Blackett who established a laboratory at the college, the building is located on the corner of Prince Consort Road and Queen's Gate, Kensington. The department ranks 11th on QS's 2018 world university rankings.
History
The Department of Physics at Imperial College dates back to the physics department of the Normal School of Science, later the Royal College of Science. As part of the formation of Imperial, the Royal College was moved into a new building at South Kensington in 1906, which also housed the Chemistry Department. From 1906 to 1932 the head of the Physics Department was Prof. H. L. Callender, famous for his work on the properties of steam.
G P Thomson (son of J J Thomson) replaced Callender in 1932, and worked in part on nuclear physics and atomic weaponry. He was followed by P. M. S. Blackett as head in around 1953, with the construction of the new Physics building starting at about the same time. Blackett refocused efforts from low-energy physics to high-energy nuclear physics, and under him research started into cosmic rays and the use of bubble chambers, with the first liquid hydrogen bubble chamber in Western Europe being constructed at the department. Work on satellite instrumentation resulted in departmentally designed equipment ending up on NASA satellites in 1962. Physics continued in the now “old” RCS building until the new Physics building was finished in 1961, which subsequently became known as the Blackett Laboratory. In 2019, it was announced that the laboratory would be constructing a magnetometer instrument for the European Space Agency Solar Orbiter in a mission to study the Sun. The satellite launched on an Atlas V from the Cape Canaveral AFS, Florida on 9 February 2020. In November 2019, a team of Imperial physic |
https://en.wikipedia.org/wiki/Agroecosystem | Agroecosystems are the ecosystems supporting the food production systems in farms and gardens. As the name implies, at the core of an agroecosystem lies the human activity of agriculture. As such they are the basic unit of study in Agroecology, and Regenerative Agriculture using ecological approaches.
Like other ecosystems, agroecosystems form partially closed systems in which animals, plants, microbes, and other living organisms and their environment are interdependent and regularly interact. They are somewhat arbitrarily defined as a spatially and functionally coherent unit of agricultural activity.
An agroecosystem can be seen as not restricted to the immediate site of agricultural activity (e.g. the farm). That is, it includes the region that is impacted by this activity, usually by changes to the complexity of species assemblages and energy flows, as well as to the net nutrient balance. Agroecosystems, particularly those managed intensively, are characterized as having simpler species composition, energy and nutrient flows than "natural" ecosystems. Likewise, agroecosystems are often associated with elevated nutrient input, much of which exits the farm leading to eutrophication of connected ecosystems not directly engaged in agriculture.
Utilization
Forest gardens are probably the world's oldest and most resilient agroecosystem. Forest gardens originated in prehistoric times along jungle-clad river banks and in the wet foothills of monsoon regions. In the gradual process of a family improving their immediate environment, useful tree and vine species were identified, protected and improved whilst undesirable species were eliminated. Eventually superior foreign species were selected and incorporated into the family's garden.
Some major organizations are hailing farming within agroecosystems as the way forward for mainstream agriculture. Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. |
https://en.wikipedia.org/wiki/%E2%88%9E-groupoid | In category theory, a branch of mathematics, an ∞-groupoid is an abstract homotopical model for topological spaces. One model uses Kan complexes which are fibrant objects in the category of simplicial sets (with the standard model structure). It is an ∞-category generalization of a groupoid, a category in which every morphism is an isomorphism.
The homotopy hypothesis states that ∞-groupoids are equivalent to spaces up to homotopy.
Globular Groupoids
Alexander Grothendieck suggested in Pursuing Stacks that there should be an extraordinarily simple model of ∞-groupoids using globular sets, originally called hemispherical complexes. These sets are constructed as presheaves on the globular category 𝔾. This is defined as the category whose objects are finite ordinals [n] and whose morphisms are generated by maps σ_n, τ_n : [n] → [n+1] such that the globular relations hold. These relations encode the fact that the source and the target of a higher morphism must themselves share the same source and the same target. When writing these down as a globular set X, the source and target maps are then written as s_n, t_n : X_{n+1} → X_n. We can also consider globular objects in a category C as functors 𝔾^op → C. There was hope originally that such a strict model would be sufficient for homotopy theory, but there is evidence suggesting otherwise. It turns out that for the 2-sphere S² the associated homotopy n-type can never be modeled as a strict globular groupoid for n ≥ 3. This is because strict ∞-groupoids only model spaces with a trivial Whitehead product.
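For reference, one common formulation of the globular relations mentioned above is the following (the symbols σ, τ, s and t are one conventional choice of notation, stated here as an assumption):

```latex
% Cosource and cotarget maps of the globular category \mathbb{G}:
%   \sigma_n,\ \tau_n : [n] \to [n+1]
% satisfying the globular relations
\sigma_{n+1} \circ \sigma_n = \tau_{n+1} \circ \sigma_n ,
\qquad
\sigma_{n+1} \circ \tau_n = \tau_{n+1} \circ \tau_n .
% On a globular set X (a presheaf on \mathbb{G}) the induced source and target
% maps s_n, t_n : X_{n+1} \to X_n then satisfy
s_n \circ s_{n+1} = s_n \circ t_{n+1} ,
\qquad
t_n \circ s_{n+1} = t_n \circ t_{n+1} .
```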
Examples
Fundamental ∞-groupoid
Given a topological space X there should be an associated fundamental ∞-groupoid Π∞(X) where the objects are points x ∈ X, 1-morphisms are represented as paths, 2-morphisms are homotopies of paths, 3-morphisms are homotopies of homotopies, and so on. From this infinity groupoid we can find an n-groupoid called the fundamental n-groupoid Πn(X) whose homotopy type is that of X.
Note that taking the fundamental ∞-groupoid of a space X whose homotopy groups vanish above degree n is equivalent to the fundamental n-groupoid Πn(X). Such a space can be found using the Whitehead tower.
Abelian globular groupoids
One usefu |
https://en.wikipedia.org/wiki/STAT3 | Signal transducer and activator of transcription 3 (STAT3) is a transcription factor which in humans is encoded by the STAT3 gene. It is a member of the STAT protein family.
Function
STAT3 is a member of the STAT protein family. In response to cytokines and growth factors, STAT3 is phosphorylated by receptor-associated Janus kinases (JAK), forms homo- or heterodimers, and translocates to the cell nucleus where it acts as a transcription activator. Specifically, STAT3 becomes activated after phosphorylation of tyrosine 705 in response to such ligands as interferons, epidermal growth factor (EGF), Interleukin (IL-)5 and IL-6. Additionally, activation of STAT3 may occur via phosphorylation of serine 727 by Mitogen-activated protein kinases (MAPK) and through c-src non-receptor tyrosine kinase. STAT3 mediates the expression of a variety of genes in response to cell stimuli, and thus plays a key role in many cellular processes such as cell growth and apoptosis.
STAT3-deficient mouse embryos cannot develop beyond embryonic day 7, when gastrulation begins. It appears that at these early stages of development, STAT3 activation is required for self-renewal of embryonic stem cells (ESCs). Indeed, LIF, which is supplied to murine ESC cultures to maintain their undifferentiated state, can be omitted if STAT3 is activated through some other means.
STAT3 is essential for the differentiation of the TH17 helper T cells, which have been implicated in a variety of autoimmune diseases. During viral infection, mice lacking STAT3 in T-cells display impairment in the ability to generate T-follicular helper (Tfh) cells and fail to maintain antibody based immunity.
STAT3 causes upregulation of E-selectin, a factor in the metastasis of cancers.
Hyperactivation of STAT3 occurs in COVID-19 infection and other viral infections.
Clinical significance
Loss-of-function mutations in the STAT3 gene result in Hyperimmunoglobulin E syndrome, associated with recurrent infections as well as disor |
https://en.wikipedia.org/wiki/Flying%20toilet | A flying toilet is a facetious name for a plastic bag that is used as a simple collection device for human faeces when there is a lack of proper toilets and people are forced to practise open defecation. The filled and tied plastic bags are then discarded in ditches or on the roadside. Associated especially with slums, they are called flying toilets "because when you have filled them, you throw them as far away as you can".
Usage
Flying toilets are particularly associated with slums surrounding Nairobi, Kenya, especially Kibera. According to a report from the United Nations Development Programme launched in Cape Town on 9 November 2006, "two in three people [in Kibera] identify the flying toilet as the primary mode of excreta disposal available to them." This contradicts a Kenyan government report which indicates that 99% of Nairobi residents have access to a sanitation service. The UNDP report blames a taboo against bureaucrats and politicians discussing toilets, while others see a reluctance among the Nairobi authorities to formalise what they characterise as an "illegal settlement".
A related concept in the United States is the "trucker bomb", described in a media report as the trucking industry practice of urinating into plastic bottles and throwing them from the vehicle as an alternative to stopping the truck or using facilities at rest stops.
Problems
Piles of polyethene bags gather on roofs and attract flies. Some of them burst open upon impact and/or clog drainage systems. If they land on fractured water pipes, a drop in water pressure can cause the contents to be sucked into the water system. People can also be hit by the bags as they are blindly tossed. In the rainy season, drainage contaminated with excrement can enter residences; some children even swim in it. Such close contact leads to fears of diseases such as diarrhoea, skin disorders, typhoid fever and malaria.
The practice of defecating outside, away from one's house, especially in the dark, |
https://en.wikipedia.org/wiki/Thursday%20Night%20Football | Thursday Night Football (often abbreviated as TNF) is the branding used for broadcasts of National Football League (NFL) games broadcast primarily on Thursday nights. Most of the games kick off at 8:15 Eastern Time (8:20 prior to 2022 and 8:25 prior to 2018).
In the past, games in the package have also aired occasionally on Saturdays in the later portion of the season, and the package has included select games from the NFL International Series (branded since 2017 as NFL Network Special).
Debuting on November 23, 2006, the telecasts were originally part of NFL Network's Run to the Playoffs package, which consisted of eight total games broadcast on Thursday and Saturday nights (five on Thursdays, and three on Saturdays, originally branded as Saturday Night Football) during the latter portion of the season. Since 2012, the TNF package has begun during the second week of the NFL season; the NFL Kickoff Game and the NFL on Thanksgiving are both broadcast as part of NBC Sports' Sunday Night Football contract and are not included in Thursday Night Football, although the Thanksgiving primetime game was previously part of the package from 2006 until 2011.
In 2014, the NFL shifted the package to a new model to increase its prominence. The entire TNF package would be produced by a separate rightsholder, who would hold rights to simulcast a portion of the package on their respective network. CBS was the first rightsholder under this model, airing nine games on broadcast television, and producing the remainder of the package to air exclusively on NFL Network to satisfy its carriage agreements. The package was also extended to Week 16 of the season, and included a new Saturday doubleheader split between CBS and NFL Network. On January 18, 2015, CBS and NFL Network extended the same arrangement for a second season. In 2016 and 2017, the NFL continued with a similar arrangement, but adding NBC as a second rightsholder alongside CBS, with each network airing five games on broadcast |
https://en.wikipedia.org/wiki/Scrambler%20mouse | Scrambler is a spontaneous mouse mutant lacking a functional DAB1 gene, resulting in a phenotype resembling that seen in the reeler mouse. The strain was first described by Sweet et al. in 1996.
Neuroanatomical abnormalities
The spontaneous autosomal recessive scrambler mutation on chromosome 4 causes a deficiency of DAB1, which encodes disabled-1, a protein involved in the signaling of the Reelin protein that is lacking in the reeler mutant. Dab1-scm homozygous mutants possess a reeler-like phenotype with respect to cell malpositioning in cerebellar cortex, hippocampus, and neocortex. Purkinje cell and granule cell degeneration results in ataxia. Despite normal Reln mRNA levels, Dab1-scm mutants have defective reelin signaling, indicating that disabled-1 acts downstream of reelin. Cell ectopias are identical to those produced by targeted disruption of Dab1.
Behavioral abnormalities
Dab1-scm mutants have a widespread gait obvious to the naked eye (ataxia). In their home-cage, they often reel and fall, especially when attempting to rear up against the walls. Nevertheless, the mutants are fertile, and so can be reproduced from one generation to the next. Relative to non-ataxic controls of the same background strain, Dab1-scm mutants were impaired in the Rotarod Performance test of motor coordination and a grid-climbing test. When picked up by the tail, they show a pathological reflex, limb-clasping, characterized by holding together fore- or hind-limbs, or all four together in a bat-like posture.
Dab1-scm mutants were distinguished from non-ataxic controls as early as postnatal day 8 based on body tremor, gait anomalies, and body weight. On postnatal day 15, motor coordination deficits were evident on horizontal bar and inclined or vertical grid tests in association with a weaker grip strength. Further differences were detected on postnatal day 22 and evaluation at the adult age revealed impairments indicative of permanent motor alterations.
As adults, Dab1(scm) mutants showed motor coordin |
https://en.wikipedia.org/wiki/Ambiguous%20viewpoint | Object-oriented analysis and design (OOAD) models are often presented without clarifying the viewpoint represented by the model. By default, these models denote an implementation viewpoint that visualises the structure of a computer program. Mixed viewpoints do not support the fundamental separation of interfaces from implementation details, which is one of the primary benefits of the object-oriented paradigm.
In object-oriented analysis and design there are three viewpoints: The business viewpoint (the information that is domain specific and matters to the end user), the specification viewpoint (which defines the exposed interface elements of a class), and the implementation viewpoint (which deals with the actual internal implementation of the class). If the viewpoint becomes mixed then these elements will blend together in a way which makes it difficult to separate out and maintain the internals of an object without changing the interface, one of the core tenets of object-oriented analysis and design.
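A minimal sketch of keeping these viewpoints separate (hypothetical Account example, not from the article): the abstract class plays the specification viewpoint, the concrete class the implementation viewpoint, and client code depends only on the former, so the internals can change without touching the interface.

```python
from abc import ABC, abstractmethod

class Account(ABC):
    """Specification viewpoint: only the exposed interface is visible."""

    @abstractmethod
    def deposit(self, amount):
        ...

    @abstractmethod
    def balance(self):
        ...

class LedgerBackedAccount(Account):
    """Implementation viewpoint: internal representation hidden behind the interface."""

    def __init__(self):
        self._entries = []            # internal detail, free to change later

    def deposit(self, amount):
        self._entries.append(amount)

    def balance(self):
        return sum(self._entries)

def report(account):
    """Client code written against the specification viewpoint only."""
    return f"balance = {account.balance():.2f}"

acct = LedgerBackedAccount()
acct.deposit(10.0)
print(report(acct))   # the ledger could become a single running total without changing report()
```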
See also
Class-Responsibility-Collaboration card
Unified Modeling Language
Object-oriented analysis
Object-oriented design |
https://en.wikipedia.org/wiki/Micronova | A micronova is a type of thermonuclear explosion on the surface of a white dwarf that is much weaker than a nova, being about one millionth the strength of a typical nova. The phenomenon was first described in April 2022.
History
A team led by Durham University researchers announced on 20 April 2022 that they identified three micronovae using data from the Transiting Exoplanet Survey Satellite (TESS). The team discovered with TESS that two of the micronovae occurred on white dwarfs, with the astronomers confirming with the Very Large Telescope that the third occurred on a white dwarf as well.
The phenomenon had previously been observed in the white dwarf binary TV Columbae using data from the International Ultraviolet Explorer. However the data was not sufficient to infer the physical mechanism behind the explosion.
Formation
Micronovae specifically form on white dwarfs that have strong magnetic fields, as fields send material toward the star's magnetic poles. This causes the hydrogen fusion explosions on the surface to be more localized and small than a typical nova. |
https://en.wikipedia.org/wiki/Adaptogen | Adaptogens or adaptogenic substances are used in herbal medicine for the purported stabilization of physiological processes and promotion of homeostasis.
History
The term "adaptogens" was coined in 1947 by Soviet toxicologist Nikolai Lazarev to describe substances that may increase resistance to stress. The term "adaptogenesis" was later applied in the Soviet Union to describe remedies thought to increase the resistance of organisms to biological stress. Most of the studies conducted on adaptogens were performed in the Soviet Union, Korea, and China before the 1980s. The term was not accepted in pharmacological, physiological, or mainstream clinical practices in the European Union.
Sources
Compounds studied for putative adaptogenic properties are often derived from the following plants:
Eleutherococcus senticosus
Oplopanax elatus
Panax ginseng
Rhaponticum cartamoides
Rhodiola rosea
Schisandra chinensis |
https://en.wikipedia.org/wiki/Cuboid%20bone | In the human body, the cuboid bone is one of the seven tarsal bones of the foot.
Structure
The cuboid bone is the most lateral of the bones in the distal row of the tarsus. It is roughly cubical in shape, and presents a prominence in its inferior (or plantar) surface, the tuberosity of the cuboid. The bone provides a groove where the tendon of the peroneus longus muscle passes to reach its insertion in the first metatarsal and medial cuneiform bones.
Surfaces
The dorsal surface, directed upward and lateralward, is rough, for the attachment of ligaments.
The plantar surface presents in front a deep groove, the peroneal sulcus, which runs obliquely forward and medialward; it lodges the tendon of the peroneus longus, and is bounded behind by a prominent ridge, to which the long plantar ligament is attached.
The ridge ends laterally in an eminence, the tuberosity, the surface of which presents an oval facet; on this facet glides the sesamoid bone or cartilage frequently found in the tendon of the peroneus longus. The surface of bone behind the groove is rough, for the attachment of the plantar calcaneocuboid ligament, a few fibers of the flexor hallucis brevis, and a fasciculus from the tendon of the tibialis posterior.
The lateral surface presents a deep notch formed by the commencement of the peroneal sulcus.
The posterior surface is smooth, triangular, and concavo-convex, for articulation with the anterior surface of the calcaneus (the calcaneocuboid joint); its infero-medial angle projects backward as a process which underlies and supports the anterior end of the calcaneus.
The anterior surface, of smaller size, but also irregularly triangular, is divided by a vertical ridge into two facets, forming the fourth and fifth tarsometatarsal joints: the medial facet, quadrilateral in form, articulates with the fourth metatarsal; the lateral, larger and more triangular, articulates with the fifth.
The medial surface is broad, irregularly quadrilateral, and present |
https://en.wikipedia.org/wiki/Total%20refraction | Total refraction occurs when an incident wave on an interface between two media with opposite refractive index signs is completely transmitted. There is then no reflected wave. This can occur only when one of the two materials has a negative refractive index. Composite metamaterials with this unusual property were fabricated for the first time in 2002. This phenomenon is conditioned by the wave impedance matching between the two media.
Physical optics
Geometrical optics |
https://en.wikipedia.org/wiki/L%C3%A1szl%C3%B3%20Lov%C3%A1sz | László Lovász (; born March 9, 1948) is a Hungarian mathematician and professor emeritus at Eötvös Loránd University, best known for his work in combinatorics, for which he was awarded the 2021 Abel Prize jointly with Avi Wigderson. He was the president of the International Mathematical Union from 2007 to 2010 and the president of the Hungarian Academy of Sciences from 2014 to 2020.
In graph theory, Lovász's notable contributions include the proofs of Kneser's conjecture and the Lovász local lemma, as well as the formulation of the Erdős–Faber–Lovász conjecture. He is also one of the eponymous authors of the LLL lattice reduction algorithm.
Early life and education
Lovász was born on March 9, 1948, in Budapest, Hungary.
Lovász attended the Fazekas Mihály Gimnázium in Budapest. He won three gold medals (1964–1966) and one silver medal (1963) at the International Mathematical Olympiad. He also participated in a Hungarian game show about math prodigies. Paul Erdős helped introduce Lovász to graph theory at a young age.
Lovász received his Candidate of Sciences (C.Sc.) degree in 1970 at the Hungarian Academy of Sciences. His advisor was Tibor Gallai. He received his first doctorate (Dr.Rer.Nat.) degree from Eötvös Loránd University in 1971 and his second doctorate (Dr.Math.Sci.) from the Hungarian Academy of Sciences in 1977.
Career
From 1971 to 1975, Lovász worked at Eötvös Loránd University as a research associate. From 1975 to 1978, he was a docent at the University of Szeged, and then served as a professor and the Chair of Geometry there until 1982. He then returned to Eötvös Loránd University as a professor and the Chair of Computer Science until 1993.
Lovász was a professor at Yale University from 1993 to 1999, when he moved to the Microsoft Research Center where he worked as a senior researcher until 2006. He returned to Eötvös Loránd University where he was the director of the Mathematical Institute (2006–2011) and a professor in the Department of Compute |
https://en.wikipedia.org/wiki/Berkeley%20Networks | Berkeley Networks was a leading startup company that built intelligent switches targeted for the enterprise computer networking market segment.
The company was established in 1996. The name of the company comes from the University of California, Berkeley, where the founder and CEO, Dr. Ravi Sethi, received his Ph.D. and MBA.
Berkeley Networks was acquired by Pittsburgh-based FORE Systems for US$250 million; FORE Systems itself was later acquired by London-based GEC (now Marconi Corporation plc) for £2.8 billion.
See also
Telecommunication
Communications protocols
External links
Intel Capital: The Berkeley Networks Investment
Fore Systems Agrees to Purchase Berkeley Networks
Defunct networking companies
Defunct computer companies of the United States |
https://en.wikipedia.org/wiki/M%C2%B7CORE | M·CORE is a low-power, RISC-based microcontroller architecture developed by Motorola (subsequently Freescale, now part of NXP), intended for use in embedded systems. Introduced in late 1997, the architecture combines a 32-bit internal data path with 16-bit instructions, and includes a four-stage instruction pipeline. Initial implementations used a 360nm process and ran at 50 MHz.
M·CORE processors employ a von Neumann architecture with shared program and data bus—executing instructions from within data memory is possible. Motorola engineers designed M·CORE to have low power consumption and high code density. |
https://en.wikipedia.org/wiki/High-frequency%20approximation | A high-frequency approximation (or "high energy approximation") for scattering or other wave propagation problems, in physics or engineering, is an approximation whose accuracy increases with the size of features on the scatterer or medium relative to the wavelength of the scattered particles.
Classical mechanics and geometric optics are the most common and extreme high frequency approximation, where the wave or field properties of, respectively, quantum mechanics and electromagnetism are neglected entirely.
Less extreme approximations include the WKB approximation, physical optics, the geometric theory of diffraction, the uniform theory of diffraction, and the physical theory of diffraction. When these are used to approximate quantum mechanics, they are called semiclassical approximations.
See also
Electromagnetic modeling
Scattering
Scattering, absorption and radiative transfer (optics) |
https://en.wikipedia.org/wiki/Polyadic%20algebra | Polyadic algebras (more recently called Halmos algebras) are algebraic structures introduced by Paul Halmos. They are related to first-order logic analogous to the relationship between Boolean algebras and propositional logic (see Lindenbaum–Tarski algebra).
There are other ways to relate first-order logic to algebra, including Tarski's cylindric algebras (when equality is part of the logic) and Lawvere's functorial semantics (a categorical approach). |
https://en.wikipedia.org/wiki/Vijay%20Kumar%20%28roboticist%29 | Vijay Kumar (born 12 April 1962) is an Indian roboticist and UPS foundation professor in the School of Engineering & Applied Science with secondary appointments in computer and information science and electrical and systems engineering at the University of Pennsylvania, and became the new Dean of Penn Engineering on 1 July 2015.
Kumar is known for his research in the control and coordination of multi-robot formations.
He was elected to the American Philosophical Society in 2018.
Education
B.Tech., Mechanical Engineering, Indian Institute of Technology, Kanpur, India, May 1983
M.Sc., Mechanical Engineering, Ohio State University, Columbus, Ohio, March 1988
Ph.D., Mechanical Engineering, Ohio State University, Columbus, Ohio, September 1987
About his research work
Honours and awards
The Ohio State University Presidential Fellowship (1986)
NSF Presidential Young Investigator Award (1991)
Lindback Award for Distinguished Teaching, University of Pennsylvania (1996)
The Ferdinand Freudenstein Award for significant contributions to mechanisms and robotics awarded at the 5th National Conference on Mechanisms and Robotics (1997)
Best paper award, Distributed Autonomous Robotic Systems (2002)
Fellow, American Society of Mechanical Engineers (2003)
Kayamori Best Paper Award, IEEE International Conference on Robotics and Automation (2004)
IEEE Robotics and Automation Society Distinguished Lecturer (2005)
Fellow, Institute for Electrical and Electronics Engineers (2005)
IEEE Robotics and Automation Society Distinguished Award (2012)
George H. Heilmeier Faculty Award for Excellence in Research (2013)
Member, National Academy of Engineering (2013)
Popular Mechanics Breakthrough Award (2013)
IIT Kanpur Distinguished Alumnus Award 2013–14, for his outstanding contributions to the area of control and coordination of multi-robot formations.
The Joseph Engelberger Award by the Robotics Industries Association (2014)
IEEE Robotics and Automation Award (2020) |
https://en.wikipedia.org/wiki/Mycoplasma%20laboratorium | Mycoplasma laboratorium or Synthia refers to a synthetic strain of bacterium. The project to build the new bacterium has evolved since its inception. Initially the goal was to identify a minimal set of genes that are required to sustain life from the genome of Mycoplasma genitalium, and rebuild these genes synthetically to create a "new" organism. Mycoplasma genitalium was originally chosen as the basis for this project because at the time it had the smallest number of genes of all organisms analyzed. Later, the focus switched to Mycoplasma mycoides and took a more trial-and-error approach.
To identify the minimal genes required for life, each of the 482 genes of M. genitalium was individually deleted and the viability of the resulting mutants was tested. This resulted in the identification of a minimal set of 382 genes that theoretically should represent a minimal genome. In 2008 the full set of M. genitalium genes was constructed in the laboratory with watermarks added to identify the genes as synthetic. However M. genitalium grows extremely slowly and M. mycoides was chosen as the new focus to accelerate experiments aimed at determining the set of genes actually needed for growth.
In 2010, the complete genome of M. mycoides was successfully synthesized from a computer record and transplanted into an existing cell of Mycoplasma capricolum that had had its DNA removed. It is estimated that the synthetic genome used for this project cost US$40 million and 200 man-years to produce. The new bacterium was able to grow and was named JCVI-syn1.0, or Synthia. After additional experimentation to identify a smaller set of genes that could produce a functional organism, JCVI-syn3.0 was produced, containing 473 genes. 149 of these genes are of unknown function. Since the genome of JCVI-syn3.0 is novel, it is considered the first truly synthetic organism.
Minimal genome project
The production of Synthia is an effort in synthetic biology at the J. Craig Venter Institute |
https://en.wikipedia.org/wiki/Partial%20algebra | In abstract algebra, a partial algebra is a generalization of universal algebra to partial operations.
Example(s)
partial groupoid
field — the multiplicative inversion is the only proper partial operation
effect algebras
Structure
There is a "Meta Birkhoff Theorem" by Andreka, Nemeti and Sain (1982). |
https://en.wikipedia.org/wiki/Mamadou%20Gouro%20Sidibe | Mamadou Gouro Sidibe was born in Mali. He is an IT engineer, computer scientist, innovator, and entrepreneur, and the founder and developer of Lenali, a social media application similar to Facebook that uses voice-based technology in local African languages. He refers to himself as a digital inclusive entrepreneur.
Career
Sidibe studied in Russia and France. He received a PhD in computer sciences from the University of Versailles in France. He worked for ten years on research and development projects in computer networks and multimedia funded by the European Commission.
Sidibe started his own company in 2017 and developed Lenali, a voice-based social network application that works with spoken language. It was initially developed in such languages as Bambara, Soninke, Songhay, Moore, Wolof, and French. The Lenali application is free.
Applications that Sidibe has developed include Lenali, Gafe, and Kunko.
Lenali is a free vocal social media application (app) developed by Sidibe in 2017. It uses voice technology in local African languages and in French. It was developed for people who do not read or write. Its features include voice tutorials, profiles, posts, picture posts, voice instructions, profile creation, comments, and GPS navigation calls. According to UNESCO, Mali has a literacy rate of 40%. The app allows smartphone users to communicate using voice technology in their local languages.
Kunko is a computer application developed by Sidibe. It uses voice interaction in local languages to report COVID-19 suspected cases to contact institutions and authorized authorities. It uses sound, photos, voice mail, video posts, and a GPS navigator.
Gafe Digital is a standard and functional literacy application.
Sidibe is listed in the African Exponent 2018, Quartz Innovators list among the top 30 African Innovators. |
https://en.wikipedia.org/wiki/Hexspeak | Hexspeak, like leetspeak, is a novelty form of variant English spelling using the hexadecimal digits. Created by programmers as memorable magic numbers, hexspeak words can serve as a clear and unique identifier with which to mark memory or data.
Hexadecimal notation represents numbers using the 16 digits 0123456789ABCDEF. Using only the letters ABCDEF it is possible to spell several words. Further words can be made by treating some of the decimal numbers as letters: the digit "0" can represent the letter "O", and "1" can represent the letters "I" or "L". Less commonly, "5" can represent "S", "7" can represent "T", "12" can represent "R", and "6" or "9" can represent "G" or "g", respectively. Numbers such as 2, 4 or 8 can be used in a manner similar to leet or rebuses; e.g. the word "defecate" can be expressed either as DEFECA7E or DEFEC8.
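A small helper (hypothetical, not part of any standard tool) that applies the single-character substitutions above; multi-character tricks such as "12" for "R" or rebus-style digits are left out for simplicity.

```python
SUBSTITUTIONS = {"O": "0", "I": "1", "L": "1", "S": "5", "T": "7", "G": "6"}
HEX_DIGITS = set("0123456789ABCDEF")

def to_hexspeak(word):
    """Return the hex digits spelling `word`, or None if a letter has no substitute."""
    digits = []
    for letter in word.upper():
        if letter in HEX_DIGITS:
            digits.append(letter)
        elif letter in SUBSTITUTIONS:
            digits.append(SUBSTITUTIONS[letter])
        else:
            return None
    return "".join(digits)

for word in ["DEADBEEF", "CAFEBABE", "DEFECATE", "HELLO"]:
    spelled = to_hexspeak(word)
    if spelled is None:
        print(f"{word}: not spellable with these substitutions")
    else:
        print(f"{word} -> 0x{spelled} = {int(spelled, 16)}")
```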
Notable magic numbers
Many computer processors, operating systems, and debuggers make use of magic numbers, especially as a magic debug value.
Alternative letters
Many computer languages require that a hexadecimal number be marked with a prefix or suffix (or both) to identify it as a number. Sometimes the prefix or suffix is used as part of the word.
The C programming language uses the "0x" prefix to indicate a hexadecimal number, but the "0x" is usually ignored when people read such values as words. C also allows the suffix L to declare an integer as long, or LL to declare it as long long, making it possible to write "0xDEADCELL" (dead cell). In either case a U may also appear in the suffix to declare the integer as unsigned, making it possible to write "0xFEEDBULL" (feed bull).
In the (non-Unix) Intel assembly language, hexadecimal numbers are denoted by a "h" suffix, making it possible to write "0beach" (beach). Note that numbers in this notation that begin with a letter must be prefixed with a zero to distinguish them from variable names. A Unix-style assembler uses C language convention instead (but non-Unix-style assemblers |
https://en.wikipedia.org/wiki/Overconvergent%20modular%20form | In mathematics, overconvergent modular forms are special p-adic modular forms that are elements of certain p-adic Banach spaces (usually infinite dimensional)
containing classical spaces of modular forms as subspaces. They were introduced by Nicholas M. Katz in 1972. |
https://en.wikipedia.org/wiki/Proximity%20space | In topology, a proximity space, also called a nearness space, is an axiomatization of the intuitive notion of "nearness" that holds set-to-set, as opposed to the better-known point-to-set notion that characterizes topological spaces.
The concept was described early on but ignored at the time. It was rediscovered and axiomatized by V. A. Efremovič in 1934 under the name of infinitesimal space, but not published until 1951. In the interim, a version of the same concept was discovered under the name of separation space.
Definition
A proximity space (X, δ) is a set X with a relation δ between subsets of X satisfying the following properties:
For all subsets A, B and C of X:
A δ B implies B δ A
A δ B implies A ≠ ∅
A ∩ B ≠ ∅ implies A δ B
A δ (B ∪ C) implies (A δ B or A δ C)
(For all E, A δ E or B δ (X ∖ E)) implies A δ B
Proximity without the first axiom is called quasi-proximity (but then axioms 2 and 4 must be stated in a two-sided fashion).
If A δ B, we say A is near B or A and B are proximal; otherwise we say A and B are apart. We say B is a proximal neighborhood or δ-neighborhood of A, written A ≪ B, if and only if A and X ∖ B are apart.
The main properties of this set neighborhood relation, listed below, provide an alternative axiomatic characterization of proximity spaces.
For all subsets A, B, C and D of X:
A ≪ B implies A ⊆ B
A ⊆ B ≪ C ⊆ D implies A ≪ D
(A ≪ B and A ≪ C) implies A ≪ B ∩ C
A ≪ B implies X ∖ B ≪ X ∖ A
A ≪ B implies that there exists some E such that A ≪ E ≪ B
A proximity space is called separated if {x} δ {y} implies x = y.
A proximity map or proximal map f : (X, δ) → (Y, δ′) is one that preserves nearness, that is, given A, B ⊆ X, if A δ B in X then f[A] δ′ f[B] in Y. Equivalently, a map is proximal if the inverse map preserves proximal neighborhoodness. In the same notation, this means that if C ≪′ D holds in Y, then f⁻¹[C] ≪ f⁻¹[D] holds in X.
Properties
Given a proximity space, one can define a topology by letting c(A) = {x : {x} δ A} be a Kuratowski closure operator. If the proximity space is separated, the resulting topology is Hausdorff. Proximity maps will be continuous between the induced topologies.
The resulting topology is always completely regular. This can be proven by imitating the usual proofs of Urysohn's lemma, using the last property of proximal neighborhoods to create the infinite increasing chain used in proving the lemma.
Given a compact Hausdorff space, the |
https://en.wikipedia.org/wiki/Presbycusis | Presbycusis (also spelled presbyacusis, from Greek πρέσβυς presbys "old" + ἄκουσις akousis "hearing"), or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging (nosocusis and sociocusis) is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.
The cause of presbycusis is a combination of genetics, cumulative environmental exposures and pathophysiological changes related to aging. At present there are no preventive measures known; treatment is by hearing aid or surgical implant.
Presbycusis is the most common cause of hearing loss, affecting one out of three persons by age 65, and one out of two by age 75. Presbycusis is the second most common illness next to arthritis in aged people.
Many vertebrates such as fish, birds and amphibians do not experience presbycusis in old age as they are able to regenerate their cochlear sensory cells, whereas mammals including humans have genetically lost this regenerative ability.
Presentation
Primary symptoms:
sounds or speech becoming dull, muffled or attenuated
need for increased volume on television, radio, music and other audio sources
difficulty using the telephone
loss of directionality of sound
difficulty understanding speech, especially that of women and children
difficulty in speech discrimination against background noise (cocktail party effect)
Secondary symptoms:
hyperacusis, heightened sensitivity to certain volumes and frequencies of sound, resulting from "recruitment"
tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
Usually occurs after age 50, but dete |
https://en.wikipedia.org/wiki/Puzzle%20lock | A puzzle lock or puzzle padlock is a type of mechanical puzzle. It consists of a lock with unusual or hidden mechanics. Puzzle locks are reconfigurable mechanisms where the topological structure changes during the operation. Such locks are sometimes called trick locks, because there is a trick to opening them which needs to be found. Puzzle locks exist both with keys and without keys.
China
Puzzle locks with exposed keyholes were widely used in ancient China and can be very tricky to open.
There are three main types in China:
Locks with extra obstacle
Locks with indirect insertion
Multi-stage locks.
Europe
In Europe, many small puzzle padlocks had a front plate with a face or mask. The padlocks were designed to secure small bags or pouches and could be found across Europe, with the greatest concentrations around the Danubian provinces and Aquileia. They were often shaped like rings and may have been fitted around the mouth of a bag as a sort of tamper-proof seal. The earliest Roman puzzle locks date back to the 2nd century BCE.
In the 1850s in the UK, "puzzle lock" was synonymous with "letter lock" and used to denote a lettered combination lock.
See also
Puzzle box |
https://en.wikipedia.org/wiki/Alhazen%27s%20problem | Alhazen's problem, also known as Alhazen's billiard problem, is a mathematical problem in geometrical optics first formulated by Ptolemy in 150 AD.
It is named for the 11th-century Arab mathematician Alhazen (Ibn al-Haytham), who presented a geometric solution in his Book of Optics. The algebraic solution involves quartic equations and was found in 1965 by Jack M. Elkin.
Geometric formulation
The problem comprises drawing lines from two points, meeting at a third point on the circumference of a circle and making equal angles with the normal at that point (specular reflection). Thus, its main application in optics is to solve the problem, "Find the point on a spherical convex mirror at which a ray of light coming from a given point must strike in order to be reflected to another point." This leads to an equation of the fourth degree.
(Alhazen himself never used this algebraic rewriting of the problem.)
Alhazen's solution
Ibn al-Haytham solved the problem using conic sections and a geometric proof.
Algebraic solution
Later mathematicians such as Christiaan Huygens, James Gregory, Guillaume de l'Hôpital, Isaac Barrow, and many others, attempted to find an algebraic solution to the problem, using various methods, including analytic methods of geometry and derivation by complex numbers.
An algebraic solution to the problem was first found in 1965 by Jack M. Elkin (an actuary), by means of a quartic polynomial.
Other solutions were rediscovered later:
in 1989, by Harald Riede;
in 1990 (submitted in 1988), by Miller and Vegh;
and in 1992, by John D. Smith
and also by Jörg Waldvogel.
In 1997, the Oxford mathematician Peter M. Neumann proved there is no ruler-and-compass construction for the general solution of Alhazen's problem
(although in 1965 Elkin had already provided a counterexample to Euclidean construction).
Generalization
Recently, Mitsubishi Electric Research Labs researchers solved the extension of Alhazen's problem to general rotationally symmetric quadric |
https://en.wikipedia.org/wiki/Skylake%20%28microarchitecture%29 | Skylake is Intel's codename for its sixth generation Core microprocessor family that was launched on August 5, 2015, succeeding the Broadwell microarchitecture. Skylake is a microarchitecture redesign using the same 14 nm manufacturing process technology as its predecessor, serving as a tock in Intel's tick–tock manufacturing and design model. According to Intel, the redesign brings greater CPU and GPU performance and reduced power consumption. Skylake CPUs share their microarchitecture with Kaby Lake, Coffee Lake, Cannon Lake, Whiskey Lake, and Comet Lake CPUs.
Skylake is the last Intel platform on which Windows earlier than Windows 10 will be officially supported by Microsoft, although enthusiast-created modifications exist that allow Windows 8.1 and earlier to continue to receive Windows Updates on later platforms.
Some of the processors based on the Skylake microarchitecture are marketed as 6th-generation Core.
Intel officially declared end of life and discontinued Skylake LGA 1151 CPUs on March 4, 2019.
Development history
Skylake's development, as with previous processors such as Banias, Dothan, Conroe, Sandy Bridge, and Ivy Bridge, was primarily undertaken by Intel Israel at its engineering research center in Haifa, Israel. The final design was largely an evolution of Haswell, with minor improvements to performance and several power-saving features added. A major priority of Skylake's design was a microarchitecture that scales to power envelopes as low as 4.5 W, so it could be embedded in tablet computers and notebooks in addition to higher-power desktop computers and servers.
In September 2014, Intel announced the Skylake microarchitecture at the Intel Developer Forum in San Francisco, and that volume shipments of Skylake CPUs were scheduled for the second half of 2015. The Skylake development platform was announced to be available in Q1 2015. During the announcement, Intel also demonstrated two computers with desktop and mobile Skylake prototypes: the first was a |
https://en.wikipedia.org/wiki/Solsem%20cave | The Solsem cave is a cave lying to the southwest of the island of Leka in Leka municipality in Trøndelag, Norway. The cave is well known for its cave paintings, which were discovered in 1912. For a long time, they were the only known cave paintings in Norway.
To date, over twenty figures have been found painted on the cave walls. The main group consists of 13 human figures to the left of a large cross. The leading interpretation is that the images either depict a type of ritual dance performed by those who used the cave, or that the figures represent the people or powers that inhabited the rock.
Location and history of the find
The cave is the result of a fault in the rock, shaped by waves and small stones. The cave itself is only 40 meters deep, but because of its twists and turns it is still dark inside. The cave opening is 3 meters wide and faces southwest, overlooking the hamlet of Solsem.
On May 3, 1912, the cave was visited by three young men from the village. They brought with them ropes, ladders and lanterns to explore inside. Archaeologist Theodor Petersen from the Trondheim University Museum visited the site in July of the same year. A more detailed survey was carried out by Petersen and architect Claus Hjelte in 1913. Petersen wrote the first account of the find in 1914, in a festschrift honouring his colleague Karl Ditlev Rygh.
The cave was the subject of further investigations in 1916 by the Swedish archaeologist Gustaf Hallström and 1935 by the Norwegian archaeologist Gutorm Gjessing. Professor Kalle Sognnes of the University Museum in Trondheim and Terje Norsted from the Norwegian Institute for Cultural Heritage Research (NIKU) have carried out research on the cave in more recent times.
The figures and other finds in the cave
There are 13-14 human figures in the main group, which is located to the left of a large cross. The figures are between 30 and 100 cm tall. One of these supposed human figures has alternatively been interp |
https://en.wikipedia.org/wiki/DH5-Alpha%20Cell | DH5-Alpha Cells are E. coli cells engineered by American biologist Douglas Hanahan to maximize transformation efficiency. They are defined by three mutations: recA1 and endA1, which help plasmid insertion, and lacZΔM15, which enables blue-white screening. The cells are competent and often used with calcium chloride transformation to insert the desired plasmid. A study of four transformation methods and six bacterial strains showed that the most efficient combination was the DH5 strain with the Hanahan method.
Mutations
The recA1 mutation is a single point mutation that replaces glycine 160 of the recA polypeptide with an aspartic acid residue in order to disable the activity of the recombinases and inactivate homologous recombination.
The endA1 mutation inactivates an intracellular endonuclease to prevent it from degrading the inserted plasmid. |
https://en.wikipedia.org/wiki/Granulopoiesis | Granulopoiesis (or granulocytopoiesis) is a part of haematopoiesis that leads to the production of granulocytes. A granulocyte, also referred to as a polymorphonuclear leukocyte (PMN), is a type of white blood cell that has a multi-lobed nucleus, usually containing three lobes, and a significant number of cytoplasmic granules within the cell. Granulopoiesis takes place in the bone marrow. It leads to the production of three types of mature granulocytes: neutrophils (most abundant, making up to 60% of all white blood cells), eosinophils (up to 4%) and basophils (up to 1%).
Stages of granulocyte development
Granulopoiesis is often divided into two parts;
1) Granulocyte lineage determination and
2) Committed granulopoiesis.
Granulocyte lineage determination
Granulopoiesis, as well as the rest of haematopoiesis, begins from haematopoietic stem cells. These are multipotent cells that reside in the bone marrow niche and have the ability to give rise to all haematopoietic cells, as well as the capacity for self-renewal. They give rise to either a common lymphoid progenitor (CLP, a progenitor for all lymphoid cells) or a common myeloid progenitor (CMP), an oligopotent progenitor cell that gives rise to the myeloid part of the haematopoietic tree. The first stage of the myeloid lineage is a granulocyte–monocyte progenitor (GMP), still an oligopotent progenitor, which then develops into unipotent cells that will later form a population of granulocytes, as well as a population of monocytes. The first unipotent cell in granulopoiesis is a myeloblast.
Committed granulopoiesis
Committed granulopoiesis consists of the maturation stages of unipotent cells. The first cell that starts to resemble a granulocyte is a myeloblast. It is characterized by a large oval nucleus that takes up most of the space in the cell and very little cytoplasm. The next developmental stage, a promyelocyte, still has a large oval nucleus, but there is more cytoplasm in the cell at this point, also
https://en.wikipedia.org/wiki/SIGTRAN | SIGTRAN is the name, derived from signaling transport, of the former Internet Engineering Task Force (IETF) working group that produced specifications for a family of protocols that provide reliable datagram service and user layer adaptations for Signaling System 7 (SS7) and ISDN communications protocols. The SIGTRAN protocols are an extension of the SS7 protocol family, and they support the same application and call management paradigms as SS7. However, the SIGTRAN protocols use an Internet Protocol (IP) transport called Stream Control Transmission Protocol (SCTP), instead of TCP or UDP. Indeed, the most significant protocol defined by the SIGTRAN group is SCTP, which is used to carry PSTN signaling over IP.
The SIGTRAN group was significantly influenced by telecommunications engineers intent on using the new protocols for adapting IP networks to the PSTN with special regard to signaling applications. Recently, SCTP is finding applications beyond its original purpose wherever reliable datagram service is desired.
SIGTRAN has been published in RFC 2719, under the title Framework Architecture for Signaling Transport. RFC 2719 also defines the concept of a signaling gateway (SG), which converts Common Channel Signaling (CCS) messages from SS7 to SIGTRAN. Implemented in a variety of network elements including softswitches, the SG function can provide significant value to existing common channel signaling networks, leveraging investments associated with SS7 and delivering the cost/performance values associated with IP transport.
SIGTRAN protocols
The SIGTRAN family of protocols includes:
Stream Control Transmission Protocol (SCTP), RFC 2960, RFC 3873, RFC 4166, RFC 4960.
ISDN User Adaptation (IUA), RFC 4233, RFC 5133.
Message Transfer Part 2 (MTP) User Peer-to-Peer Adaptation Layer (M2PA), RFC 4165.
Message Transfer Part 2 User Adaptation Layer (M2UA), RFC 3331.
Message Transfer Part 3 User Adaptation Layer (M3UA), RFC 4666.
Signalling Connection Control Part (SCCP) User Adaptation (SUA |
https://en.wikipedia.org/wiki/Phosphatidic%20acid | Phosphatidic acids are anionic phospholipids important to cell signaling and direct activation of lipid-gated ion channels. Hydrolysis of phosphatidic acid gives rise to one molecule each of glycerol and phosphoric acid and two molecules of fatty acids. They constitute about 0.25% of phospholipids in the bilayer.
Structure
Phosphatidic acid consists of a glycerol backbone, with, in general, a saturated fatty acid bonded to carbon-1, an unsaturated fatty acid bonded to carbon-2, and a phosphate group bonded to carbon-3.
Formation and degradation
Besides de novo synthesis, PA can be formed in three ways:
By phospholipase D (PLD), via the hydrolysis of the P-O bond of phosphatidylcholine (PC) to produce PA and choline.
By the phosphorylation of diacylglycerol (DAG) by DAG kinase (DAGK).
By the acylation of lysophosphatidic acid by lysoPA-acyltransferase (LPAAT); this is the most common pathway.
The glycerol 3-phosphate pathway for de novo synthesis of PA is shown here:
In addition, PA can be converted into DAG by lipid phosphate phosphohydrolases (LPPs) or into lyso-PA by phospholipase A (PLA).
Roles in the cell
The role of PA in the cell can be divided into three categories:
PA is the precursor for the biosynthesis of many other lipids.
The physical properties of PA influence membrane curvature.
PA acts as a signaling lipid, recruiting cytosolic proteins to appropriate membranes (e.g., sphingosine kinase 1).
PA plays a very important role in phototransduction in Drosophila.
PA is a lipid ligand that gates ion channels. See also lipid-gated ion channels.
The first three roles are not mutually exclusive. For example, PA may be involved in vesicle formation by promoting membrane curvature and by recruiting the proteins to carry out the much more energetically unfavourable task of neck formation and pinching.
Roles in biosynthesis
PA is a vital cell lipid that acts as a biosynthetic precursor for the formation (directly or indirectly) of all acylglycerol lipi |
https://en.wikipedia.org/wiki/Courant%E2%80%93Friedrichs%E2%80%93Lewy%20condition | In mathematics, the Courant–Friedrichs–Lewy (CFL) condition is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation produces incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy, who described it in their 1928 paper.
Heuristic description
The principle behind the condition is that, for example, if a wave is moving across a discrete spatial grid and we want to compute its amplitude at discrete time steps of equal duration, then this duration must be less than the time for the wave to travel to adjacent grid points. As a corollary, when the grid point separation is reduced, the upper limit for the time step also decreases. In essence, the numerical domain of dependence of any point in space and time (as determined by initial conditions and the parameters of the approximation scheme) must include the analytical domain of dependence (wherein the initial conditions have an effect on the exact value of the solution at that point) to assure that the scheme can access the information required to form the solution.
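For a one-dimensional problem this heuristic is commonly written in the following textbook form (added here for concreteness, not quoted from the excerpt), where u is the wave speed, Δt the time step, Δx the grid spacing, and C_max a scheme-dependent bound (typically 1 for explicit schemes):

% One-dimensional CFL condition (standard form, stated here as an illustration)
C = \frac{u\,\Delta t}{\Delta x} \le C_{\max}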
Statement
To make a reasonably formally precise statement of the condition, it is necessary to define the following quantities:
Spatial coordinate: one of the coordinates of the physical space in which the problem is posed
Spatial dimension of the problem: the number of spatial dimensions, i.e., the number of spatial coordinates of the physical space where the problem is posed. Typical values are 1, 2 and 3.
Time: the coordinate, acting as a parameter, which describes the evolution of the system, distinct from the spatial coordinates
The spatial coordinates and the time are disc |
https://en.wikipedia.org/wiki/Mycoestrogen | Mycoestrogens are xenoestrogens produced by fungi. They are sometimes referred to as mycotoxins. Among important mycoestrogens are zearalenone, zearalenol and zearalanol. Although all of these can be produced by various Fusarium species, zearalenol and zearalanol may also be produced endogenously in ruminants that have ingested zearalenone. Alpha-zearalanol is also produced semisynthetically, for veterinary use; such use is prohibited in the European Union.
Mechanism of action
Mycoestrogens act as agonists of the estrogen receptors, ERα and ERβ.
Sources
Mycoestrogens are produced by various strains of fungi, many of which fall under the genus Fusarium. Fusarium fungi are filamentous fungi that are found in the soil and are associated with plants and some crops, especially cereals. Zearalenone is mainly produced by F. graminearum and F. culmorum strains, which inhabit different areas depending on temperature and humidity. F. graminearum prefers to inhabit warmer and more humid locations such as Eastern Europe, North America, Eastern Australia, and Southern China, in comparison to F. culmorum, which is found in colder Western Europe.
Health effects
Mycoestrogens mimic natural estrogen in the body by acting as estrogen receptor (ER) ligands. Mycoestrogens have been identified as endocrine disruptors due to their high binding affinity for ERα and ERβ, exceeding that of well-known antagonists such as bisphenol A and DDT. Studies have been performed that strongly suggest a relationship between detectable levels of mycoestrogen and growth and pubertal development. More than one study has shown that detectable levels of zearalenone and its metabolite alpha-zearalanol in girls are associated with significantly shorter heights at menarche. Other reports have documented premature onset of puberty in girls. Estrogens are known to cause decreased body weight in model animals, and the same effect has been seen in rats exposed to zearalenone. Interactions of ZEN and its met
https://en.wikipedia.org/wiki/Gene%20co-expression%20network | A gene co-expression network (GCN) is an undirected graph, where each node corresponds to a gene, and a pair of nodes is connected with an edge if there is a significant co-expression relationship between them. Having gene expression profiles of a number of genes for several samples or experimental conditions, a gene co-expression network can be constructed by looking for pairs of genes which show a similar expression pattern across samples, since the transcript levels of two co-expressed genes rise and fall together across samples. Gene co-expression networks are of biological interest since co-expressed genes are controlled by the same transcriptional regulatory program, functionally related, or members of the same pathway or protein complex.
The direction and type of co-expression relationships are not determined in gene co-expression networks, whereas in a gene regulatory network (GRN) a directed edge connects two genes, representing a biochemical process such as a reaction, transformation, interaction, activation or inhibition. Compared to a GRN, a GCN does not attempt to infer the causal relationships between genes, and in a GCN the edges represent only a correlation or dependency relationship among genes. Modules, or the highly connected subgraphs in gene co-expression networks, correspond to clusters of genes that have a similar function or are involved in a common biological process which causes many interactions among themselves.
Gene co-expression networks are usually constructed using datasets generated by high-throughput gene expression profiling technologies such as Microarray or RNA-Seq. Recently, co-expression networks are used to analyze single cell RNA-Seq data, in order to better characterize the gene to gene relations in a cohort of cells from a specific cell type.
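As a rough sketch of this construction (illustrative only; the gene names, data and correlation threshold below are made up and not from the article), one can threshold pairwise Pearson correlations computed across samples:

import numpy as np

rng = np.random.default_rng(0)
pattern = rng.normal(size=12)                      # a shared expression pattern over 12 samples
expression = np.vstack([
    pattern + 0.1 * rng.normal(size=12),           # geneA follows the pattern
    pattern + 0.1 * rng.normal(size=12),           # geneB is co-expressed with geneA
    rng.normal(size=12),                           # geneC is unrelated
    -pattern + 0.1 * rng.normal(size=12),          # geneD is anti-correlated with the pattern
])
genes = ["geneA", "geneB", "geneC", "geneD"]       # hypothetical gene names

corr = np.corrcoef(expression)                     # pairwise Pearson correlation across samples
threshold = 0.8                                    # illustrative cut-off
edges = [(genes[i], genes[j], corr[i, j])
         for i in range(len(genes))
         for j in range(i + 1, len(genes))
         if abs(corr[i, j]) >= threshold]

for g1, g2, r in edges:
    print(f"{g1} -- {g2}  (r = {r:+.2f})")         # the edges of the co-expression network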
History
The concept of gene co-expression networks was first introduced by Butte and Kohane in 1999 as relevance networks. They gathered the measurement data of medical laboratory tests |
https://en.wikipedia.org/wiki/Human%20maximisation%20test | The Human maximisation test (HMT) is a method for testing for contact allergens. It was first developed by Albert Kligman in 1966 and updated by Kligman and William Epstein in 1975. The first paper appeared in 1966 and was a citation classic in 1985.
The test uses human volunteers (usually 25) and sodium laureth sulphate to maximise the response. Because of the potentially large human reaction, it is generally not considered ethical to use today. It does not have a guideline under the OECD Guidelines for the Testing of Chemicals. It has been compared with the murine local lymph node assay.
See also
Patch test |
https://en.wikipedia.org/wiki/Lithium%20burning | Lithium burning is a nucleosynthetic process in which lithium is depleted in a star. Lithium is generally present in brown dwarfs and not in older low-mass stars. Stars, which by definition must achieve the high temperature (2.5 × 10⁶ K) necessary for fusing hydrogen, rapidly deplete their lithium.
7Li
Burning of the most abundant isotope of lithium, lithium-7, occurs by a collision of 7Li and a proton producing beryllium-8, which promptly decays into two helium-4 nuclei. The temperature necessary for this reaction is just below the temperature necessary for hydrogen fusion. Convection in low-mass stars ensures that lithium in the whole volume of the star is depleted. Therefore, the presence of the lithium line in a candidate brown dwarf's spectrum is a strong indicator that it is indeed substellar.
6Li
From a study of lithium abundances in 53 T Tauri stars, it has been found that lithium depletion varies strongly with size, suggesting that lithium burning by the P-P chain, during the last highly convective and unstable stages of the later pre–main-sequence phase of the Hayashi contraction, may be one of the main sources of energy for T Tauri stars. Rapid rotation tends to improve mixing and increase the transport of lithium into deeper layers where it is destroyed. T Tauri stars generally increase their rotation rates as they age, through contraction and spin-up, as they conserve angular momentum. This causes an increased rate of lithium loss with age. Lithium burning will also increase with higher temperatures and mass, and will last for at most a little over 100 million years.
The P-P chain for lithium burning is as follows:
{| border="0"
|- style="height:2em;"
| 3He || + || 4He || → || 7Be || || ||
|- style="height:2em;"
| || || || || 7Be || + e− || → || 7Li + νe
|- style="height:2em;"
| 7Li || + || 1H || → || 8Be || || ||
|- style="height:2em;"
| || || || || 8Be || || → 2× || 4He + energy
|}
It will not occur in stars less than |
https://en.wikipedia.org/wiki/Genealogical%20numbering%20systems | Several genealogical numbering systems have been widely adopted for presenting family trees and pedigree charts in text format.
Ascending numbering systems
Ahnentafel
Ahnentafel, also known as the Eytzinger Method, Sosa Method, and Sosa-Stradonitz Method, allows for the numbering of ancestors beginning with a descendant. This system allows one to derive an ancestor's number without compiling the complete list, and allows one to derive an ancestor's relationship based on their number. The number of a person's father is twice their own number, and the number of a person's mother is twice their own, plus one. For instance, if John Smith is 10, his father is 20, and his mother is 21.
In order to readily have the generation stated for a certain person, the Ahnentafel numbering may be preceded by the generation. This method's usefulness becomes apparent when applied further back in the generations: e.g. 08-146 is a male preceding the subject by 7 (8−1) generations. This ancestor was the father of a woman (146/2=73) (in the genealogical line of the subject), who was the mother of a man (73/2=36.5), further down the line the father of a man (36/2=18), father of a woman (18/2=9), mother of a man (9/2=4.5), father of the subject's father (4/2=2). Hence, 08-146 is the subject's father's father's mother's father's father's mother's father.
The atree or Binary Ahnentafel method is based on the same numbering of nodes, but first converts the numbers to binary notation and then converts each 0 to M (for Male) and each 1 to F (for Female). The first character of each code (shown as X in the table below) is M if the subject is male and F if the subject is female. For example 5 becomes 101 and then FMF (or MMF if the subject is male). An advantage of this system is easier understanding of the genealogical path.
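Both conventions are easy to mechanize. The short Python sketch below (the function names are illustrative, not part of any standard) decodes an Ahnentafel number by the halving rule described above and produces the binary M/F code; its output can be checked against the 08-146 and 5 → MMF examples given here:

def ahnentafel_path(n):
    """Decode an Ahnentafel number by repeated halving: even -> father, odd -> mother."""
    steps = []
    while n > 1:
        steps.append("father" if n % 2 == 0 else "mother")
        n //= 2
    # The halvings walk from the ancestor back to the subject, so reverse them
    # to read the path from the subject outward.
    return "'s ".join(reversed(steps)) if steps else "self"

def atree_code(n, subject_male=True):
    """Binary Ahnentafel code: binary digits with 0 -> M and 1 -> F,
    the first character replaced by the subject's own sex."""
    bits = bin(n)[2:]
    body = "".join("M" if b == "0" else "F" for b in bits[1:])
    return ("M" if subject_male else "F") + body

print(ahnentafel_path(146))   # father's father's mother's father's father's mother's father
print(atree_code(5))          # MMF (or FMF with subject_male=False)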
The first 15 codes in each system, identifying individuals in four generations, are as follows:
Surname methods
Genealogical writers sometimes choose to present ancest |
https://en.wikipedia.org/wiki/SelectaVision | SelectaVision was a trademark name used on four classes of device by RCA:
The Holotape, a prototype video medium
Magnetic tape
VHS videocassette recorders, and
Capacitance Electronic Disc videodisc players and the discs themselves, introduced in 1981.
Capacitance Electronic Disc's competitors, Philips/Magnavox and Pioneer, instead manufactured optical discs, read with lasers. On April 4, 1984, RCA, having sold only 550,000 players, ended sales, losing $580 million. The losses resulted in General Electric's acquisition of RCA in 1986, and the "SelectaVision" brand was abandoned.
See also
Video High Density (JVC, 1970)
Electronic Video Recording (CBS, 1967)
Phonovision (Baird, 1928)
Vitascan (DuMont, 1949) |
https://en.wikipedia.org/wiki/Nitrite | The nitrite ion has the chemical formula . Nitrite (mostly sodium nitrite) is widely used throughout chemical and pharmaceutical industries. The nitrite anion is a pervasive intermediate in the nitrogen cycle in nature. The name nitrite also refers to organic compounds having the –ONO group, which are esters of nitrous acid.
Production
Sodium nitrite is made industrially by passing a mixture of nitrogen oxides into aqueous sodium hydroxide or sodium carbonate solution:
NO + NO2 + 2 NaOH → 2 NaNO2 + H2O
The product is purified by recrystallization. Alkali metal nitrites are thermally stable up to and beyond their melting point (441 °C for KNO2). Ammonium nitrite can be made from dinitrogen trioxide, N2O3, which is formally the anhydride of nitrous acid:
2 NH3 + H2O + N2O3 → 2 NH4NO2
Structure
The nitrite ion has a symmetrical structure (C2v symmetry), with both N–O bonds having equal length and a bond angle of about 115°. In valence bond theory, it is described as a resonance hybrid with equal contributions from two canonical forms that are mirror images of each other. In molecular orbital theory, there is a sigma bond between each oxygen atom and the nitrogen atom, and a delocalized pi bond made from the p orbitals on nitrogen and oxygen atoms which is perpendicular to the plane of the molecule. The negative charge of the ion is equally distributed on the two oxygen atoms. Both nitrogen and oxygen atoms carry a lone pair of electrons. Therefore, the nitrite ion is a Lewis base.
In the gas phase it exists predominantly as a trans-planar molecule.
Reactions
Acid-base properties
Nitrite is the conjugate base of the weak acid nitrous acid:
HNO2 ⇌ H+ + NO2−; pKa ≈ 3.3 at 18 °C
Nitrous acid is also highly volatile, tending to disproportionate:
3 HNO2 (aq) ⇌ H3O+ + NO3− + 2 NO
This reaction is slow at 0 °C. Addition of acid to a solution of a nitrite in the presence of a reducing agent, such as iron(II), is a way to make nitric oxide (NO) in the laboratory.
Oxidation and reduction
The formal oxidation sta |
https://en.wikipedia.org/wiki/System%20of%20differential%20equations | In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations.
Linear system of differential equations
Like any system of equations, a system of linear differential equations is said to be overdetermined if there are more equations than the unknowns.
For an overdetermined system to have a solution, it needs to satisfy the compatibility conditions. For example, consider the system of equations for an unknown function u(x, y):
∂u/∂x = f(x, y), ∂u/∂y = g(x, y).
Then the necessary condition for the system to have a solution is:
∂f/∂y = ∂g/∂x.
See also: Cauchy problem and Ehrenpreis's fundamental principle.
Non-linear system of differential equations
Perhaps the most famous example of a non-linear system of differential equations is the Navier–Stokes equations. Unlike the linear case, the existence of a solution of a non-linear system is a difficult problem (cf. Navier–Stokes existence and smoothness.)
See also: h-principle.
Differential system
A differential system is a means of studying a system of partial differential equations using geometric ideas such as differential forms and vector fields.
For example, the compatibility conditions of an overdetermined system of differential equations can be succinctly stated in terms of differential forms (i.e., for a form to be exact, it needs to be closed). See integrability conditions for differential systems for more.
See also: :Category:differential systems.
Notes
See also
Integral geometry
Cartan–Kuranishi prolongation theorem |
https://en.wikipedia.org/wiki/Time-of-flight%20detector | A time-of-flight (TOF) detector is a particle detector which can discriminate between a lighter and a heavier elementary particle of the same momentum using their time of flight between two scintillators. The first of the scintillators activates a clock upon being hit while the other stops the clock upon being hit. If the two masses are denoted by m1 and m2 and have velocities v1 and v2, then the time of flight difference is given by
Δt = L (1/v1 − 1/v2) ≈ L c (m1² − m2²) / (2p²),
where L is the distance between the scintillators. The approximation is in the relativistic limit at momentum p, and c denotes the speed of light in vacuum.
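As a rough numerical illustration of the approximation above (the detector length, momentum and approximate particle masses below are illustrative values, not taken from the article):

C_LIGHT = 299_792_458.0        # speed of light in m/s

def tof_difference(m1_gev, m2_gev, p_gev, length_m):
    """Relativistic-limit time-of-flight difference in seconds,
    with masses and momentum expressed in GeV (natural-unit ratio)."""
    return length_m * (m1_gev**2 - m2_gev**2) / (2.0 * p_gev**2) / C_LIGHT

m_pion = 0.1396                # GeV/c^2, approximate
m_kaon = 0.4937                # GeV/c^2, approximate
dt = tof_difference(m_kaon, m_pion, p_gev=1.0, length_m=10.0)
print(f"K/pi separation over 10 m at 1 GeV/c: {dt * 1e9:.1f} ns")   # about 3.7 ns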
See also
Time-of-flight mass spectrometry
Particle detectors |
https://en.wikipedia.org/wiki/Disk%20buffer | In computer storage, disk buffer (often ambiguously called disk cache or cache buffer) is the embedded memory in a hard disk drive (HDD) or solid state drive (SSD) acting as a buffer between the rest of the computer and the physical hard disk platter or flash memory that is used for storage. Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory.
Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters.
The disk buffer is physically distinct from and is used differently from the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, and the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging from 8 MB to 4 GB, and the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused. In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called disk buffer.
Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB.
Uses
Read-ahead/read-behind
When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling and possibly fine-actuating, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data.
The data read ahead of request during this wait is unrequested but free, so typically saved in the disk buffer in case it is requested later.
Similarly, data can be read for fre |
https://en.wikipedia.org/wiki/Gallai%E2%80%93Hasse%E2%80%93Roy%E2%80%93Vitaver%20theorem | In graph theory, the Gallai–Hasse–Roy–Vitaver theorem is a form of duality between the colorings of the vertices of a given undirected graph and the orientations of its edges. It states that the minimum number of colors needed to properly color any graph G equals one plus the length of a longest path in an orientation of G chosen to minimize this path's length. The orientations for which the longest path has minimum length always include at least one acyclic orientation.
This theorem implies that every orientation of a graph with chromatic number k contains a simple directed path with k vertices; this path can be constrained to begin at any vertex that can reach all other vertices of the oriented graph.
Examples
A bipartite graph may be oriented from one side of the bipartition to the other. The longest path in this orientation has length one, with only two vertices. Conversely, if a graph is oriented without any three-vertex paths, then every vertex must either be a source (with no incoming edges) or a sink (with no outgoing edges), and the partition of the vertices into sources and sinks shows that it is bipartite.
In any orientation of a cycle graph of odd length, it is not possible for the edges to alternate in orientation all around the cycle, so some two consecutive edges must form a path with three vertices. Correspondingly, the chromatic number of an odd cycle is three.
Proof
To prove that the chromatic number is greater than or equal to the minimum number of vertices in a longest path, suppose that a given graph has a coloring with k colors, for some number k. Then it may be acyclically oriented by numbering the colors and directing each edge from its lower-numbered endpoint to the higher-numbered endpoint. With this orientation, the numbers are strictly increasing along each directed path, so each path can include at most one vertex of each color, for a total of at most k vertices per path.
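A small sketch of this orientation step (not from the article; the example graph and colouring below are made up):

# Direct every edge from its lower-coloured endpoint to its higher-coloured one,
# so colours strictly increase along any directed path and no directed path can
# contain more vertices than there are colours.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]   # an example graph
colour = {"a": 0, "b": 1, "c": 2, "d": 0}                  # a proper 3-colouring of it

oriented = [(u, v) if colour[u] < colour[v] else (v, u) for (u, v) in edges]
print(oriented)   # [('a', 'b'), ('b', 'c'), ('a', 'c'), ('d', 'c')]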
To prove that the chromatic number is less than or equal to the minimum number of vertices in a longest path, suppose that a given graph has an or |
https://en.wikipedia.org/wiki/Slit%20%28protein%29 | Slit is a family of secreted extracellular matrix proteins which play an important signalling role in the neural development of most bilaterians (animals with bilateral symmetry). While lower animal species, including insects and nematode worms, possess a single Slit gene, humans, mice and other vertebrates possess three Slit homologs: Slit1, Slit2 and Slit3. Human Slits have been shown to be involved in certain pathological conditions, such as cancer and inflammation.
The ventral midline of the central nervous system is a key place where axons can either decide to cross and laterally project or stay on the same side of the brain. The main function of Slit proteins is to act as midline repellents, preventing the crossing of longitudinal axons through the midline of the central nervous system of most bilaterian animal species, including mice, chickens, humans, insects, nematode worms and planarians. It also prevents the recrossing of commissural axons. Its canonical receptor is Robo but it may have other receptors. The Slit protein is produced and secreted by cells within the floor plate (in vertebrates) or by midline glia (in insects) and diffuses outward. Slit/Robo signaling is important in pioneer axon guidance.
Discovery
Slit mutations were first discovered in the Nüsslein-Volhard/Wieschaus patterning screen, where they were seen to affect the external midline structures in the embryos of Drosophila melanogaster, also known as the common fruit fly. In this experiment, researchers screened for different mutations in D. melanogaster embryos that affected the neural development of axons in the central nervous system. They found that the mutations in commissureless genes (Slit genes) led to the growth cones that typically cross the midline remaining on their own side. The findings from this screening suggest that Slit genes are responsible for repulsive signaling along the neuronal midline.
Structure
Slit1, Slit2, and Slit3 each have the same basic structure. |
https://en.wikipedia.org/wiki/GameSpy | GameSpy was an American provider of online multiplayer and matchmaking middleware for video games founded in 1999 by Mark Surfas. After the release of a multiplayer server browser for the game, QSpy, Surfas licensed the software under the GameSpy brand to other video game publishers through a newly established company, GameSpy Industries, which also incorporated his Planet Network of video game news and information websites, and GameSpy.com.
GameSpy merged with IGN in 2004; by 2014, its services had been used by over 800 video game publishers and developers since its launch. In August 2012, the GameSpy Industries division (which remained responsible for the GameSpy service) was acquired by mobile video game developer Glu Mobile. IGN (then owned by News Corporation) retained ownership of the GameSpy.com website. In February 2013, IGN's new owner, Ziff Davis, shut down IGN's "secondary" sites, including GameSpy's network. This was followed by the announcement in April 2014 that GameSpy's service platform would be shut down on May 31, 2014.
History
The 1996 release of id Software's video game Quake, one of the first 3D multiplayer action games to allow play over the Internet, furthered the concept of players creating and releasing "mods" or modifications of games. Mark Surfas saw the need for hosting and distribution of these mods and created PlanetQuake, a Quake-related hosting and news site. The massive success of mods catapulted PlanetQuake to huge traffic and a central position in the burgeoning game website scene.
Quake also marked the beginning of the Internet multiplayer real-time action game scene. However, finding a Quake server on the Internet proved difficult, as players could only share IP addresses of known servers between themselves or post them on websites. To solve this problem, a team of three programmers (consisting of Joe "QSpy" Powell, Tim Cook, and Jack "morbid" Matthews) formed Spy Software and created QSpy (or QuakeSpy). This allowed the list |
https://en.wikipedia.org/wiki/Mueller%20calculus | Mueller calculus is a matrix method for manipulating Stokes vectors, which represent the polarization of light. It was developed in 1943 by Hans Mueller. In this technique, the effect of a particular optical element is represented by a Mueller matrix—a 4×4 matrix that is an overlapping generalization of the Jones matrix.
Introduction
Disregarding coherent wave superposition, any fully polarized, partially polarized, or unpolarized state of light can be represented by a Stokes vector S; and any optical element can be represented by a Mueller matrix M.
If a beam of light is initially in the state S_i and then passes through an optical element M and comes out in a state S_o, then it is written
S_o = M S_i.
If a beam of light passes through optical element M1 followed by M2 then M3, it is written
S_o = M3 (M2 (M1 S_i));
given that matrix multiplication is associative, it can be written
S_o = M3 M2 M1 S_i.
Matrix multiplication is not commutative, so in general
M3 M2 M1 S_i ≠ M1 M2 M3 S_i.
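For concreteness, a minimal numerical sketch of S_o = M S_i, assuming the standard Mueller matrix of an ideal horizontal linear polarizer (the values are illustrative and not taken from this excerpt):

import numpy as np

S_i = np.array([1.0, 0.0, 0.0, 0.0])            # Stokes vector of unpolarized light, unit intensity
M_polarizer_h = 0.5 * np.array([[1, 1, 0, 0],
                                [1, 1, 0, 0],
                                [0, 0, 0, 0],
                                [0, 0, 0, 0]])  # ideal linear polarizer, transmission axis horizontal

S_o = M_polarizer_h @ S_i
print(S_o)   # [0.5 0.5 0.  0. ] -> half the intensity, fully horizontally polarized

# For a train of elements M1, M2, M3 (in the order the light meets them),
# the combined matrix is M3 @ M2 @ M1, and the order matters.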
Mueller vs. Jones calculi
With disregard for coherence, light which is unpolarized or partially polarized must be treated using the Mueller calculus, while fully polarized light can be treated with either the Mueller calculus or the simpler Jones calculus. Many problems involving coherent light (such as from a laser) must be treated with Jones calculus, however, because it works directly with the electric field of the light rather than with its intensity or power, and thereby retains information about the phase of the waves.
More specifically, the following can be said about Mueller matrices and Jones matrices:
Stokes vectors and Mueller matrices operate on intensities and their differences, i.e. incoherent superpositions of light; they are not adequate to describe either interference or diffraction effects.
(...)
Any Jones matrix [J] can be transformed into the corresponding Mueller–Jones matrix, M, using the following relation:
M = A (J ⊗ J*) A⁻¹,
where * indicates the complex conjugate [sic], [A is:]
A =
| 1   0   0   1 |
| 1   0   0  −1 |
| 0   1   1   0 |
| 0   i  −i   0 |
and ⊗ is the tensor (Kronecker) product.
(...)
While the Jones matrix has eight i |
https://en.wikipedia.org/wiki/List%20of%20nonlinear%20ordinary%20differential%20equations | See also List of nonlinear partial differential equations and List of linear ordinary differential equations.
A–F
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Applications
|-
|Abel's differential equation of the first kind
|1
|
|Mathematics
|-
|Abel's differential equation of the second kind
|1
|
|Mathematics
|-
|Bellman's equation or Emden-Fowler's equation
|2
|
|Mathematics
|-
|Bernoulli equation
|1
|
|Mathematics
|-
|Besant-Rayleigh-Plesset equation
|2
|
|Fluid dynamics
|-
|Blasius equation
|3
|
|Blasius boundary layer
|-
|Chandrasekhar equation
|2
|
|Astrophysics
|-
|Chandrasekhar's white dwarf equation
|2
|
|Astrophysics
|-
|Chrystal's equation
|1
|
|Mathematics
|-
|Clairaut's equation
|1
|
|Mathematics
|-
|D'Alembert's equation
|1
|
|Mathematics
|-
|Darboux equation
|1
|
|Mathematics
|-
|De Boer-Ludford equation
|2
|
|Plasma physics
|-
|Duffing equation
|2
|
|Oscillators
|-
|Emden equation
|2
|
|Astrophysics
|-
|Euler's differential equation
|1
|
|Mathematics
|-
|Falkner–Skan equation
|3
|
|Falkner–Skan boundary layer
|}
G–K
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Applications
|-
|Ivey's equation
|2
|
|
|-
|Jacobi's differential equation
|1
|
|Mathematics
|-
|Kidder equation
|2
|
|Flow through porous medium
|-
|Krogdahl equation
|2
|
|Stellar pulsation
|}
L–Q
{|class="wikitable" style="background: white; color: black; text-align: left"
|-style="background: #eee"
!Name
!Order
!Equation
!Applications
|-
|Lane–Emden equation
|2
|
|Astrophysics
|-
|Langmuir equation
|2
|
|Environmental Engineering
|-
|Langmuir-Blodgett equation
|2
|
|
|-
|Langmuir-Boguslavski equation
|2
|
|
|-
|Liñán's equation
|2
|
|Combustion
|-
|Painlevé I transcendent
|2
|
|Mathematics
|-
|Painlevé II transcendent
|2
|
|Mathematics
|-
|Painlevé III transcendent
|2
|
|Mathematics
|-
|Painlevé |
https://en.wikipedia.org/wiki/Dolan%20DNA%20Learning%20Center | The DNA Learning Center (DNALC) is a genetics learning center affiliated with the Cold Spring Harbor Laboratory, in Cold Spring Harbor, New York. It is the world's first science center devoted entirely to genetics education and offers online education, class field trips, student summer day camps, and teacher training. The DNALC's family of internet sites cover broad topics including basic heredity, genetic disorders, eugenics, the discovery of the structure of DNA, DNA sequencing, cancer, neuroscience, and plant genetics.
The center developed a website called DNA Subway for the iPlant Collaborative.
See also
National Centre for Biotechnology Education, UK |
https://en.wikipedia.org/wiki/Vector%20%28molecular%20biology%29 | In molecular cloning, a vector is any particle (e.g., plasmids, cosmids, Lambda phages) used as a vehicle to artificially carry a foreign nucleic acid sequence – usually DNA – into another cell, where it can be replicated and/or expressed. A vector containing foreign DNA is termed recombinant DNA. The four major types of vectors are plasmids, viral vectors, cosmids, and artificial chromosomes. Of these, the most commonly used vectors are plasmids. Common to all engineered vectors are an origin of replication, a multicloning site, and a selectable marker.
The vector itself generally carries a DNA sequence that consists of an insert (in this case the transgene) and a larger sequence that serves as the "backbone" of the vector. The purpose of a vector which transfers genetic information to another cell is typically to isolate, multiply, or express the insert in the target cell. All vectors may be used for cloning and are therefore cloning vectors, but there are also vectors designed specially for cloning, while others may be designed specifically for other purposes, such as transcription and protein expression. Vectors designed specifically for the expression of the transgene in the target cell are called expression vectors, and generally have a promoter sequence that drives expression of the transgene. Simpler vectors called transcription vectors are only capable of being transcribed but not translated: they can be replicated in a target cell but not expressed, unlike expression vectors. Transcription vectors are used to amplify their insert.
The manipulation of DNA is normally conducted on E. coli vectors, which contain elements necessary for their maintenance in E. coli. However, vectors may also have elements that allow them to be maintained in another organism such as yeast, plant or mammalian cells, and these vectors are called shuttle vectors. Such vectors have bacterial or viral elements which may be transferred to the non-bacterial host organism, however othe |
https://en.wikipedia.org/wiki/Non-B%20database | Non-B DB is a database integrating annotations and analysis of non-B DNA-forming sequence motifs. The database provides alternative DNA structure predictions including Z-DNA motifs, quadruplex-forming motifs, inverted repeats, mirror repeats and direct repeats and their associated subsets of cruciforms, triplex and slipped structures, respectively.
See also
B-DNA
non-B DNA |
https://en.wikipedia.org/wiki/Motivational%20salience | Motivational salience is a cognitive process and a form of attention that motivates or propels an individual's behavior towards or away from a particular object, perceived event or outcome. Motivational salience regulates the intensity of behaviors that facilitate the attainment of a particular goal, the amount of time and energy that an individual is willing to expend to attain a particular goal, and the amount of risk that an individual is willing to accept while working to attain a particular goal.
Motivational salience is composed of two component processes that are defined by their attractive or aversive effects on an individual's behavior relative to a particular stimulus: incentive salience and aversive salience. Incentive salience is the attractive form of motivational salience that causes approach behavior, and is associated with operant reinforcement, desirable outcomes, and pleasurable stimuli. Aversive salience is the aversive form of motivational salience that causes avoidance behavior, and is associated with operant punishment, undesirable outcomes, and unpleasant stimuli.
Incentive salience
Incentive salience is a cognitive process that grants a "desire" or "want" attribute, which includes a motivational component to a rewarding stimulus. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior – also known as approach behavior – and consummatory behavior. The "wanting" of incentive salience differs from "liking" in the sense that liking is the pleasure that is immediately gained from the acquisition or consumption of a rewarding stimulus; the "wanting" of incentive salience serves a "motivational magnet" quality of a rewarding stimulus that makes it a desirable and attractive goal, transforming it from a mere sensory experience into something that commands attention, induces approach, and causes it to be sought out.
Incentive salience is regulated by a number of brain structures, but it is assigned to stim |
https://en.wikipedia.org/wiki/Dolfan%20Denny | Dolfan Denny was the name given to Denny Sym by Miami Dolphins football fans. He was known for cheering on the NFL team for 33 years as a one-man sideline show, often leading Miami crowds in cheers and chants from each corner of the field. He usually wore glittering orange and aqua hats, and did so from the Dolphins' first game in 1966 until 2000. In 1976, the Dolphins began paying him $50 per game to cheer from the sideline after being impressed by his spirit and passion. He retired his act in 2000, at the age of 65, after suffering chronic health problems. At one point, he had to cheer while seated due to knee problems.
Death
Denny Sym died on March 18, 2007, of kidney disease and cancer at the age of 72. |
https://en.wikipedia.org/wiki/L.%20E.%20J.%20Brouwer | Luitzen Egbertus Jan Brouwer (; ; 27 February 1881 – 2 December 1966), usually cited as L. E. J. Brouwer but known to his friends as Bertus, was a Dutch mathematician and philosopher who worked in topology, set theory, measure theory and complex analysis. Regarded as one of the greatest mathematicians of the 20th century, he is known as the founder of modern topology, particularly for establishing his fixed-point theorem and the topological invariance of dimension.
Brouwer also became a major figure in the philosophy of intuitionism, a constructivist school of mathematics which argues that math is a cognitive construct rather than a type of objective truth. This position led to the Brouwer–Hilbert controversy, in which Brouwer sparred with his formalist colleague David Hilbert. Brouwer's ideas were subsequently taken up by his student Arend Heyting and Hilbert's former student Hermann Weyl. In addition to his mathematical work, Brouwer also published the short philosophical tract Life, Art, and Mysticism (1905).
Biography
Brouwer was born to Dutch Protestant parents. Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The most important were his fixed point theorem, the topological invariance of degree, and the topological invariance of dimension. Among mathematicians generally, the best known is the first one, usually referred to now as the Brouwer fixed point theorem. It is a corollary to the second, concerning the topological invariance of degree, which is the best known among algebraic topologists. The third theorem is perhaps the hardest.
Brouwer also proved the simplicial approximation theorem in the foundations of algebraic topology, which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes, of the treatment of general continuous mappings. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. He was an Invited Speaker of the |
https://en.wikipedia.org/wiki/Preordered%20class | In mathematics, a preordered class is a class equipped with a preorder.
Definition
When dealing with a class C, it is possible to define a class relation on C as a subclass of the product C × C. Then, it is convenient to use the language of relations on a set.
A preordered class is a class with a preorder on it. Partially ordered class and totally ordered class are defined in a similar way. These concepts generalize respectively those of preordered set, partially ordered set and totally ordered set. However, it is difficult to work with them as in the small case because many constructions common in a set theory are no longer possible in this framework.
Equivalently, a preordered class is a thin category, that is, a category with at most one morphism from an object to another.
Examples
In any category C, when D is a class of morphisms of C containing identities and closed under composition, the relation "there exists a D-morphism from X to Y" is a preorder on the class of objects of C.
The class Ord of all ordinals is a totally ordered class with the classical ordering of ordinals.
https://en.wikipedia.org/wiki/Personal%20NetWare | NetWare Lite and Personal NetWare are a series of discontinued peer-to-peer local area networks developed by Novell for DOS- and Windows-based personal computers aimed at personal users and small businesses in the 1990s.
NetWare Lite
In 1991, Novell introduced a radically different and cheaper product from their central server-based NetWare product, NetWare Lite 1.0 (NWL), codenamed "Slurpee", in answer to Artisoft's similar LANtastic. Both were peer-to-peer systems, where no dedicated server was required, but instead all PCs on the network could share their resources.
NetWare Lite contained a unique serial number in the EXE files that prevented running the same copy on multiple nodes within a single network. This basic copy protection was easily circumvented by comparing files from different licenses and accordingly editing the serial number bytes.
The product was upgraded to NetWare Lite 1.1 and also came bundled with DR DOS 6.0. Some components of NetWare Lite were used in Novell's NetWare PalmDOS 1.0 in 1992.
A Japanese version of NetWare Lite named "NetWare Lite 1.1J" existed in 1992 for four platforms (DOS/V, Fujitsu FM-R, NEC PC-98/Epson PC and Toshiba J-3100) and was supported up to 1997. Updates were distributed by Novell as DOSV6.EXE, DOSV.EXE, TSBODI.LZH.
NetWare Lite 1.1 came bundled with NLSNIPES, a newer implementation of Novell's Snipes game.
Personal NetWare
Significantly reworked, the product line, codenamed "Smirnoff", became Personal NetWare 1.0 (PNW) in 1994. The ODI/VLM 16-bit DOS client portion of the drivers now supported individually loadable Virtual Loadable Modules (VLMs) for an improved flexibility and customizability, whereas the server portion could utilize Novell's DOS Protected Mode Services (DPMS), if loaded, to reduce its conventional memory footprint and run in extended memory and protected mode. The NetWare Lite disk cache NLCACHE had been reworked into NWCACHE, which was easier to set up and could utilize DPMS as well, ther |