source | text |
|---|---|
https://en.wikipedia.org/wiki/List%20of%20variational%20topics | This is a list of variational topics in mathematics and physics. See calculus of variations for a general introduction.
Action (physics)
Averaged Lagrangian
Brachistochrone curve
Calculus of variations
Catenoid
Cycloid
Dirichlet principle
Euler–Lagrange equation cf. Action (physics)
Fermat's principle
Functional (mathematics)
Functional derivative
Functional integral
Geodesic
Isoperimetry
Lagrangian
Lagrangian mechanics
Legendre transformation
Luke's variational principle
Minimal surface
Morse theory
Noether's theorem
Path integral formulation
Plateau's problem
Prime geodesic
Principle of least action
Soap bubble
Soap film
Tautochrone curve
Variations |
https://en.wikipedia.org/wiki/Norton%20Insight | Norton Insight whitelists files based on reputation. Norton-branded antivirus software then leverages the data to skip known files during virus scans. Symantec claims quicker scans and more accurate detection with the use of the technology.
Development
Insight was codenamed Mr. Clean. Its initial aim was to help users determine what programs from the Internet are safe to install. Mr. Clean would provide a risk assessment to discern between safe and malicious files. However, its goal was later changed to making virus scans more efficient; instead of scanning every file, known files are skipped, cutting scanning times.
Basic introduction & usage
Norton Community Watch, a voluntary and anonymous service, allows a user's Norton product to forward information to Symantec servers. Among the data collected are the processes running and their SHA256 values. A reappearing hash value and its corresponding file are whitelisted, and Norton Insight checks the processes on a user's computer against the whitelist. Matching processes are excluded from scanning.
When a process is "trusted", it has been deemed safe and is excluded from risk scanning. There are two trust levels: "standard" and "high". The third option is to disable Norton Insight. In standard trust, processes appearing on the majority of participants' computers are deemed safe. High trust, in addition, excludes digitally signed files from scanning.
Tamper protection
Norton analyzes the NTFS file system upon startup, and if unaccounted changes are found, trust values of the processes on the system are revoked.
In the case of a mistake, a revocation mechanism was implemented, where clients receive a list of revoked SHA256 values via LiveUpdate. If the client has a file matching a SHA256 and is currently trusting that file, all trust is revoked, and the file is once again scanned.
Norton File Insight was a feature released in Norton 2010 products.
Norton file/download insight
The Norton Download Insight feature, pr |
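The scan-skipping logic described above amounts to a hash lookup against a reputation whitelist. A minimal sketch in Python, assuming a toy in-memory whitelist; the hash values, file paths, and policy details are illustrative, not Symantec's actual data or interfaces:

```python
# Minimal sketch of reputation-based scan skipping, as described above.
# The whitelist contents are illustrative assumptions, not Symantec's data.
import hashlib

WHITELIST = {  # hypothetical SHA256 values of known-good files
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Hash a file in chunks, as one would for large binaries."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_scan(path: str) -> bool:
    """Skip files whose hash appears on the whitelist; scan everything else."""
    return sha256_of(path) not in WHITELIST
```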
https://en.wikipedia.org/wiki/Tungsten%20disilicide | Tungsten silicide (WSi2) is an inorganic compound, a silicide of tungsten. It is an electrically conductive ceramic material.
Chemistry
Tungsten silicide can react violently with substances such as strong acids, fluorine, oxidizers, and interhalogens.
Applications
It is used in microelectronics as a contact material, with resistivity 60–80 μΩ·cm; it forms at 1000 °C. It is often used as a shunt over polysilicon lines to increase their conductivity and increase signal speed. Tungsten silicide layers can be prepared by chemical vapor deposition, e.g. using monosilane or dichlorosilane with tungsten hexafluoride as source gases. The deposited film is non-stoichiometric, and requires annealing to convert it to the more conductive stoichiometric form. Tungsten silicide is a replacement for earlier tungsten films. Tungsten silicide is also used as a barrier layer between silicon and other metals, e.g. tungsten.
Tungsten silicide is also of value towards use in microelectromechanical systems, where it is mostly applied as thin films for fabrication of microscale circuits. For such purposes, films of tungsten silicide can be plasma-etched using e.g. nitrogen trifluoride gas.
WSi2 performs well in applications as an oxidation-resistant coating. In particular, similarly to molybdenum disilicide (MoSi2), the high emissivity of tungsten disilicide makes this material attractive for high-temperature radiative cooling, with implications for heat shields. |
https://en.wikipedia.org/wiki/Thermal%20de%20Broglie%20wavelength | In physics, the thermal de Broglie wavelength ($\lambda_{\mathrm{th}}$, sometimes also denoted by $\Lambda$) is roughly the average de Broglie wavelength of particles in an ideal gas at the specified temperature. We can take the average interparticle spacing in the gas to be approximately $(V/N)^{1/3}$, where $V$ is the volume and $N$ is the number of particles. When the thermal de Broglie wavelength is much smaller than the interparticle distance, the gas can be considered to be a classical or Maxwell–Boltzmann gas. On the other hand, when the thermal de Broglie wavelength is on the order of or larger than the interparticle distance, quantum effects will dominate and the gas must be treated as a Fermi gas or a Bose gas, depending on the nature of the gas particles. The critical temperature is the transition point between these two regimes, and at this critical temperature, the thermal wavelength will be approximately equal to the interparticle distance. That is, the quantum nature of the gas will be evident for
$$\frac{V}{N \lambda_{\mathrm{th}}^3} \le 1,$$
i.e., when the interparticle distance is less than the thermal de Broglie wavelength; in this case the gas will obey Bose–Einstein statistics or Fermi–Dirac statistics, whichever is appropriate. This is for example the case for electrons in a typical metal at T = 300 K, where the electron gas obeys Fermi–Dirac statistics, or in a Bose–Einstein condensate. On the other hand, for
$$\frac{V}{N \lambda_{\mathrm{th}}^3} \gg 1,$$
i.e., when the interparticle distance is much larger than the thermal de Broglie wavelength, the gas will obey Maxwell–Boltzmann statistics. Such is the case for molecular or atomic gases at room temperature, and for thermal neutrons produced by a neutron source.
Massive particles
For massive, non-interacting particles, the thermal de Broglie wavelength can be derived from the calculation of the partition function. Assuming a 1-dimensional box of length $L$, the partition function (using the energy states $E_n = h^2 n^2 / (8 m L^2)$ of the 1D particle in a box) is
$$Z = \sum_{n=1}^{\infty} e^{-E_n / k_B T} = \sum_{n=1}^{\infty} e^{-h^2 n^2 / (8 m L^2 k_B T)}.$$
Since the energy levels are extremely close together, we can approximate this sum as an integral:
$$Z \approx \int_0^{\infty} e^{-h^2 n^2 / (8 m L^2 k_B T)}\, dn = \frac{\sqrt{2 \pi m k_B T}}{h}\, L = \frac{L}{\lambda_{\mathrm{th}}}, \qquad \lambda_{\mathrm{th}} = \frac{h}{\sqrt{2 \pi m k_B T}}.$$
|
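As a numerical illustration of the classical-vs-quantum criterion above, the following sketch (not from the article) computes the thermal de Broglie wavelength and mean interparticle spacing for helium gas at room conditions; the helium example and script structure are illustrative assumptions:

```python
# Numerical check of the classical-vs-quantum criterion described above,
# assuming an ideal gas at atmospheric pressure.
import math

h = 6.62607015e-34    # Planck constant, J s
kB = 1.380649e-23     # Boltzmann constant, J/K

def thermal_wavelength(mass_kg: float, T: float) -> float:
    """Thermal de Broglie wavelength h / sqrt(2*pi*m*kB*T), in metres."""
    return h / math.sqrt(2 * math.pi * mass_kg * kB * T)

m_He = 6.646e-27                 # helium-4 atomic mass, kg
T = 300.0                        # K
n = 101325.0 / (kB * T)          # number density N/V from the ideal gas law
spacing = n ** (-1.0 / 3.0)      # average interparticle distance (V/N)^(1/3)
lam = thermal_wavelength(m_He, T)
print(f"lambda = {lam:.2e} m, spacing = {spacing:.2e} m")
# spacing >> lambda here, so the gas is safely classical (Maxwell-Boltzmann).
```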
https://en.wikipedia.org/wiki/Combinatorics%3A%20The%20Rota%20Way | Combinatorics: The Rota Way is a mathematics textbook on algebraic combinatorics, based on the lectures and lecture notes of Gian-Carlo Rota in his courses at the Massachusetts Institute of Technology. It was put into book form by Joseph P. S. Kung and Catherine Yan, two of Rota's students, and published in 2009 by the Cambridge University Press in their Cambridge Mathematical Library book series, listing Kung, Rota, and Yan as its authors (ten years posthumously in the case of Rota). The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries.
Topics
Combinatorics: The Rota Way has six chapters, densely packed with material: each could be "a basis for a course at the Ph.D. level". Chapter 1, "Sets, functions and relations", also includes material on partially ordered sets, lattice orders, entropy (formulated in terms of partitions of a set), and probability. The topics in Chapter 2, "Matching theory", as well as matchings in graphs, include incidence matrices, submodular set functions, independent matchings in matroids, the Birkhoff–von Neumann theorem on the Birkhoff polytope of doubly stochastic matrices, and the Gale–Ryser theorem on row and column sums of (0,1) matrices. Chapter 3 returns to partially ordered sets and lattices, including material on Möbius functions of incidence algebras, Sperner's theorem on antichains in power sets, special classes of lattices, valuation rings, and Dilworth's theorem on partitions into chains.
One of the things Rota became known for, in the 1970s, was the revival of the umbral calculus as a general technique for the formal manipulation of power series and generating functions, and this is the subject of Chapter 4. Other topics in this chapter include Sheffer sequences of polynomials, and the Riemann zeta function and its combinatorial interpretation. Chapter 5 concerns symmetric functions and Rota–Baxter algebras, including symmetric function |
https://en.wikipedia.org/wiki/Working%E2%80%93Hotelling%20procedure | In statistics, particularly regression analysis, the Working–Hotelling procedure, named after Holbrook Working and Harold Hotelling, is a method of simultaneous estimation in linear regression models. One of the first developments in simultaneous inference, it was devised by Working and Hotelling for the simple linear regression model in 1929. It provides a confidence region for multiple mean responses, that is, it gives the upper and lower bounds of more than one value of a dependent variable at several levels of the independent variables at a certain confidence level. The resulting confidence bands are known as the Working–Hotelling–Scheffé confidence bands.
Like the closely related Scheffé's method in the analysis of variance, which considers all possible contrasts, the Working–Hotelling procedure considers all possible values of the independent variables; that is, in a particular regression model, the probability that all the Working–Hotelling confidence intervals cover the true value of the mean response is the confidence coefficient. As such, when only a small subset of the possible values of the independent variable is considered, it is more conservative and yields wider intervals than competitors like the Bonferroni correction at the same level of confidence. It outperforms the Bonferroni correction as more values are considered.
Statement
Simple linear regression
Consider a simple linear regression model $Y = \beta_0 + \beta_1 x + \varepsilon$, where $Y$ is the response variable and $x$ the explanatory variable, and let $\hat{\beta}_0$ and $\hat{\beta}_1$ be the least-squares estimates of $\beta_0$ and $\beta_1$ respectively. Then the least-squares estimate of the mean response at the level $x = x_0$ is $\hat{y}_0 = \hat{\beta}_0 + \hat{\beta}_1 x_0$. It can then be shown, assuming that the errors independently and identically follow the normal distribution, that a $100(1 - \alpha)\%$ confidence interval of the mean response at a certain level of $x$ is as follows:
$$\hat{y}_0 \pm t_{\alpha/2,\, n-2} \sqrt{\mathrm{MSE} \left( \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_i (x_i - \bar{x})^2} \right)},$$
where $\mathrm{MSE}$ is the mean squared error and $t_{\alpha/2,\, n-2}$ denotes the upper $\alpha/2$ percentile of Student's t-distribution with $n - 2$ degrees of freedom.
However, as multiple mean res |
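A short sketch (not from the article) of how the Working–Hotelling–Scheffé band could be computed for simple linear regression, using the simultaneous multiplier W with W² = 2F(1−α; 2, n−2); the dataset is made up for illustration:

```python
# Sketch of the Working-Hotelling-Scheffe band under standard
# normal-error assumptions; the data below are fabricated.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
mse = np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2)

def wh_band(x0, alpha=0.05):
    """Simultaneous band: yhat +/- W * se, with W^2 = 2 F(1-alpha; 2, n-2)."""
    yhat = b0 + b1 * x0
    se = np.sqrt(mse * (1.0 / n
                        + (x0 - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)))
    W = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n - 2))
    return yhat - W * se, yhat + W * se

print(wh_band(3.5))
```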
https://en.wikipedia.org/wiki/MICdb | MICdb (Microsatellites database) is a database of non-redundant microsatellites from prokaryotic genomes.
See also
InSatDb
Microsatellite |
https://en.wikipedia.org/wiki/Leucotome | A leucotome or McKenzie leucotome is a surgical instrument used for performing leucotomies (also known as lobotomy) and other forms of psychosurgery.
Invented by Canadian neurosurgeon Dr. Kenneth G. McKenzie in the 1940s, the leucotome has a narrow shaft which is inserted into the brain through a hole in the skull, and then a plunger on the back of the leucotome is depressed to extend a wire loop or metal strip into the brain. The leucotome is then rotated, cutting a core of brain tissue. This type was used by the Nobel prize-winning Portuguese neurologist Egas Moniz.
Another, different, surgical instrument also called a leucotome was introduced by Walter Freeman for use in the transorbital lobotomy. Modeled after an ice-pick, it consisted simply of a pointed shaft. It was passed through the tear duct under the eyelid and against the top of the eyesocket. A mallet was used to drive the instrument through the thin layer of bone and into the brain along the plane of the bridge of the nose, to a depth of 5 cm. Due to incidents of breakage, a stronger but essentially identical instrument called an orbitoclast was later used.
Lobotomies were commonly performed from the 1930s to the 1960s, with a few as late as the 1980s in France.
See also
Orbitoclast
Lobotomy
Instruments used in general surgery
Notes
External links
A leucotome from the University of Manchester Medical School Museum
The Nobel Foundation page on prefrontal leukotomy
Neurosurgery
History of neuroscience
Surgical instruments
Lobotomy |
https://en.wikipedia.org/wiki/Isopeptide%20bond | An isopeptide bond is a type of amide bond formed between a carboxyl group of one amino acid and an amino group of another. An isopeptide bond is the linkage between the side chain amino or carboxyl group of one amino acid to the α-carboxyl, α-amino group, or the side chain of another amino acid. In a typical peptide bond, also known as eupeptide bond, the amide bond always forms between the α-carboxyl group of one amino acid and the α-amino group of the second amino acid. Isopeptide bonds are rarer than regular peptide bonds. Isopeptide bonds lead to branching in the primary sequence of a protein. Proteins formed from normal peptide bonds typically have a linear primary sequence.
Amide bonds, and thus isopeptide bonds, are stabilized by resonance (electron delocalization) between the carbonyl oxygen, the carbonyl carbon, and the nitrogen atom. The bond strength of an isopeptide bond is similar to that of a peptide due to the similar bonding type. The bond strength of a peptide bond is 2.3-3.6 kcal/mol.
Amino acids such as lysine, glutamic acid, glutamine, aspartic acid, and asparagine can form isopeptide bonds because they all contain an amino or carboxyl group on their side chain. For example, the formation of an isopeptide bond between the sidechains of lysine and glutamine is as follows:
Gln−(C=O)NH2 + Lys-NH3+ → Gln−(C=O)NH−Lys + NH4+
The ε-amino group of lysine can also react with the α-carboxyl group of any other amino acid as in the following reaction:
Ile-(C=O)O- + Lys-NH3+ → Ile-(C=O)NH-Lys + H2O
Isopeptide bond formation can be enzyme-catalyzed or occur spontaneously. The reaction between lysine and glutamine, as shown above, is catalyzed by a transglutaminase. Another example of enzyme-catalyzed isopeptide bond formation is the formation of the glutathione molecule. Glutathione, a tripeptide, contains a normal peptide bond (between cysteine and glycine) and an isopeptide bond (between glutamate and cysteine). The formation of the isopeptide bond |
https://en.wikipedia.org/wiki/Pelvic%20digit | A pelvic digit, pelvic finger, or pelvic rib is a rare congenital abnormality in humans, in which bone tissue develops in the soft tissue near the pelvis, resembling a rib or finger and often divided into one or more segments with pseudo-articulations. Pelvic digits are typically benign and asymptomatic, and are usually discovered incidentally. Approximately 41 cases have been reported.
The pelvic digit was first reported by D. Sullivan and W.S. Cornwell in 1974. Pelvic digits may be located at any level of the pelvis, the lower ribs, or even the anterior abdominal wall. It is theorized that pelvic digit anomalies arise during the mesenchymal stage of bone growth, within the first six weeks of embryogenesis. Their formation may result from a failure of the primordium of the coccyx to fuse to the vertebral column, leading to the independent development of a proto-rib structure.
See also
Supernumerary body part |
https://en.wikipedia.org/wiki/Temporal%20encroachment | Temporal encroachment is an action that affects the perception of time or that affects the ability to take action in the future. Temporal means related to the measurement or passing of time and encroachment is an intrusion, usually unwelcome, into the space of another.
The space that temporal encroachment refers to is temporal space, the temporal "space" or "territory" to which others attach significance.
There are various kinds of temporal encroachment:
Scheduling
This is when one group or person delays another person. A good example is a vice president making a lower-ranked employee wait in his outer office while he conducts business. Another good example is a valued worker who always shows up late to work but this is allowed due to his abilities. This kind of encroachment is very common in the workplace. Various cultures look at this differently—the Japanese are very punctual, while most Latin and Baltic cultures would be more relaxed with time, instead looking at social parameters.
Future 'space'
This refers to actions taken that influence events 'down the line', narrowing possible choices and alternatives. It is worth noting that the most common use of this phrase is found in ecology, where there is a large amount of concern about the effects of human encroachment upon animals and other wildlife. Encroachment can be a good or bad factor in the lives of the animals, but most commonly, it is bad.
It has been used by some Jewish thinkers to refer to changes in Zionism that have 'ripple effects' on Jews worldwide, and the effect this has had on global terrorism.
It has also been used in the legal system, where some legal thinkers believe early correction of poverty and other socioeconomic ills can 'narrow' the likelihood of future criminality.
Past 'space'
It can also reflect a sense that modern events and interpretations of history alter and change our perceptions of history. For example, the involvement of the Papacy in the formation of the structure o |
https://en.wikipedia.org/wiki/Micronekton | Micronekton are a group of organisms 2 to 20 cm in size which are able to swim independently of ocean currents. The word 'nekton' is derived from the Greek νήκτον (transliterated nekton), meaning "to swim", and was coined by Ernst Haeckel in 1890.
Overview
Micronekton organisms are ubiquitous in the world's oceans and can be divided into broad taxonomic groups. The distinction between micronekton and micro-, meso- and macro-zooplankton is based on size. Micronekton typically ranges in size from 2 to 20 cm, macro-zooplankton from 2 mm to 2 cm, meso-zooplankton from 0.2 to 2 mm and micro-zooplankton from 20 μm to 0.2 mm. Micronekton represents 3.8–11.8 billion tons of mesopelagic fishes worldwide, approximately 380 million tons of Antarctic krill in the Southern Ocean and a global estimated biomass of at least 55 million tons of a single group of ommastrephid squid. This diverse assemblage is distributed between the sea surface and approximately 1000 m depth (in the mesopelagic zone). Micronekton shows a diverse range of migration patterns, including diel vertical migration over several hundreds of metres from below 400 m (deeper layers) to the top 200 m (shallower layers) of the water column at dusk and inversely at dawn, reverse migration (organisms stay in the shallow layer during the day), mid-water migration (organisms stay in the intermediate layer, i.e. between 200 and 400 m) or non-migration (organisms stay in the deep layer at night and the shallow layer during the day). Micronekton plays a key role in the oceanic biological pump by transporting organic carbon from the euphotic zone to deeper parts of the oceans. It is also preyed upon by various predators such as tunas, billfishes, sharks, marine birds and marine mammals.
Taxonomic groups
Generally, the taxonomy of global existing micronekton is not yet complete due to the paucity of faunal surveys, net avoidance (organisms sensing the approach of the net and swimming out of its path) and escapement ( |
https://en.wikipedia.org/wiki/Absolute%20irreducibility | In mathematics, a multivariate polynomial defined over the rational numbers is absolutely irreducible if it is irreducible over the complex field. For example, $x^2 + y^2 - 1$ is absolutely irreducible, but while $x^2 + y^2$ is irreducible over the integers and the reals, it is reducible over the complex numbers as $x^2 + y^2 = (x + iy)(x - iy)$ and thus not absolutely irreducible.
More generally, a polynomial defined over a field K is absolutely irreducible if it is irreducible over every algebraic extension of K, and an affine algebraic set defined by equations with coefficients in a field K is absolutely irreducible if it is not the union of two algebraic sets defined by equations in an algebraically closed extension of K. In other words, an absolutely irreducible algebraic set is a synonym of an algebraic variety, which emphasizes that the coefficients of the defining equations may not belong to an algebraically closed field.
Absolutely irreducible is also applied, with the same meaning, to linear representations of algebraic groups.
In all cases, being absolutely irreducible is the same as being irreducible over the algebraic closure of the ground field.
Examples
A univariate polynomial of degree greater than or equal to 2 is never absolutely irreducible, due to the fundamental theorem of algebra.
The irreducible two-dimensional representation of the symmetric group S3 of order 6, originally defined over the field of rational numbers, is absolutely irreducible.
The representation of the circle group by rotations in the plane is irreducible (over the field of real numbers), but is not absolutely irreducible. After extending the field to complex numbers, it splits into two irreducible components. This is to be expected, since the circle group is commutative and it is known that all irreducible representations of commutative groups over an algebraically closed field are one-dimensional.
The real algebraic variety defined by the equation
$$x^2 + y^2 = 1$$
is absolutely irreducible. It is the ordinary circle over the reals a |
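The factorizations in the examples above can be checked with a computer algebra system; a brief sketch using SymPy (an illustrative tool choice, not referenced by the article):

```python
# Factoring over Q versus over Q(i) shows the loss of absolute
# irreducibility for x^2 + y^2, while x^2 + y^2 - 1 stays irreducible.
from sympy import symbols, factor, I

x, y = symbols("x y")

print(factor(x**2 + y**2))                   # x**2 + y**2 (irreducible over Q)
print(factor(x**2 + y**2, extension=I))      # (x - I*y)*(x + I*y)
print(factor(x**2 + y**2 - 1, extension=I))  # unchanged: absolutely irreducible
```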
https://en.wikipedia.org/wiki/DeepDream | DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately overprocessed images.
Google's program popularized the term (deep) "dreaming" to refer to the generation of images that produce desired activations in a trained deep network, and the term now refers to a collection of related approaches.
History
The DeepDream software originates in a deep convolutional network codenamed "Inception" after the film of the same name, which was developed for the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) in 2014; the DeepDream software itself was released in July 2015.
The dreaming idea and name became popular on the internet in 2015 thanks to Google's DeepDream program. The idea dates from early in the history of neural networks, and similar methods have been used to synthesize visual textures.
Related visualization ideas were developed (prior to Google's work) by several research groups.
After Google published their techniques and made their code open-source, a number of tools in the form of web services, mobile applications, and desktop software appeared on the market to enable users to transform their own photos.
Process
The software is designed to detect faces and other patterns in images, with the aim of automatically classifying images. However, once trained, the network can also be run in reverse, being asked to adjust the original image slightly so that a given output neuron (e.g. the one for faces or certain animals) yields a higher confidence score. This can be used for visualizations to understand the emergent structure of the neural network better, and is the basis for the DeepDream concept. This reversal procedure is never perfectly clear and unambiguous because it utilizes a one-to-many mapping process. However, after enough reiterations, even im |
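A minimal gradient-ascent sketch of the "running the network in reverse" idea described above, using PyTorch and a pretrained GoogLeNet as a stand-in for the Inception lineage; the layer choice, step size, and iteration count are illustrative assumptions, not Google's published DeepDream settings:

```python
# Gradient ascent on the input image to amplify an intermediate layer's
# activations; illustrative, not Google's original implementation.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def hook(module, inputs, output):
    activations["target"] = output

# Capture whatever an intermediate mixed layer responds to.
model.inception4c.register_forward_hook(hook)

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(50):
    optimizer.zero_grad()
    model(image)                            # forward pass fills the hook
    loss = -activations["target"].norm()    # ascend: maximise activation norm
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)                  # keep pixel values valid
```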
https://en.wikipedia.org/wiki/PRMT7 | Protein arginine methyltransferase 7 is a protein that in humans is encoded by the PRMT7 gene. Arginine methylation is an apparently irreversible protein modification catalyzed by arginine methyltransferases, such as PRMT7, using S-adenosylmethionine (AdoMet) as the methyl donor. Arginine methylation is implicated in signal transduction, RNA transport, and RNA splicing.
Model organisms
Model organisms have been used in the study of PRMT7 function. A conditional knockout mouse line, called Prmt7tm1a(EUCOMM)Wtsi was generated as part of the International Knockout Mouse Consortium program — a high-throughput mutagenesis project to generate and distribute animal models of disease to interested scientists.
Male and female animals underwent a standardized phenotypic screen to determine the effects of deletion. Twenty-five tests were carried out on mutant mice, and two significant abnormalities were observed. Fewer than expected homozygous mutant mice survived until weaning, and those that did survive displayed evidence of chromosomal instability in a micronucleus test. |
https://en.wikipedia.org/wiki/Zero-dimensional%20space | In mathematics, a zero-dimensional topological space (or nildimensional space) is a topological space that has dimension zero with respect to one of several inequivalent notions of assigning a dimension to a given topological space. A graphical illustration of a nildimensional space is a point.
Definition
Specifically:
A topological space is zero-dimensional with respect to the Lebesgue covering dimension if every open cover of the space has a refinement which is a cover by disjoint open sets.
A topological space is zero-dimensional with respect to the finite-to-finite covering dimension if every finite open cover of the space has a refinement that is a finite open cover such that any point in the space is contained in exactly one open set of this refinement.
A topological space is zero-dimensional with respect to the small inductive dimension if it has a base consisting of clopen sets.
The three notions above agree for separable, metrisable spaces.
Properties of spaces with small inductive dimension zero
A zero-dimensional Hausdorff space is necessarily totally disconnected, but the converse fails. However, a locally compact Hausdorff space is zero-dimensional if and only if it is totally disconnected; the non-trivial direction is that total disconnectedness implies zero-dimensionality.
Zero-dimensional Polish spaces are a particularly convenient setting for descriptive set theory. Examples of such spaces include the Cantor space and Baire space.
Hausdorff zero-dimensional spaces are precisely the subspaces of topological powers $2^I$, where $2 = \{0, 1\}$ is given the discrete topology. Such a space is sometimes called a Cantor cube. If $I$ is countably infinite, $2^I$ is the Cantor space.
Manifolds
All points of a zero-dimensional manifold are isolated.
In particular, the zero-dimensional hypersphere is a pair of points, and the zero-dimensional ball is a single point.
Notes |
https://en.wikipedia.org/wiki/Toxiferine | Toxiferine (C-toxiferine I) is a curare toxin. It is a bisindole alkaloid derived from Strychnos toxifera and a nicotinic acetylcholine receptor antagonist. This alkaloid is the main toxic component of Calabash curare, and one of the most toxic plant alkaloids known. The lethal dose (LD50) for mice has been determined as 10–60 µg/kg by intravenous administration.
It is a muscle relaxant that causes paralysis of skeletal muscle; recovery from a moderate dose takes approximately 2 hours, while a 20-fold paralytic dose produces about 8 hours of total paralysis. The paralysis can be antagonized by neostigmine. |
https://en.wikipedia.org/wiki/Minimum%20redundancy%20feature%20selection | Minimum redundancy feature selection is an algorithm frequently used in methods that accurately identify characteristics of genes and phenotypes and narrow down their relevance; it is usually described, in its pairing with relevant feature selection, as Minimum Redundancy Maximum Relevance (mRMR).
Feature selection, one of the basic problems in pattern recognition and machine learning, identifies subsets of data that are relevant to the parameters used and is normally called Maximum Relevance. These subsets often contain material which is relevant but redundant and mRMR attempts to address this problem by removing those redundant subsets. mRMR has a variety of applications in many areas such as cancer diagnosis and speech recognition.
Features can be selected in many different ways. One scheme is to select features that correlate strongest to the classification variable. This has been called maximum-relevance selection. Many heuristic algorithms can be used, such as the sequential forward, backward, or floating selections.
On the other hand, features can be selected to be mutually far away from each other while still having "high" correlation to the classification variable. This scheme, termed as Minimum Redundancy Maximum Relevance (mRMR) selection has been found to be more powerful than the maximum relevance selection.
As a special case, the "correlation" can be replaced by the statistical dependency between variables. Mutual information can be used to quantify the dependency. In this case, it is shown that mRMR is an approximation to maximizing the dependency between the joint distribution of the selected features and the classification variable.
Studies have tried different measures for redundancy and relevance measures. A recent study compared several measures within the context of biomedical images. |
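A greedy sketch of the mRMR criterion described above, scoring each candidate by relevance (mutual information with the class) minus mean redundancy (mutual information with already-selected features); the use of scikit-learn and the discrete-feature assumption are illustrative choices, not part of the original formulation:

```python
# Greedy mRMR selection sketch. Assumes discrete-valued features;
# continuous data would need discretization first.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr_select(X, y, k):
    """Return indices of k features chosen by the greedy mRMR criterion."""
    relevance = mutual_info_classif(X, y, discrete_features=True)
    selected = [int(np.argmax(relevance))]       # seed with max relevance
    candidates = [j for j in range(X.shape[1]) if j != selected[0]]
    while len(selected) < k and candidates:
        def score(j):
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                  for s in selected])
            return relevance[j] - redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```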
https://en.wikipedia.org/wiki/Convergence%20in%20measure | Convergence in measure is either of two distinct mathematical concepts, both of which generalize the concept of convergence in probability.
Definitions
Let $f, f_n\ (n \in \mathbb{N})$ be measurable functions on a measure space $(X, \Sigma, \mu)$. The sequence $f_n$ is said to converge globally in measure to $f$ if for every $\varepsilon > 0$,
$$\lim_{n \to \infty} \mu(\{x \in X : |f(x) - f_n(x)| \ge \varepsilon\}) = 0,$$
and to converge locally in measure to $f$ if for every $\varepsilon > 0$ and every $F \in \Sigma$ with $\mu(F) < \infty$,
$$\lim_{n \to \infty} \mu(\{x \in F : |f(x) - f_n(x)| \ge \varepsilon\}) = 0.$$
On a finite measure space, both notions are equivalent. Otherwise, convergence in measure can refer to either global convergence in measure or local convergence in measure, depending on the author.
Properties
Throughout, f and fn (n ∈ N) are measurable functions X → R.
Global convergence in measure implies local convergence in measure. The converse, however, is false; i.e., local convergence in measure is strictly weaker than global convergence in measure, in general.
If, however, μ(X) < ∞, or, more generally, if f and all the fn vanish outside some set of finite measure, then the distinction between local and global convergence in measure disappears.
If μ is σ-finite and (fn) converges (locally or globally) to f in measure, there is a subsequence converging to f almost everywhere. The assumption of σ-finiteness is not necessary in the case of global convergence in measure.
If μ is σ-finite, (fn) converges to f locally in measure if and only if every subsequence has in turn a subsequence that converges to f almost everywhere.
In particular, if (fn) converges to f almost everywhere, then (fn) converges to f locally in measure. The converse is false.
Fatou's lemma and the monotone convergence theorem hold if almost everywhere convergence is replaced by (local or global) convergence in measure.
If μ is σ-finite, Lebesgue's dominated convergence theorem also holds if almost everywhere convergence is replaced by (local or global) convergence in measure.
If X = [a,b] ⊆ R and μ is Lebesgue measure, there are sequences (gn) of step functions and (hn) of continuous functions converging globally in measure to f.
If f and fn |
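A standard worked example (a textbook illustration, not taken from the article) of why convergence in measure does not imply pointwise convergence, consistent with the subsequence statements above:

```latex
% The "typewriter" sequence on [0,1] with Lebesgue measure: for
% 2^k <= n < 2^{k+1}, let f_n be the indicator of a dyadic subinterval
% of length 2^{-k} that slides across [0,1] as n increases.
\[
  f_n = \mathbf{1}_{\left[(n - 2^k)\,2^{-k},\;(n - 2^k + 1)\,2^{-k}\right]},
  \qquad 2^k \le n < 2^{k+1}.
\]
% Then \mu(\{|f_n| \ge \varepsilon\}) = 2^{-k} \to 0, so f_n -> 0 globally
% in measure, yet f_n(x) = 1 for infinitely many n at every x, so f_n
% converges at no point. A subsequence such as f_{2^k} does converge to 0
% almost everywhere, as the subsequence property above guarantees.
```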
https://en.wikipedia.org/wiki/Ho%C3%A0ng%20Xu%C3%A2n%20S%C3%ADnh | Hoàng Xuân Sính (born September 8, 1933) is a Vietnamese mathematician, a student of Grothendieck, the first female mathematics professor in Vietnam, the founder of Thăng Long University, and a recipient of the Ordre des Palmes Académiques.
Early life and career
Hoàng was born in Cót, in the Từ Liêm District of Vietnam, one of seven children of fabric merchant Hoàng Thuc Tan. Her mother died when she was eight years old, and she was raised by a stepmother. She has also frequently been said to be the granddaughter of Vietnamese mathematician Hoàng Xuân Hãn. She completed a bachelor's degree in 1951 in Hanoi, studying English and French, and then traveled to Paris for a second baccalaureate in mathematics. She stayed in France to study for an agrégation at the University of Toulouse, which she completed in 1959, before returning to Vietnam to become a mathematics teacher at the Hanoi National University of Education. Hoàng became the first female mathematics professor in Vietnam and at that time was one of a very small number of mathematicians there with a foreign education.
Work with Grothendieck
The French mathematician and pacifist Alexander Grothendieck visited North Vietnam in late 1967, during the Vietnam War, and spent a month teaching mathematics to the Hanoi University mathematics department staff, including Hoàng, who took the notes for the lectures. Because of the war, Grothendieck's lectures were held away from Hanoi, first in the nearby countryside and later in Đại Từ. After Grothendieck returned to France, he continued to teach Hoàng as a correspondence student. She earned her doctorate under Grothendieck's supervision from Paris Diderot University in 1975, with a handwritten thesis. Her thesis research, on algebraic structures based on categorical groups but with a group law that holds only up to isomorphism, prefigured much of the modern theory of 2-groups.
Later accomplishments
When she was promoted to full professor Hoàng became the first female full professor in Viet |
https://en.wikipedia.org/wiki/Flatiron%20School | Flatiron School is an educational organization founded in 2012 by Adam Enbar and Avi Flombaum. The organization is based in New York City and teaches software engineering, computer programming, data science, product design, and cybersecurity engineering. In 2017, the company was sued for making false statements about the earning potential of its graduates. It was acquired by WeWork in 2017 and sold to Carrick Capital Partners in 2020.
History
Flatiron School was founded in 2012 by Adam Enbar and Avi Flombaum.
In 2017, the New York State Attorney General sued Flatiron School for operating without a license and making false statements about the earning potential of its graduates. The two parties reached a $375,000 settlement. Flatiron School claimed a 98.5% employment rate but this included apprentices and freelance workers, while the claimed average salary of $74,447 only included graduates in full-time employment.
In 2018, Yale University announced a collaboration with the Flatiron School during Yale's "Summer Session" — together, the institutions offered a Web Development Bootcamp for Summer 2019, which offered two Yale College credits for students.
The organization has made efforts to promote parity in tech, working with other companies to sponsor course scholarships for women, LGBTQ+ people, and members of underserved communities.
Takeovers and acquisitions
Flatiron School was acquired by WeWork, a collaborative workspace company, in October 2017. Following the acquisition, they launched Access Labs, a joint effort to make tech education accessible to low-income earners in New York. In August 2018, Flatiron School acquired Designation, a Chicago-based UX/UI design school, and expanded design courses elsewhere in December 2018.
Since being acquired by WeWork, the company has expanded, opening campuses in Atlanta, Austin, Chicago, Dallas, Denver, Houston, London, San Francisco, Seattle, and Washington, D.C.
In 2020, WeWork sold Flatiron School to Carrick C |
https://en.wikipedia.org/wiki/Nutty%20Narrows%20Bridge | The Nutty Narrows Bridge is a squirrel bridge in Longview, Washington, United States. It spans Olympia Way near R. A. Long Park in downtown Longview, comprising a catenary bridge with a center section resembling a suspension bridge. The bridge was built by local contractor Amos Peters in 1963 and named by a city councilwoman, in a likely nod to the Tacoma Narrows Bridge.
The bridge was proposed after local tenants noticed that several squirrels had died while crossing the street in search of nuts. The proposal garnered national attention and was quickly approved by the city council. It was installed on March 19, 1963, and saw use by squirrels the following day. The bridge was removed for repairs and renovations several times in the late 20th century and remains a symbol of Longview.
The Nutty Narrows was moved from its original location in 2005 following the discovery of termite damage in the oak trees holding up its structure. Its new location, in the middle of a traffic circle, was determined to be a distraction to motorists and prompted a second move in 2010. The bridge inspired the construction of several other squirrel crossings in Longview and the original Nutty Narrows was added to the National Register of Historic Places in 2014.
History
Conception and construction
Before the bridge was built, several squirrels were killed while crossing busy streets that separated large trees in R. A. Long Park from an area with plentiful nuts. Several tenants at a nearby office building proposed the construction of a dedicated bridge for squirrels as early as 1960 and received approval from the Longview City Council on February 28, 1963. The bridge was named the "Nutty Narrows" by councilwoman Bess LaRivere, likely as a reference to the Tacoma Narrows Bridge. The proposal brought international attention to Longview, as it was republished in newspapers, magazines, and reported on by radio stations.
The bridge was designed by local architects Robert Newhall and LeRoy |
https://en.wikipedia.org/wiki/Developmental%20Cell | Developmental Cell is a peer-reviewed scientific journal of cell and developmental biology. The journal was established in 2001 and is edited by Julie Sollier. It is published by Cell Press, an imprint of Elsevier, and its articles become open access after an embargo period of one year.
External links
Cell Press academic journals
Delayed open access journals
Developmental biology journals
Molecular and cellular biology journals
Academic journals established in 2001 |
https://en.wikipedia.org/wiki/Champernowne%20constant | In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933.
For base 10, the number is defined by concatenating the decimal representations of successive integers:
$$C_{10} = 0.12345678910111213141516\ldots$$
Champernowne constants can also be constructed in other bases similarly, for example:
$$C_2 = 0.11011100101110111\ldots_2$$
The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits: 12345678910111213141516…
More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order.
For instance, the binary Champernowne sequence in shortlex order is
0 1 00 01 10 11 000 001 010 011 100 101 110 111 0000 …
where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated.
Properties
A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. x is said to be normal in base b if its digits in base b follow a uniform distribution.
If we denote a digit string as [a0, a1, …], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], …, [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number.
Champernowne proved that $C_{10}$ is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that $C_b$ is normal in base $b$ for any base $b$. It is an open problem whether $C_b$ is normal in bases $b' \neq b$.
Kurt Mahler showed that the constant is transcendental.
The irrationality measure of $C_{10}$ is $\mu(C_{10}) = 10$, and more generally $\mu(C_b) = b$ for any base $b \ge 2$.
The Champernowne word is a disjunctive sequence.
Series
The definition of the Champernowne constant immediately gives rise to an infinite series representation invol |
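A short sketch (not from the article) that generates the digits of C10 by direct concatenation and tallies single-digit frequencies, illustrating the normality property discussed above:

```python
# Generate base-10 Champernowne digits by concatenation and check that
# single-digit frequencies approach 1/10, consistent with normality.
from collections import Counter
from itertools import count, islice

def champernowne_digits():
    """Yield the decimal digits 1,2,3,...,9,1,0,1,1,... of C10."""
    for n in count(1):
        for ch in str(n):
            yield int(ch)

digits = list(islice(champernowne_digits(), 100000))
freq = Counter(digits)
print({d: freq[d] / len(digits) for d in range(10)})
# Each relative frequency tends to 1/10 as more digits are taken.
```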
https://en.wikipedia.org/wiki/David%20Lubinski | David J. Lubinski is an American psychology professor known for his work in applied research, psychometrics, and individual differences. His work (with Camilla Benbow) has focussed on exceptionally able children: the nature of exceptional ability and the development of people with exceptional ability, in particular meeting the educational needs of gifted children to maximise their talent. He has published widely on the impact of extremely high ability on outputs such as publications, creative writing and art, and patents. This work disconfirmed the "threshold hypothesis", which suggested that a certain minimum of IQ might be needed but that higher IQ did not translate into greater productivity or creativity. Instead, his work shows that higher intelligence leads to higher outcomes, with no apparent threshold or drop-off in its impact.
Education
He earned his B.A. and PhD from the University of Minnesota in 1981 and 1987 respectively. He was a Postdoctoral Fellow at University of Illinois at Urbana-Champaign from 1987 to 1990 with Lloyd G. Humphreys. He taught at Iowa State University from 1990 to 1998 and took a position at Vanderbilt University in 1998, where he currently co-directs the Study of Mathematically Precocious Youth (SMPY), a longitudinal study of intellectual talent, with Camilla Benbow.
In 1994, he was one of 52 signatories on "Mainstream Science on Intelligence", an editorial written by Linda Gottfredson and published in The Wall Street Journal, which declared the consensus of the signing scholars on issues related to intelligence research following the publication of the book The Bell Curve.
In 1996, he won the American Psychological Association Distinguished Scientific Award for Early Career Contribution to Psychology (Applied Research/Psychometrics). In 2006, he received the Distinguished Scholar Award from the National Association for Gifted Children (NAGC). In addition to this, his work has earned several Mensa Awards for Research Excellence and |
https://en.wikipedia.org/wiki/CACNB2 | Voltage-dependent L-type calcium channel subunit beta-2 is a protein that in humans is encoded by the CACNB2 gene.
Clinical significance
Mutations in the CACNB2 gene are associated with Brugada syndrome, autism, attention deficit-hyperactivity disorder (ADHD), bipolar disorder, major depressive disorder, and schizophrenia.
See also
Voltage-dependent calcium channel |
https://en.wikipedia.org/wiki/List%20of%20online%20marketplaces | This is a non-exhaustive list of online marketplaces. |
https://en.wikipedia.org/wiki/The%20Discoverers | The Discoverers is a non-fiction historical work by Daniel Boorstin, published in 1983, and is the first in the Knowledge Trilogy, which also includes The Creators and The Seekers. The book, subtitled A History of Man's Search to Know His World and Himself, is a history of human discovery. Discovery in many forms is described: exploration, science, medicine, mathematics, and more theoretical ones, such as time, evolution, plate tectonics, and relativity. Boorstin praises the inventive human mind and its eternal quest to discover the universe and humanity's place in it.
In "A Personal Note to the Reader", Boorstin writes "My hero is Man, the Discoverer. The world we now view from the literate West ... had to be opened by countless Columbuses. In the deep recesses of the past, they remain anonymous." The structure of the book is topical and chronological, beginning in the prehistoric era in Babylon and Egypt.
Themes
The Discoverers (as well as The Creators and The Seekers) resonates with tales of individuals, their lives, beliefs and accomplishments. They form the building blocks of his tale and from them flow descriptions and commentary on historical events. In this respect he is like other historians (David McCullough, Paul Johnson, Louis Hartz and Richard Hofstadter, to name a few) who give prominence to the individual and the incremental approach to history. Thus, in the chapter "In Search of the Missing Link", he features Edward Tyson and his contributions in comparative anatomy. Tycho Brahe, the Danish astronomer, is the guiding light in "The Witness of the Naked Eye" and Isaac Newton merits an entire chapter ("God said, Let Newton Be!") devoted to his life and accomplishments.
The role of religion and culture is another recurring theme. Boorstin, a Reform Jew, has been described as a "secular, skeptical moderate Northeastern liberal of the New Deal rather than the New Left school." The purpose of religion (and God) was not personal salvation but establish |
https://en.wikipedia.org/wiki/Jacobian | In mathematics, a Jacobian, named for Carl Gustav Jacob Jacobi, may refer to:
Jacobian matrix and determinant
Jacobian elliptic functions
Jacobian variety
Intermediate Jacobian
Mathematical terminology |
https://en.wikipedia.org/wiki/Badlands%20%281984%20video%20game%29 | Badlands is a video game developed by Konami for the LaserDisc system in 1984 and published by both Konami and Centuri. It first debuted at the Amusement and Music Operators Association (AMOA) Show in October 1983 and was later released to the public in early 1984. In addition to its LaserDisc version, two versions of a Badlands video game cabinet exist, one produced by Konami, and one by Centuri.
Badlands follows a cowboy named Buck seeking vengeance on a gang of outlaws and its leader, Landolf, for the murder of his wife and children.
Gameplay
Badlands is a first-person shooter action-adventure video game set in a wild west fantasy world. The game's controls consist of a single large "shoot" button. Badlands' gameplay consists of animated cutscenes, requiring players to shoot and react to environmental hazards and enemies. The game uses a life system, granting the player three lives upon starting. Losing all lives ends the game. The aim of the game is to eliminate outlaws and claim their bounties.
Reception
In Japan, Game Machine listed Badlands on their September 15, 1984, issue as the second most-successful upright arcade unit of the month. |
https://en.wikipedia.org/wiki/129%20%28number%29 | 129 (one hundred [and] twenty-nine) is the natural number following 128 and preceding 130.
In mathematics
129 is the sum of the first ten prime numbers. It is the smallest number that can be expressed as a sum of three squares in four different ways: $11^2 + 2^2 + 2^2$, $10^2 + 5^2 + 2^2$, $8^2 + 8^2 + 1^2$, and $8^2 + 7^2 + 4^2$.
129 is the product of only two primes, 3 and 43, making 129 a semiprime. Since 3 and 43 are both Gaussian primes, this means that 129 is a Blum integer.
129 is a repdigit in base 6 (333).
129 is a happy number.
129 is a centered octahedral number.
In the military
Raytheon AGM-129 ACM (Advanced Cruise Missile) was a low observable, sub-sonic, jet-powered, air-launched cruise missile used by the United States Air Force
Soviet submarine K-129 (1960) was a Soviet Pacific Fleet nuclear submarine that sank in 1968
was a United States Navy Mission Buenaventura-class fleet oiler during World War II
was a Crosley-class high speed transport of the United States Navy
was the lead ship of her class of destroyer escort in the United States Navy
was a United States Navy Haskell-class attack transport during World War II
was a United States Navy Crater-class cargo ship during World War II
was a United States Navy Auk-class minesweeper for removing naval mines laid in the water
Agusta A129 Mangusta is an attack helicopter originally designed and produced by Italian company Agusta
The 129th Rescue Wing (129 RQW) is a unit of the California Air National Guard
In transportation
LZ 129 Hindenburg was a German zeppelin which went up in flames while landing on May 6, 1937
London Buses route 129 is a Transport for London contracted bus route in London
STS-129 was a Space Shuttle mission to the International Space Station, flown in November 2009 by the shuttle Atlantis.
In other fields
129 is also:
The year AD 129 or 129 BC
129 AH is a year in the Islamic calendar that corresponds to 746–747 CE
129 Antigone is a main belt asteroid
The atomic number of unbiennium, an element yet to be discovered
A |
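The arithmetic claims above are easy to verify mechanically; a brief sketch in Python (the use of SymPy for primes is an illustrative choice):

```python
# Verify that 129 is the sum of the first ten primes and has exactly
# four representations as a sum of three squares.
from sympy import prime

print(sum(prime(i) for i in range(1, 11)))  # 129

reps = {tuple(sorted((a, b, c), reverse=True))
        for a in range(12) for b in range(12) for c in range(12)
        if a*a + b*b + c*c == 129}
print(sorted(reps))  # [(8, 7, 4), (8, 8, 1), (10, 5, 2), (11, 2, 2)]
```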
https://en.wikipedia.org/wiki/Conference%20on%20Neural%20Information%20Processing%20Systems | The Conference and Workshop on Neural Information Processing Systems (abbreviated as NeurIPS and formerly NIPS) is a machine learning and computational neuroscience conference held every December. The conference is currently a double-track meeting (single-track until 2015) that includes invited talks as well as oral and poster presentations of refereed papers, followed by parallel-track workshops that up to 2013 were held at ski resorts.
History
The NeurIPS meeting was first proposed in 1986 at the annual invitation-only Snowbird Meeting on Neural Networks for Computing organized by The California Institute of Technology and Bell Laboratories. NeurIPS was designed as a complementary open interdisciplinary meeting for researchers exploring biological and artificial Neural Networks. Reflecting this multidisciplinary approach, NeurIPS began in 1987 with information theorist Ed Posner as the conference president and learning theorist Yaser Abu-Mostafa as program chairman. Research presented in the early NeurIPS meetings included a wide range of topics from efforts to solve purely engineering problems to the use of computer models as a tool for understanding biological nervous systems. Since then, the biological and artificial systems research streams have diverged, and recent NeurIPS proceedings have been dominated by papers on machine learning, artificial intelligence and statistics.
From 1987 until 2000 NeurIPS was held in Denver, United States. Since then, the conference was held in Vancouver, Canada (2001–2010), Granada, Spain (2011), and Lake Tahoe, United States (2012–2013). In 2014 and 2015, the conference was held in Montreal, Canada, in Barcelona, Spain in 2016, in Long Beach, United States in 2017, in Montreal, Canada in 2018 and Vancouver, Canada in 2019. Reflecting its origins at Snowbird, Utah, the meeting was accompanied by workshops organized at a nearby ski resort up until 2013, when it outgrew ski resorts.
The first NeurIPS Conference was spons |
https://en.wikipedia.org/wiki/Build%20the%20Earth | Build the Earth (BTE) is a project dedicated to creating a 1:1 scale model of Earth within the sandbox video game Minecraft.
History
Build The Earth was created by YouTuber PippenFTS in March 2020 as a collaborative effort to recreate Earth in the video game Minecraft. In a YouTube video, PippenFTS called for prospective participants to recreate man-made structures over a rudimentary model of Earth's terrain. A Discord server created to help coordinate the project attracted over a hundred thousand users by April 2020.
Minecraft developer Mojang Studios featured the project on their website on Earth Day 2020. In July 2020, YouTuber MrBeast released a video where he and 50 other people built his hometown of Raleigh, North Carolina within the project.
Software
The Build The Earth project primarily depends on two Minecraft modifications to function: Cubic Chunks and Terra++. Cubic Chunks removes Minecraft's limitation on building structures beyond a certain height. Terra++ uses information from geographic data services, such as OpenStreetMap, to automatically generate terrain to ease the building process. The project originally used the Terra 1-to-1 mod instead of Terra++. PippenFTS stated that "with the Cubic Chunks mod breaking Minecraft's vertical limitations, we can now experience the Earth in Minecraft, just as it is, with no downscaling of any kind." |
https://en.wikipedia.org/wiki/War%20Research%20Service | The War Research Service (WRS) was a civilian agency of the United States government established during World War II to pursue research relating to biological warfare. Established in May 1942 by Secretary of War Henry L. Stimson, the WRS was embedded in the Federal Security Agency, the federal agency that administered Social Security and other New Deal programs in the administration of President Franklin D. Roosevelt. Headed by George W. Merck, president of the Merck & Co. pharmaceutical firm, the WRS was headquartered at Fort Detrick, Maryland.
Being a civilian agency, the WRS was initially tasked with supervising the military Chemical Warfare Service's biological program.
However, the WRS was disbanded in 1944, and the weapons research continued under the exclusive oversight of the CWS. |
https://en.wikipedia.org/wiki/Jada%20Toys | Jada Toys, Inc. is an American manufacturer of collectible scale model cars, figures, radio controlled model vehicles, and dolls. It was founded in 1999 by Jack and May Li. Jada's products are predominantly aimed at the collectible market, and are available and popular at retail outlets worldwide.
The company has or has had license rights to market products from a wide range of entertainment companies and franchises as well as sports associations, including DC Comics, Disney, Marvel Comics, NASCAR, NBA, WWE, Fast & Furious, and Hello Kitty, among others.
History
Founded by Jack and May Li in 1999, Jada's first toy was a 1:24 scale die-cast 1953 Chevrolet tow truck, part of their Thunder Crusher line. Though the Chevrolet tow truck toy and other lines proved successful, the company remained in obscurity until the introduction of the urban-themed DUB City brand. Launched in collaboration with DUB Magazine in 2002, the line presents officially licensed vehicles with custom rims, lowered ride height and special in-car entertainment systems. Dubs, so named for their 20-inch or bigger wheels, are among the company's best sellers and the most visible in retail outlets.
The company launched the DUB City spinoff Chub City in 2005. Targeted at a younger generation of collectors, the line included heavily stylized vehicles and a story told through webisodes and comics. The human characters featured in the story inspired toys of their own. By 2007, the line had done over $12.5 million in sales and was featured in a Burger King kids meal promotion. In 2009, Jada sold the brand to Dentsu Entertainment, which, in conjunction with Fuel Entertainment and Nelvana, planned on launching a $15 million 52-episode animated series in late 2015. At the time of the sale, Chub City toys had sold over 20 million units.
Branching out of automotive licensing, in 2008 Jada Toys teamed up with Activision to release the Guitar Hero Air Guitar Rocker. The toy consists of a belt buckle, a portable |
https://en.wikipedia.org/wiki/Kre-O | KRE-O is a line of construction toys (similar to Lego and Mega Bloks) manufactured by South Korea-based Oxford and marketed by Hasbro. Kre-O was released in stores in Fall 2011. The name Kre-O comes from the Latin word creo, which means "I create".
Kre-O toys feature highly articulated humanoid figures called "Kreons". Kre-O blocks are compatible with Lego bricks and Lego minifigures, hence also compatible with Mega Bloks and other building block brands.
Toy lines
Kre-O Transformers is the first line of the Kre-O series. They were first shown in February 2011 at the American International Toy Fair trade show in New York. Transformers Kre-O figures include homages to their live action film, Timelines, Transformers: Prime and Beast Hunters sub-lines.
Kre-O G.I. Joe was released as the third Kre-O line in February 2013 as a Toys "R" Us exclusive. This collection is based primarily on Hasbro's G.I.Joe: A Real American Hero toy line, cartoons, and comic book series, but includes some Adventure Team Kreons as well.
Kre-O Star Trek was released as the fourth line of Kre-O sets in April 2013. The initial sets and Kreons are based on the 2009 reboot and its sequel Star Trek Into Darkness. A preview trailer was posted by Hasbro, first on Facebook, then later on YouTube. The trailer re-enacts the original "teaser trailer" for the first film, showing the construction of the U.S.S. Enterprise in Kre-O form and by Kreon construction workers. A press release made at the 2012 New York Toy Fair showed the completed Enterprise model (complete with flip-up "saucer" showing the ship's Bridge) and Kreons of Kirk, Spock and Sulu.
At ComiCon International 2013, Hasbro announced new brand lines for Dungeons & Dragons and Cityville Invasion plus additional building sets for Star Trek, GI Joe and Transformers brand lines.
Kre-O Cityville Invasion is the fifth line of Kre-O sets, based on the popular CityVille online game series. This line introduces "Sonic Motion Technology", which trigg |
https://en.wikipedia.org/wiki/Preferential%20looking | Preferential looking is an experimental method in developmental psychology used to gain insight into the young mind/brain. The method as used today was developed by the developmental psychologist Robert L. Fantz in the 1960s.
The Preferential Looking Technique
According to the American Psychological Association, the preferential looking technique is "an experimental method for assessing the perceptual capabilities of nonverbal individuals (e.g., human infants, nonhuman animals)". If the average infant looks longer at a novel stimulus compared to a familiar stimulus, this suggests that the infant can discriminate between the stimuli. This method has been used extensively in cognitive science and developmental psychology to assess the character of infants' perceptual systems, and, by extension, innate cognitive faculties. An investigator or examiner observes an infant's eye movements to determine which stimulus the infant fixates on.
Robert L. Fantz
Robert L. Fantz (1925–1981) was a developmental psychologist who launched several studies on infant perception, including the preferential looking paradigm. Fantz introduced this paradigm in 1961 while working at Case Western Reserve University. The preferential looking paradigm is used in studies of infants regarding cognitive development and categorization. Fantz's study showed that infants looked at patterned images longer than uniform images. He later built upon his study in 1964 to include habituation situations. These situations exhibited an infant's preference for new or unusual stimuli.
Summary of Findings
Conclusions have been drawn from preferential looking experiments about the knowledge that infants possess. For example, if infants discriminate between rule-following and rule-violating stimuli—say, by looking longer, on average, at the latter than the former—then it has sometimes been concluded that infants know the rule.
Here is an example: 100 infants are shown an object that appears to teleport, viol |
https://en.wikipedia.org/wiki/Alpha%20sheet | Alpha sheet (also known as alpha pleated sheet or polar pleated sheet) is an atypical secondary structure in proteins, first proposed by Linus Pauling and Robert Corey in 1951. The hydrogen bonding pattern in an alpha sheet is similar to that of a beta sheet, but the orientation of the carbonyl and amino groups in the peptide bond units is distinctive; in a single strand, all the carbonyl groups are oriented in the same direction on one side of the pleat, and all the amino groups are oriented in the same direction on the opposite side of the sheet. Thus the alpha sheet accumulates an inherent separation of electrostatic charge, with one edge of the sheet exposing negatively charged carbonyl groups and the opposite edge exposing positively charged amino groups. Unlike the alpha helix and beta sheet, the alpha sheet configuration does not require all component amino acid residues to lie within a single region of dihedral angles; instead, the alpha sheet contains residues of alternating dihedrals in the traditional right-handed (αR) and left-handed (αL) helical regions of Ramachandran space. Although the alpha sheet is only rarely observed in natural protein structures, it has been speculated to play a role in amyloid disease and it was found to be a stable form for amyloidogenic proteins in molecular dynamics simulations. Alpha sheets have also been observed in X-ray crystallography structures of designed peptides.
The regular formation of alpha-sheet by unfolded proteins inevitably involves many L-amino acid residues readily adopting the alphaL conformation. At first sight this appears to go against textbook chemistry, according to which, of the 20 amino acids, it is glycine that strongly favours this conformation. The conundrum is resolved by realizing that the alphaL region comprises two overlapping areas, here called γL and αL, which should be considered separately. It turns out that, while the γL conformation is adopted almost exclusively by glycine, the αL confo |
https://en.wikipedia.org/wiki/Urban%20wild | An urban wild is a remnant of a natural ecosystem found in the midst of an otherwise highly developed urban area.
Utility
Urban wilds, particularly those of several acres or more, are often intact ecological systems that can provide essential ecosystem functions such as filtering urban run-off, storing and slowing the flow of stormwater, ameliorating the warming effect of urban development, and generally benefiting local air quality.
Typically, urban wilds are home to native vegetation and animal life as well as some introduced species. Urban wilds are vital to species of migratory birds that have nested in a given area since prior to its urbanization.
Preservation
Without formal protection, urban wilds are vulnerable to development. However, achieving formal protection of a large urban wild can be difficult. Land tenure of a single ecological area can be complex, with multiple public and private entities owning adjacent properties.
Key strategies used in the preservation of urban wilds have included conservation restrictions that keep complex land tenure systems in place while protecting the entire landscape. Public/private partnerships have also been successful in protecting urban wilds.
The urban wilds prioritized by municipalities tend to be partial wetlands that perform a range of ecological services while contributing to the biological diversity of the region.
Passive parks
There is some discussion about whether natural areas that are not at an appropriate scale to perform significant ecosystem services should instead be categorized as passive parks as opposed to urban wilds. Smaller urban wilds are used for passive recreation and have less value to the city in terms of enhancing ecosystem function. |
https://en.wikipedia.org/wiki/Reutericyclin | Reutericyclin is a bacteriocin produced by the bacterium Lactobacillus reuteri that has potential use as a food preservative. Reutericyclin is a hydrophobic, negatively charged molecule with the molecular formula C20H31NO4. Reutericyclin disrupts the cell membrane of sensitive bacteria by acting as a proton ionophore. Reutericyclin has a broad spectrum of activity against Gram-positive bacteria, but has no effect on Gram-negative bacteria because the lipopolysaccharide (LPS) in the outer membrane of Gram-negative bacteria prevents access by hydrophobic compounds. |
https://en.wikipedia.org/wiki/National%20Center%20for%20Simulation | The National Center for Simulation (NCS) is an association of defense companies, government, academic, start-up companies, and industry members. NCS is located in the Central Florida Research Park, adjacent to Naval Support Activity Orlando, and the simulation headquarters of the Army, Navy, Marine Corps, Air Force, and the University of Central Florida in Orlando, Florida, USA.
The center is a catalyst for technology transfer from the military out to industry and for industry bringing innovative technologies into Team Orlando. It also supports the development, understanding, and advancement of simulation and defense technologies. Its many goals include improving defense readiness and helping other sectors of industry (cyber, education, energy, gaming, healthcare, transportation, and space) extend their knowledge and applications of simulation.
History
NCS was created in 1993 to support collaboration among the defense industry, government, and academia.
NCS is headquartered in the Central Florida Research Park in Orlando, Florida, which is home to the world's largest cluster for computer simulation and modeling and to more than 345 companies that are members of NCS. The ecosystem includes approximately $7 billion in procurement for modeling, simulation, and training companies, supported by the military simulation and training commands for the U.S. Army, the U.S. Navy, the U.S. Air Force and the U.S. Marine Corps. U.S. Coast Guard interests are handled by a liaison officer and staff embedded in the Navy's training systems organization.
Mr. George E. Cheros became President and CEO in July 2019. Dr. Neal Finkelstein became Chief Operating Officer in October 2019 (https://militarysimulation.training/technology/ncs-hires-former-army-research-lab-chief/).
Team Orlando
NCS is part of the "Team Orlando" partnership between military organizations, the modeling and simulation industry, and academic institutions working together to leverage resour |
https://en.wikipedia.org/wiki/Critical%20plane%20analysis | Critical plane analysis refers to the analysis of stresses or strains as they are experienced by a particular plane in a material, as well as the identification of which plane is likely to experience the most extreme damage. Critical plane analysis is widely used in engineering to account for the effects of cyclic, multiaxial load histories on the fatigue life of materials and structures. When a structure is under cyclic multiaxial loading, it is necessary to use multiaxial fatigue criteria that account for the multiaxial loading; if the loading is nonproportional, it is mandatory to use a proper multiaxial fatigue criterion. Criteria based on the critical plane method are the most effective.
For the plane stress case, the orientation of the plane may be specified by an angle in the plane, and the stresses and strains acting on this plane may be computed via Mohr's circle. For the general 3D case, the orientation may be specified via a unit normal vector of the plane, and the associated stresses and strains may be computed via a tensor coordinate transformation law.
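To make the 3D computation concrete, here is a minimal sketch (Python with NumPy; the stress tensor values and the candidate plane normal are arbitrary assumptions, not data from any particular analysis) that evaluates the traction on one plane and splits it into normal and shear parts. A critical plane search would simply repeat this over a grid of normals and keep the plane maximizing the chosen damage parameter.

```python
import numpy as np

# Arbitrary example Cauchy stress tensor (MPa), symmetric by construction.
sigma = np.array([[100.0, 30.0,  0.0],
                  [ 30.0, 50.0, 10.0],
                  [  0.0, 10.0, 20.0]])

# Unit normal of the candidate plane.
n = np.array([1.0, 1.0, 0.0])
n /= np.linalg.norm(n)

t = sigma @ n                            # traction vector (Cauchy's relation)
sigma_n = float(n @ t)                   # normal stress on the plane
tau = np.linalg.norm(t - sigma_n * n)    # shear stress magnitude on the plane

print(f"normal stress = {sigma_n:.1f} MPa, shear stress = {tau:.1f} MPa")
```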
The chief advantage of critical plane analysis over earlier approaches, such as the Sines rule or correlation against maximum principal stress or strain energy density, is the ability to account for damage on specific material planes. This means that cases involving multiple out-of-phase load inputs or crack closure can be treated with high accuracy. Additionally, critical plane analysis offers the flexibility to adapt to a wide range of materials. Critical plane models for both metals and polymers are widely used.
History
Modern procedures for critical plane analysis trace back to research published in 1973 in which M. W. Brown and K. J. Miller observed that fatigue life under multiaxial conditions is governed by the experience of the plane receiving the most damage, and that both tension and shear loads on the critical plane must be considered. |
https://en.wikipedia.org/wiki/Ground%20reaction%20force | In physics, and in particular in biomechanics, the ground reaction force (GRF) is the force exerted by the ground on a body in contact with it.
For example, a person standing motionless on the ground exerts a contact force on it (equal to the person's weight) and at the same time an equal and opposite ground reaction force is exerted by the ground on the person.
In the above example, the ground reaction force coincides with the notion of a normal force. However, in a more general case, the GRF will also have a component parallel to the ground, for example when the person is walking – a motion that requires the exchange of horizontal (frictional) forces with the ground.
The use of the word reaction derives from Newton's third law, which essentially states that if a force, called action, acts upon a body, then an equal and opposite force, called reaction, must act upon another body. The force exerted by the ground is conventionally referred to as the reaction, although, since the distinction between action and reaction is completely arbitrary, the expression ground action would be, in principle, equally acceptable.
The component of the GRF parallel to the surface is the frictional force. When slippage occurs the ratio of the magnitude of the frictional force to the normal force yields the coefficient of static friction.
GRF is often measured to evaluate force production in various groups. Athletes are one frequently studied group, since GRF helps evaluate a subject's ability to exert force and power. This can help establish baseline parameters when creating strength and conditioning regimens from rehabilitation and coaching standpoints. Plyometric jumps such as the drop-jump are often used to build greater power and force, which can lead to better performance on the playing field. When landing from a safe height in bilateral comparisons of GRF in relation to landing with the dominant foot first followed by the non-dominant limb, litera |
https://en.wikipedia.org/wiki/Information%20projection | In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is p* = arg min_(p ∈ P) D_KL(p ∥ q), where D_KL(p ∥ q) is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection p* is the "closest" distribution to q of all the distributions in P.
The I-projection is useful in setting up information geometry, notably because of the following inequality, valid when P is convex: D_KL(p ∥ q) ≥ D_KL(p ∥ p*) + D_KL(p* ∥ q) for all p ∈ P. This inequality can be interpreted as an information-geometric version of Pythagoras' triangle-inequality theorem, where KL divergence is viewed as squared distance in a Euclidean space.
It is worthwhile to note that since D_KL(p ∥ q) ≥ 0 and D_KL(p ∥ q) is continuous in p, if P is closed and non-empty, then there exists at least one minimizer to the optimization problem framed above. Furthermore, if P is convex, then the optimum distribution is unique.
The reverse I-projection, also known as moment projection or M-projection, is p* = arg min_(p ∈ P) D_KL(q ∥ p).
Since the KL divergence is not symmetric in its arguments, the I-projection and the M-projection will exhibit different behavior. For the I-projection, p(x) will typically under-estimate the support of q(x) and will lock onto one of its modes. This is due to p(x) = 0 whenever q(x) = 0, to make sure the KL divergence stays finite. For the M-projection, p(x) will typically over-estimate the support of q(x). This is due to p(x) > 0 whenever q(x) > 0, to make sure the KL divergence stays finite.
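The mode-seeking versus mass-covering contrast is easy to reproduce numerically. The sketch below (Python; the bimodal target, the parameter grid, and the Gaussian family are all illustrative assumptions) brute-forces both projections of a bimodal distribution onto single Gaussians:

```python
import numpy as np

x = np.linspace(-10, 10, 2001)

def gauss(mu, s):
    g = np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g / g.sum()                       # normalized discrete distribution

q = 0.5 * gauss(-4, 1) + 0.5 * gauss(4, 1)   # bimodal target

def kl(p, r):
    mask = p > 0
    if np.any(r[mask] == 0):
        return np.inf                        # KL blows up off r's support
    return float(np.sum(p[mask] * np.log(p[mask] / r[mask])))

# Candidate family P: single Gaussians on a coarse (mu, sigma) grid.
params = [(mu, s) for mu in np.linspace(-6, 6, 61) for s in np.linspace(0.5, 6, 56)]

i_proj = min(params, key=lambda th: kl(gauss(*th), q))  # arg min KL(p || q)
m_proj = min(params, key=lambda th: kl(q, gauss(*th)))  # arg min KL(q || p)

print("I-projection (locks onto one mode):", i_proj)
print("M-projection (covers both modes):  ", m_proj)
```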
The concept of information projection can be extended to arbitrary f-divergences and other divergences.
See also
Sanov's theorem |
https://en.wikipedia.org/wiki/Three-dimensional%20integrated%20circuit | A three-dimensional integrated circuit (3D IC) is a MOS (metal-oxide semiconductor) integrated circuit (IC) manufactured by stacking as many as 16 or more ICs and interconnecting them vertically using, for instance, through-silicon vias (TSVs) or Cu-Cu connections, so that they behave as a single device to achieve performance improvements at reduced power and smaller footprint than conventional two dimensional processes. The 3D IC is one of several 3D integration schemes that exploit the z-direction to achieve electrical performance benefits in microelectronics and nanoelectronics.
3D integrated circuits can be classified by their level of interconnect hierarchy at the global (package), intermediate (bond pad) and local (transistor) level. In general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs); 3D heterogeneous integration; 3D systems integration; and true monolithic 3D ICs.
International organizations such as the Jisso Technology Roadmap Committee (JIC) and the International Technology Roadmap for Semiconductors (ITRS) have worked to classify the various 3D integration technologies to further the establishment of standards and roadmaps of 3D integration. As of the 2010s, 3D ICs are widely used for NAND flash memory and in mobile devices.
Types
3D ICs vs. 3D Packaging
3D packaging refers to 3D integration schemes that rely on traditional interconnection methods such as wire bonding and flip chip to achieve vertical stacking. 3D packaging can be divided into 3D system in package (3D SiP) and 3D wafer level package (3D WLP). 3D SiPs that have been in mainstream manufacturing for some time and have a well-established infrastructure include stacked memory dies interconnected with wire bonds and package on package (PoP) configurations interconnected with wire bonds or flip chip technology. PoP is used for vertically integrating dispa |
https://en.wikipedia.org/wiki/Circle%20packing | In geometry, circle packing is the study of the arrangement of circles (of equal or varying sizes) on a given surface such that no overlapping occurs and so that no circle can be enlarged without creating an overlap. The associated packing density, η, of an arrangement is the proportion of the surface covered by the circles. Generalisations can be made to higher dimensions – this is called sphere packing, which usually deals only with identical spheres.
The branch of mathematics generally known as "circle packing" is concerned with the geometry and combinatorics of packings of arbitrarily-sized circles: these give rise to discrete analogs of conformal mapping, Riemann surfaces and the like.
Densest packing
In the two-dimensional Euclidean plane, Joseph Louis Lagrange proved in 1773 that the highest-density lattice packing of circles is the hexagonal packing arrangement, in which the centres of the circles are arranged in a hexagonal lattice (staggered rows, like a honeycomb), and each circle is surrounded by six other circles. For circles of diameter D and hexagons of side length D/√3, the hexagon area and the circle area are, respectively, A_hex = (√3/2)D² and A_circ = (π/4)D². Each hexagon contains exactly one circle's worth of area, so the area covered within each hexagon by circles is (π/4)D². Finally, the packing density is η = A_circ/A_hex = π/(2√3) = π/√12 ≈ 0.9069.
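A quick Monte Carlo sanity check of this figure is straightforward; the sketch below (Python; the sample size is an arbitrary assumption) estimates the covered fraction of a fundamental rectangle of the hexagonal lattice of unit-diameter circles:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fundamental rectangle [0,1] x [0,sqrt(3)] of the hexagonal lattice of
# unit-diameter circles; only these five centres can cover points inside it.
h = np.sqrt(3.0)
centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, h], [1.0, h], [0.5, h / 2]])

pts = rng.random((1_000_000, 2)) * [1.0, h]
d2 = ((pts[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
covered = (d2.min(axis=1) <= 0.25).mean()    # radius 1/2, so r**2 = 0.25

print(covered, np.pi / (2 * np.sqrt(3)))     # both ~0.9069
```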
In 1890, Axel Thue published a proof that this same density is optimal among all packings, not just lattice packings, but his proof was considered by some to be incomplete. The first rigorous proof is attributed to László Fejes Tóth in 1942.
While the circle has a relatively low maximum packing density, it does not have the lowest possible, even among centrally-symmetric convex shapes: the smoothed octagon has a packing density of about 0.902414, the smallest known for centrally-symmetric convex shapes and conjectured to be the smallest possible. (Packing densities of concave shapes such as star polygons can be arbitrarily small.)
Other packings
At the other extreme, Böröczky demonstrated that arbitrarily low density arrangements of rigidly pa |
https://en.wikipedia.org/wiki/SDS%20930 | The SDS 930 was a commercial 24-bit computer using bipolar junction transistors sold by Scientific Data Systems.
It was announced in December 1963, with first installations in June 1964.
Description
An SDS 930 system consists of at least three standard cabinets. It is composed of an arithmetic and logic unit, at least 8,192 words (24-bit plus a simple parity bit) of magnetic-core memory, and the I/O unit. Two's complement integer arithmetic is used. The machine has integer multiply and divide, but no floating-point hardware. An optional correlation and filtering unit (CFE) can be added, which is capable of very fast floating-point multiply-add operations (primarily intended for digital signal processing applications).
A free-standing console is also provided, which includes binary displays of the machine's registers and switches to boot and debug programs. User input is by a Teletype Model 35 ASR unit and a high-speed paper-tape reader (300 cps). Most systems include at least two magnetic-tape drives, operating at up to 75 in/s at 800 bpi. The normal variety of peripherals is also available, including magnetic-drum units, card readers and punches, and an extensive set of analog-digital/digital-analog conversion devices. A (vector mode) graphic display unit is also available, but it does not include a means of keyboard input.
The SDS 930 is a typical small- to medium-scale scientific computer of the 1960s. Speed is good for its cost, but with an integer add time of 3.5 microseconds, it is not in the same league as the scientific workhorses of the day (the CDC 6600, for example). A well equipped 930 can easily exceed 10 cabinets and require a climate-controlled room. The price of such a system in 1966 would be in the neighborhood of $500K.
Programming languages available include FORTRAN II, ALGOL 60, and the assembly language known as Meta-Symbol. The FORTRAN system is very compact, having been designed and implemented by Digitek for SDS to co |
https://en.wikipedia.org/wiki/Homoeriodictyol | Homoeriodictyol is a bitter-masking flavanone extracted from Yerba Santa (Eriodictyon californicum) a plant growing in America.
Homoeriodictyol (3'-methoxy-4',5,7-trihydroxyflavanone) is one of four flavanones identified by Symrise in this plant as eliciting a taste-modifying property, alongside homoeriodictyol sodium salt, eriodictyol and sterubin. The sodium salt elicited the most potent bitter-masking activity, reducing the bitterness of salicin, amarogentin, paracetamol and quinine by 10 to 40%. However, no bitter-masking activity was detected with bitter linoleic acid emulsions. According to Symrise's scientists, homoeriodictyol sodium salt appears to be a taste modifier with large potential in food applications and pharmaceuticals.
An investigation of structural relatives of eriodictyol and homoeriodictyol found 2,4-dihydroxybenzoic acid vanillylamide to elicit bitter-masking activity. At 0.1 g/L, this vanillin derivative was able to reduce the bitterness of a 0.5 g/L caffeine solution by about 30%. |
https://en.wikipedia.org/wiki/Longevity%20quotient | Longevity Quotient (LQ) is a simplified measure to enable normalized comparisons of various species' longevity. It shares some similarity with measures such as Intelligence Quotient. It originated with Steven N. Austad and Kathleen E. Fischer's 1991 paper on mammalian aging.
LQ was originally defined as the ratio of actual lifespan to the predicted lifespan obtained from the non-flying eutherian (NFE) regression relating observed lifespan to body mass. This followed the work of John Prothero and Klaus Jurgens, who looked strictly at relating longevity and body mass. Austad spells out that "Excluding bats and marsupials mean LQ is 1.0 by definition"
Aging and longevity researchers utilize LQ alongside additional metrics such as maximum species life span (MLSP). Rochelle Buffenstein considers MLSP an important species aging characteristic that can vary over a factor of 40,000 throughout the animal kingdom and is related to body size. Buffenstein identifies the longevity quotient as the ratio of actual MLSP to that predicted by body mass.
Recent LQ-based research identified some bats as exceptionally long-lived. Myotis brandtii is estimated to have an LQ of 8.
Common measures in aging and longevity research include the life-span variables mass, maximum longevity, predicted MLSP, longevity quotient (Fischer–Austad formalism), longevity quotient (Prothero–Jurgens formalism), and lifetime energy expenditure (LEE, normalized in kilocalories/gram).
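As a minimal sketch of the computation (Python): the allometric coefficients below are values commonly cited for the non-flying eutherian regression, and the species masses and lifespans are rough illustrative figures; treat all of them as assumptions rather than authoritative data.

```python
# Longevity quotient: observed maximum lifespan / lifespan predicted from mass.
# Assumed NFE allometry t_pred = a * M**b, with M in kg and t in years.
A, B = 10.67, 0.189

def longevity_quotient(observed_mlsp_years, mass_kg):
    predicted = A * mass_kg ** B
    return observed_mlsp_years / predicted

print(f"human LQ ~ {longevity_quotient(100, 70):.1f}")    # ~4
print(f"bat   LQ ~ {longevity_quotient(41, 0.007):.1f}")  # Myotis brandtii; published estimates vary
print(f"mouse LQ ~ {longevity_quotient(4, 0.03):.1f}")    # ~0.7
```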
Theories of Longevity and LQ
Buffenstein describes the evolutionary theory of aging as a nonadaptive result of the declining power of natural selection, which allows harmful genetic mutations to prevail; this suggests that species living underground would have long life spans. Using LQ measures, it appears that only the social subterranean species have high LQs. Additional discussions of longevity and MLSP abound.
Comparative LQ
See also
Longevity
Siber |
https://en.wikipedia.org/wiki/David%20Syme | David Syme (2 October 1827 – 14 February 1908) was a Scottish-Australian newspaper proprietor of The Age and regarded as "the father of protection in Australia" who had immense influence in the Government of Victoria. His first biographer, Ambrose Pratt, declared Syme "could hate as few men can [and] loved power as few men ever loved it".
Early life and family
Syme was born at North Berwick in Scotland, the youngest of the seven children and fourth son of George Alexander Syme (18?–1845), a parish schoolmaster. George's wife, David's mother, was Jean née Mitchell. George Syme was a radical in church and state; his income was comfortable yet moderate, but it was stretched to provide for his large family and send three of his sons to universities (which he successfully did, while providing David with a relentlessly demanding education himself). David Syme's childhood was one of study with little companionship with other boys of his own age. George Syme was not physically unkind to his sons, but Syme would write later: "It was difficult to understand my father's attitude to we boys. He had naturally a kind disposition; he was a devoted husband and no-one ever asked him for help that he did not freely give … but his affection for us never found expression in words".
Syme married Annabella Garnett-Johnson, of the Lancashire Garnett family of Waddow Hall, Clitheroe, England. Annabella was connected through her Garnett relations to William Garnett.
David Syme was 17 years old when his father died and he continued his classical studies with some doubt to his future. He had thoughts of qualifying for the ministry but revolted from the Calvinistic teaching of the day; his brothers George and Ebenezer had renounced the Church of Scotland.
Syme studied under James Morison at Kilmarnock for two years, attended some classes at Heidelberg and returned to Scotland obtaining a position about 1850 as a proofreader's assistant on a Glasgow newspaper. With low pay and little prospect |
https://en.wikipedia.org/wiki/Geodesic%20bicombing | In metric geometry, a geodesic bicombing distinguishes a class of geodesics of a metric space. The study of metric spaces with distinguished geodesics traces back to the work of the mathematician Herbert Busemann. The convention to call a collection of paths of a metric space bicombing is due to William Thurston. By imposing a weak global non-positive curvature condition on a geodesic bicombing several results from the theory of CAT(0) spaces and Banach space theory may be recovered in a more general setting.
Definition
Let (X, d) be a metric space. A map σ: X × X × [0,1] → X is a geodesic bicombing if for all points x, y ∈ X the map σ_xy := σ(x, y, ·) is a constant speed metric geodesic from x to y, that is, σ_xy(0) = x, σ_xy(1) = y, and d(σ_xy(s), σ_xy(t)) = |s − t| d(x, y) for all real numbers s, t ∈ [0,1].
Different classes of geodesic bicombings
A geodesic bicombing is:
reversible if σ(x, y, t) = σ(y, x, 1 − t) for all x, y ∈ X and t ∈ [0,1].
consistent if σ(p, q, λ) = σ_xy((1 − λ)s + λt) whenever x, y ∈ X, 0 ≤ s ≤ t ≤ 1, λ ∈ [0,1], p := σ_xy(s), and q := σ_xy(t).
conical if d(σ(x, y, t), σ(x′, y′, t)) ≤ (1 − t) d(x, x′) + t d(y, y′) for all x, y, x′, y′ ∈ X and t ∈ [0,1].
convex if t ↦ d(σ(x, y, t), σ(x′, y′, t)) is a convex function on [0,1] for all x, y, x′, y′ ∈ X.
Examples
Examples of metric spaces with a conical geodesic bicombing include:
Banach spaces (see the sketch after this list).
CAT(0) spaces.
injective metric spaces.
the spaces (P₁(X), W₁), where W₁ is the first Wasserstein distance.
any ultralimit or 1-Lipschitz retraction of the above.
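The canonical example for the first item of this list is linear interpolation: in a Banach space, σ(x, y, t) = (1 − t)x + ty defines a geodesic bicombing that is reversible, consistent, and conical. A minimal numerical sketch (Python; random vectors in R³ with the Euclidean norm, an illustrative assumption) checks the conical inequality on random samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigma(x, y, t):
    return (1 - t) * x + t * y          # linear bicombing of a normed space

def d(u, v):
    return np.linalg.norm(u - v)

# Conical inequality: d(sigma(x,y,t), sigma(x2,y2,t)) <= (1-t) d(x,x2) + t d(y,y2)
for _ in range(10_000):
    x, y, x2, y2 = rng.normal(size=(4, 3))
    t = rng.random()
    lhs = d(sigma(x, y, t), sigma(x2, y2, t))
    rhs = (1 - t) * d(x, x2) + t * d(y, y2)
    assert lhs <= rhs + 1e-12           # follows from the triangle inequality

print("conical inequality holds on all sampled cases")
```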
Properties
Every consistent conical geodesic bicombing is convex.
Every convex geodesic bicombing is conical, but the reverse implication does not hold in general.
Every proper metric space with a conical geodesic bicombing admits a convex geodesic bicombing.
Every complete metric space with a conical geodesic bicombing admits a reversible conical geodesic bicombing. |
https://en.wikipedia.org/wiki/Signal-to-noise%20ratio | Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise.
SNR is an important parameter that affects the performance and quality of systems that process or transmit signals, such as communication systems, audio systems, radar systems, imaging systems, and data acquisition systems. A high SNR means that the signal is clear and easy to detect or interpret, while a low SNR means that the signal is corrupted or obscured by noise and may be difficult to distinguish or recover. SNR can be improved by various methods, such as increasing the signal strength, reducing the noise level, filtering out unwanted noise, or using error correction techniques.
SNR also determines the maximum possible amount of data that can be transmitted reliably over a given channel, which depends on its bandwidth and SNR. This relationship is described by the Shannon–Hartley theorem, which is a fundamental law of information theory.
SNR can be calculated using different formulas depending on how the signal and noise are measured and defined. The most common way to express SNR is in decibels, which is a logarithmic scale that makes it easier to compare large or small values. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application.
Definition
Signal-to-noise ratio is defined as the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): SNR = P_signal / P_noise, where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth.
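A minimal sketch of estimating this ratio from sampled data (Python with NumPy; the sine-plus-noise example is a synthetic assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000, endpoint=False)

signal = np.sin(2 * np.pi * 50 * t)        # unit-amplitude sine: power 0.5
noise = 0.1 * rng.standard_normal(t.size)  # white noise: power ~0.01

p_signal = np.mean(signal ** 2)            # average signal power
p_noise = np.mean(noise ** 2)              # average noise power

snr = p_signal / p_noise
snr_db = 10 * np.log10(snr)                # the common decibel form

print(f"SNR = {snr:.1f} ({snr_db:.1f} dB)")  # ~50, i.e. ~17 dB
```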
Depending on whether the signal is a constant (s) or a random variable (S), the signal-to-noise ratio for random noise becom |
https://en.wikipedia.org/wiki/List%20of%20trifoliate%20plants | This is an incomplete list of plants with trifoliate leaves. Trifoliate leaves (also known as trifoliolate or ternate leaves) are a leaf shape characterized by a leaf divided into three leaflets. Species which are known to be trifoliate are listed here. Genera which are characteristically trifoliate are also listed, with species underneath. Genera which are generally not trifoliate are not listed; only the trifoliate species are. Entries are currently listed in alphabetical order, but in the future it may be desirable to list them by families. It may also be desirable to include common names and references.
A
Acer cissifolium
Acer griseum
Acer mandshuricum
Acer maximowiczianum
Acer triflorum
Adenocarpus spp.
Aegle marmelos
Amphicarpaea spp.
Anagyris spp.
Anthyllis spp.
Aphyllodium spp.
Aquilegia grata
Aquilegia vulgaris
Argyrolobium spp.
Aspalathus spp.
B
Baptisia spp.
Baptisia australis
Bituminaria spp.
Bituminaria bituminosa
Bolusafra spp.
Bolusia spp.
Burtonia spp.
Butea spp.
C
Cajanus spp.
Calopogonium spp.
Canavalia spp.
Carmichaelia spp.
Christia spp.
Clematis aristata
Cleome serrulata
Clitoria spp.
Collaea spp.
Cologania spp.
Crotalaria spp.
Cyclopia spp.
Commiphora wightii
Cyamopsis spp.
Cytisus spp.
Cytisus scoparius
D
Desmodium spp.
Derris spp.
Dicentra cucullaria
Dichilus spp.
Dioclea spp.
Dolichos spp.
Dorycnium spp.
Dumasia spp.
E
Eleiotis spp.
Eriosema spp.
Erythrina spp.
Esenbeckia runyonii
F
Fagonia spp.
Fagonia arabica
Fagonia laevis
Flemingia spp.
Forsythia
Fragaria chiloensis
G
Galactia spp.
Genista spp.
Genista monspessulana
Genista stenopetala
Glycine spp.
Goodia spp.
H–K
Helicotropis spp.
Hymenocarpos spp.
Hypocalyptus spp.
Indigofera spp.
Kennedia spp.
L
Lablab spp.
Laburnum
Lebeckia spp.
Lespedeza spp.
Lotononis spp.
Lotus spp.
M
Martiodendron spp.
Medicago spp.
Medicago truncatula
Melilotus spp.
Menyanthes trifoliata
Mucuna spp.
N
Nandina domestica
O
Ononis spp.
Otoptera spp.
Oxalis spp.
Oxalis corniculata
Oxalis tuberosa
P
Pachyrhizus spp. |
https://en.wikipedia.org/wiki/Social%20software%20engineering | Social software engineering (SSE) is a branch of software engineering that is concerned with the social aspects of software development and the developed software.
SSE focuses on the socialness of both software engineering and developed software. On the one hand, the consideration of social factors in software engineering activities, processes and CASE tools is deemed to be useful to improve the quality of both development process and produced software. Examples include the role of situational awareness and multi-cultural factors in collaborative software development. On the other hand, the dynamicity of the social contexts in which software could operate (e.g., in a cloud environment) calls for engineering social adaptability as a runtime iterative activity. Examples include approaches which enable software to gather users' quality feedback and use it to adapt autonomously or semi-autonomously.
SSE studies and builds socially-oriented tools to support collaboration and knowledge sharing in software engineering. SSE also investigates the adaptability of software to the dynamic social contexts in which it could operate and the involvement of clients and end-users in shaping software adaptation decisions at runtime. Social context includes norms, culture, roles and responsibilities, stakeholder's goals and interdependencies, end-users perception of the quality and appropriateness of each software behaviour, etc.
The participants of the 1st International Workshop on Social Software Engineering and Applications (SoSEA 2008) proposed the following characterization:
Community-centered: Software is produced and consumed by and/or for a community rather than focusing on individuals
Collaboration/collectiveness: Exploiting the collaborative and collective capacity of human beings
Companionship/relationship: Making explicit the various associations among people
Human/social activities: Software is designed consciously to support human activities and to address social p |
https://en.wikipedia.org/wiki/Signals%20intelligence%20by%20alliances%2C%20nations%20and%20industries | Signals intelligence by alliances, nations and industries comprises signals intelligence (SIGINT) gathering activities by national and non-national entities; these entities are commonly responsible for communications security (COMSEC) as well.
Many US and allied SIGINT activities are considered Sensitive Compartmented Information (SCI) and carry the special security marking "HANDLE THROUGH COMINT CHANNELS ONLY", which is abbreviated as the suffix CCO to the security classification. SECRET SIGINT material would be marked (S-CCO). For exceptionally sensitive TOP SECRET material, there may be an additional codeword, such as (TS-CCO-RABID).
UKUSA Agreement
SIGINT and security procedures are closely coordinated under what is called the UKUSA Community, which includes Australia, Canada, New Zealand, the United Kingdom, and the United States cooperating in a major SIGINT activity codenamed ECHELON. Of the UKUSA partners, NSA is the US element, Britain's is the Government Communications Headquarters (GCHQ), Canada's is the Communications Security Establishment and a few other small groups, Australia's is the Australian Signals Directorate, and New Zealand's is the Government Communications Security Bureau.
ECHELON
It is fair to say that there is something called ECHELON, and it is very large. There is no unclassified definition of what it really does, and there are conflicting unofficial reports on its capabilities and operations. Duncan Campbell is the source of much information, but many of his claims have been challenged by independent sources. His report dates from 2000, and his claim that NSA has published no details of its operations was no longer the case by 2007. Another extensive report is that of the European Parliament in 2001. Campbell himself refined his definitions a year later. His "strict" definition of ECHELON is that it is a satellite interception component of the partners of the UKUSA Agreement. Even among the UKUSA members, according to Campbell, there may be |
https://en.wikipedia.org/wiki/Pathologists%27%20assistant | A pathologists’ assistant (PA) is a physician extender whose expertise lies in gross examination of surgical specimens as well as performing forensic, medicolegal, and hospital autopsies.
General overview
PAs work under the indirect or direct supervision of a board certified anatomical pathologist, who ultimately renders a diagnosis based on the PA's detailed gross examination and/or tissue submission for microscopic evaluation. Requirements to become a pathologists' assistant include graduation from a National Accrediting Agency for Clinical Laboratory Sciences (NAACLS) accredited education program and successfully passing the American Society for Clinical Pathology (ASCP) certification exam, which is not legally required in most states. The credentialing is a certification from the ASCP. Some states such as Nevada and New York require a license. All pathologists' assistants are allied health workers who need to be CLIA 88 compliant to perform these high complexity tasks with indirect/direct supervision. With ongoing changes in health care, a growing elderly population, and a decreasing number of pathology residents, the PA is in high demand due to their high level of training and contribution to the overall efficiency of the pathology laboratory.
In addition to the major responsibilities outlined above, a pathologists' assistant may also perform the following tasks (for a complete list, refer to Article III, Section B of the AAPA Bylaws):
Frozen sectioning for intraoperative consultation
Preparing tissue samples for flow cytometry, immunohistochemical (IHC) stains, genetic testing, microbiology culturing, and for various other laboratory evaluations
Gross specimen photography
Training PA fellows, pathology residents, and other pathology lab personnel (as needed)
Fulfilling roles in managerial duties, instructional positions, and supervisory roles
Researching
While many PAs are employed in hospitals, they may also gain employment in private pathology la |
https://en.wikipedia.org/wiki/IMP-16 | The IMP-16, by National Semiconductor, was the first multi-chip 16-bit microprocessor, released in 1973. It consisted of five PMOS integrated circuits: four identical RALU chips, short for register and ALU, providing the data path, and one CROM, Control and ROM, providing control sequencing and microcode storage. The IMP-16 is a bit-slice processor; each RALU chip provides a 4-bit slice of the register and arithmetic that work in parallel to produce a 16-bit word length.
Each RALU chip stores its own 4 bits of the program counter, several registers, the ALU, a 16-word LIFO stack, and status flags. There were four 16-bit accumulators, two of which could be used as index registers. The instruction set architecture was similar to that of the Data General Nova. The chip set could be extended with the CROM chip (IMP-16A / 522D) that implemented 16-bit multiply and divide routines. The chipset was driven by a two-phase 715 kHz non-overlapping clock that had a +5 to -12 voltage swing. An integral part of the architecture was a 16-bit input mux that provided various condition bits from the ALUs such as zero, carry, overflow along with general purpose inputs.
The microprocessor was used in the IMP-16P microcomputer and Jacquard Systems' J100 but saw little other use. The IMP-16 was later superseded by the PACE and INS8900 single-chip 16-bit microprocessors, which had a similar architecture but were not binary compatible. It was also used in the Aston Martin Lagonda, thanks to National Semiconductor's chairman Peter Sprague being a major shareholder in Aston Martin at the time. |
https://en.wikipedia.org/wiki/The%20spider%20and%20the%20fly%20problem | The spider and the fly problem is a recreational mathematics problem with an unintuitive solution, asking for a shortest path or geodesic between two points on the surface of a cuboid. It was originally posed by Henry Dudeney.
Problem
In the typical version of the puzzle, an otherwise empty cuboid room 30 feet long, 12 feet wide and 12 feet high contains a spider and a fly. The spider is 1 foot below the ceiling and horizontally centred on one 12′×12′ wall. The fly is 1 foot above the floor and horizontally centred on the opposite wall. The problem is to find the minimum distance the spider must crawl along the walls, ceiling and/or floor to reach the fly, which remains stationary.
Solutions
A naive solution is for the spider to remain horizontally centred, and crawl up to the ceiling, across it and down to the fly, giving a distance of 42 feet. Instead, the shortest path, 40 feet long, spirals around five of the six faces of the cuboid. Alternatively, it can be described by unfolding the cuboid into a net and finding a shortest path (a line segment) on the resulting unfolded system of six rectangles in the plane. Different nets produce different segments with different lengths, and the question becomes one of finding a net whose segment length is minimum. Another path, of intermediate length √1658 ≈ 40.7 feet, crosses diagonally through four faces instead of five.
For a room of length l, width w and height h, the spider a distance b below the ceiling, and the fly a distance a above the floor, the length of the spiral path is √((a + b + l)² + (h + w)²), while the naive solution has length l + h − a + b. Depending on the dimensions of the cuboid, and on the initial positions of the spider and fly, one or another of these paths, or of four other paths, may be the optimal solution. However, there is no rectangular cuboid, and two points on the cuboid, for which the shortest path passes through all six faces of the cuboid.
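A short numerical check of the classic 30×12×12 instance (Python; the four-face length is hard-coded for these dimensions rather than derived from a general formula):

```python
from math import hypot, sqrt

l, w, h = 30, 12, 12   # room dimensions in feet
a, b = 1, 1            # fly above floor, spider below ceiling

naive = l + h - a + b               # up, straight across the ceiling, down: 42
spiral = hypot(a + b + l, h + w)    # unfolding that wraps five faces: 40.0
four_face = sqrt(1658)              # diagonal route through four faces: ~40.72

print(naive, spiral, round(four_face, 2))
```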
A different lateral thinking solution, beyond the stated rules of the puzzle, involves the spider attach |
https://en.wikipedia.org/wiki/Foil%20%28architecture%29 | A foil is an architectural device based on a symmetrical rendering of leaf shapes, defined by overlapping circles of the same diameter that produce a series of cusps to make a lobe. Typically, the number of cusps can be three (trefoil), four (quatrefoil), five (cinquefoil), or a larger number (multifoil).
Foil motifs may be used as part of the heads and tracery of window lights, complete windows themselves, the underside of arches, in heraldry, within panelling, and as part of any decorative or ornament device. Foil types are commonly found in Gothic and Islamic architecture. |
https://en.wikipedia.org/wiki/Vanillylmandelic%20acid | Vanillylmandelic acid (VMA) is a chemical intermediate in the synthesis of artificial vanilla flavorings and is an end-stage metabolite of the catecholamines (epinephrine and norepinephrine). It is produced via intermediary metabolites.
Chemical synthesis
VMA synthesis is the first step of a two-step process practiced by Rhodia since the 1970s to synthesize artificial vanilla. Specifically the reaction entails the condensation of guaiacol and glyoxylic acid in an ice cold, aqueous solution with sodium hydroxide.
Biological elimination
VMA is found in the urine, along with other catecholamine metabolites, including homovanillic acid (HVA), metanephrine, and normetanephrine. In timed urine tests the quantity excreted (usually per 24 hours) is assessed along with creatinine clearance, and the quantity of cortisols, catecholamines, and metanephrines excreted is also measured.
Clinical significance
Urinary VMA is elevated in patients with tumors that secrete catecholamines.
These urinalysis tests are used to diagnose an adrenal gland tumor called pheochromocytoma, a tumor of catecholamine-secreting chromaffin cells. These tests may also be used to diagnose neuroblastomas, and to monitor treatment of these conditions.
Norepinephrine is metabolised into normetanephrine and VMA. Norepinephrine is one of the hormones produced by the adrenal glands, which are found on top of the kidneys. These hormones are released into the blood during times of physical or emotional stress, which are factors that may skew the results of the test. |
https://en.wikipedia.org/wiki/Neuroepigenetics | Neuroepigenetics is the study of how epigenetic changes to genes affect the nervous system. These changes can influence conditions and processes such as addiction, cognition, and neurological development.
Mechanisms
Neuroepigenetic mechanisms regulate gene expression in the neuron. Often, these changes take place due to recurring stimuli. Neuroepigenetic mechanisms involve proteins or protein pathways that regulate gene expression by adding, editing or reading epigenetic marks such as methylation or acetylation. Some of these mechanisms include ATP-dependent chromatin remodeling, LINE1, and prion protein-based modifications. Other silencing mechanisms include the recruitment of specialized proteins that methylate DNA such that the core promoter element is inaccessible to transcription factors and RNA polymerase. As a result, transcription is no longer possible. One such protein pathway is the REST co-repressor complex pathway. There are also several non-coding RNAs that regulate neural function at the epigenetic level. These mechanisms, along with neural histone methylation, affect arrangement of synapses, neuroplasticity, and play a key role in learning and memory.
Methylation
DNA methyltransferases (DNMTs) are involved in regulation of the electrophysiological landscape of the brain through methylation of CpGs. Several studies have shown that inhibition or depletion of DNMT1 activity during neural maturation leads to hypomethylation of the neurons by removing the cell's ability to maintain methylation marks in the chromatin. This gradual loss of methylation marks leads to changes in the expression of crucial developmental genes that may be dosage sensitive, leading to neural degeneration. This was observed in the mature neurons in the dorsal portion of the mouse prosencephalon, where there was significantly greater amounts of neural degeneration and poor neural signaling in the absence of DNMT1. Despite poor survival rates amongst the DNMT1-depleted neurons, some |
https://en.wikipedia.org/wiki/MESM | MESM (Ukrainian: МЕОМ, Мала Електронна Обчислювальна Машина; Russian: МЭСМ, Малая Электронно-Счетная Машина; 'Small Electronic Calculating Machine') was the first universally programmable electronic computer in the Soviet Union. Some authors have also described it as the first in continental Europe, even though the electromechanical computers Zuse Z4 and the Swedish BARK preceded it.
Overview
MESM was created by a team of scientists under the direction of Sergei Alekseyevich Lebedev from the Kiev Institute of Electrotechnology in the Ukrainian SSR, at Feofaniya (near Kyiv).
Initially, MESM was conceived as a layout or model of a Large Electronic Calculating Machine and letter "M" in the title meant "model" (prototype).
Work on the machine was research in nature, in order to experimentally test the principles of constructing universal digital computers. After the first successes and in order to meet the extensive governmental needs of computer technology, it was decided to complete the layout of a full-fledged machine capable of "solving real problems". MESM became operational in 1950. It had about 6,000 vacuum tubes and consumed 25 kW of power. It could perform approximately 3,000 operations per minute.
Creation and operation history
The principal computer architecture scheme was ready by the end of 1949, along with a few schematic diagrams of individual blocks.
In 1950 the computer was mounted in a two-story building of the former hostel of a convent in Feofania, where a psychiatric hospital was located before the second world war.
November 6, 1950: the team performed the first test launch. The test task was:
January 4, 1951: the first useful calculations were performed: computing the factorial of a number and raising a number to a power. The computer was shown to a special commission of the USSR Academy of Sciences; the commission was led by Mstislav Keldysh.
December 25, 1951: Official government testing passed successfully. USSR Academy of Sciences and Mstislav Keldysh began regul |
https://en.wikipedia.org/wiki/Hamming%20space | In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission.
More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q. If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical, binary case, the field is thus GF(2) (also denoted by Z2).
In coding theory, if Q has q elements, then any subset C (usually assumed of cardinality at least two) of the N-dimensional Hamming space over Q is called a q-ary code of length N; the elements of C are called codewords. In the case where C is a linear subspace of its Hamming space, it is called a linear code. A typical example of linear code is the Hamming code. Codes defined via a Hamming space necessarily have the same length for every codeword, so they are called block codes when it is necessary to distinguish them from variable-length codes that are defined by unique factorization on a monoid.
The Hamming distance endows a Hamming space with a metric, which is essential in defining basic notions of coding theory such as error detecting and error correcting codes.
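As a minimal illustration (Python; the codewords are arbitrary assumptions), the Hamming distance simply counts disagreeing positions between two words of the same length:

```python
def hamming_distance(u, v):
    """Number of positions at which two equal-length words differ."""
    if len(u) != len(v):
        raise ValueError("words must have equal length")
    return sum(a != b for a, b in zip(u, v))

# Two words of the binary Hamming space of length N = 7:
print(hamming_distance("1011000", "1001011"))  # 3
```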
Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z4), giving rise to modules instead of vector spaces and ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z2^(2m) (i.e. GF(2)^(2m)) with the Hamming distance and Z4^m (also denoted as GR(4,m)) with the Lee distance. |
https://en.wikipedia.org/wiki/Historia%20Plantarum%20%28Gessner%29 | Historia Plantarum (also called Conradi Gesneri Historia Plantarum) is an extensive botanical encyclopedia by the Swiss natural scientist Conrad Gessner (1516–1565). Although compiled between 1555 and 1565, it was not published until 1754, since Gessner died of the plague before completing it. To complete the work, he amassed a collection of some 1,500 drawings of plants, most of which were his own work. The scale and scientific rigour were unusual for the time, and Gessner was a skilled artist, producing detailed drawings of specific plant parts that illustrated their characteristics, with extensive marginal notation discussing their growth form and habitation.
Reprints
Heinrich Zoller, Martin Steinmann (ed.): Conrad Gesner: Conradi Gesneri Historia plantarum. Gesamtausgabe. Urs-Graf-Verlag, Dietikon-Zürich 1987/1991 |
https://en.wikipedia.org/wiki/World%20Engineering%20Anthropometry%20Resource | World Engineering Anthropometry Resource (WEAR) is an international not-for-profit group that "provides a digital platform for sharing anthropometric data from around the world." It is registered in Europe but its members and partners are from all over the globe. It is made up of “a group of interested experts involved in the application of anthropometry data for design purposes.”
History
WEAR was first proposed in 2000 at an International Ergonomics Association (IEA) meeting. The strategic plan was drafted at the first working meeting in Paris, France in 2002. The first workshop was at the IEA in Seoul, Korea in 2003. Since then there have been working meetings and symposiums in USA, South Africa, Brazil, China, Australia, Canada, The Netherlands, Japan, New Zealand and Spain. WEAR gained support from the International Council for Science (ICSU): Committee on Data for Science and Technology (CODATA) in Berlin, Germany in 2004. Renewal of WEAR as the CODATA Task group for Anthropometric Data and Engineering was made in 2006, 2008, 2010, and 2012. The first WEAR short course was held in Paris in 2008. The website launch of the beta version of the online WEAR resource was at the IEA Congress in Beijing in 2009. The WEAR Conference met in Adelaide in 2011.
To create a searchable resource like WEAR is a mammoth undertaking. A standardization procedure was required to make more than 120 different anthropometric databases searchable and comparable. These databases often had different measurement collection methodologies, some described without much detail, and some in languages other than English. To achieve a workable standard the Anthropometric Measurement Interface (AMI) was created.
A WEAR/CODATA meeting was held on 18 November 2013 in Long Beach, California, USA. During the meeting, those present discussed the new datasets added to the WEAR portal and organised future uploads. WEAR representatives appointed new officers and made plans to atten |
https://en.wikipedia.org/wiki/Pac%C3%ADfica%20Fern%C3%A1ndez | Pacífica Fernández Oreamuno (August 23, 1828 – March 31, 1885) was the inaugural First Lady of Costa Rica and wife of President José María Castro Madriz. She was born in San José, Costa Rica on August 23, 1828 to her parents former Head of State Manuel Fernández Chacón and Dolores Oreamuno Muñoz de la Trinidad, and was sister of President Próspero Fernández Oreamuno.
She married José María Castro Madriz on June 29, 1843, who later became Head of State (1847–1848) and President of the Republic of Costa Rica (1848–1849 and 1866–1868). She still holds the title as the youngest First Lady or spouse of a Costa Rican head of state, as she was only 18 when her husband first gained power.
She suggested a red stripe be added to the flag of Costa Rica, based on the flag of France. The new flag was first sewn on November 12, 1848.
Fernández died in San José, Costa Rica on March 31, 1885. |
https://en.wikipedia.org/wiki/SRX%20expansion%20board | SRX is a series of expansion boards produced by Roland Corporation. First introduced in 2000, they are small boards of electronic circuitry with 64 MB ROMs containing patches (timbres) and rhythm sets (drum kits). They are used to expand certain models of Roland synthesizers, music workstations, keyboards, and sound modules.
Predecessor formats include the 15 SN-U110 PCM cards (U-110, U-20, U-220, D-70, CM-64 and CM-32P), 8 SL-JD80 PCM card/preset RAM card (JD-only) sets and 8 SO-PCM1 1-2 MB cards (both JD-800, JD-990, JV-80, JV-880, JV-90, JV-1000 and JV-1080), 22 SR-JV80 expansion boards (JD-990, JV-880, JV-1010, JV-1080, JV-2080, XV-3080, XV-5080, JV-80, JV-90, JV-1000, XP-30, XP-50, XP-60, XP-80, Fantom FA76, XV-88) and others.
Expansion boards
SRX-01 Dynamic Drum Kits
SRX-02 Concert Piano
SRX-03 Studio SRX
SRX-04 Symphonique Strings
SRX-05 Supreme Dance
SRX-06 Complete Orchestra
SRX-07 Ultimate Keys
SRX-08 Platinum Trax
SRX-09 World Collection
SRX-10 Big Brass Ensemble
SRX-11 Complete Piano
SRX-12 Classic EPs
SRX-96 World Collection and Legendary XP Essentials (special SRX board 2008)
SRX-97 Jon Lord's Rock Organ (special SRX board 2007)
SRX-98 Analog Essentials (special SRX board 2006)
SRX-99 Special Wave Expansion (promo released mid-2004)
Compatible hosts
According to Roland, the following products accept SRX expansion boards. The number in parentheses indicates the number of SRX boards each unit can accept.
Fantom workstation (2)
Fantom-S series (4)
Fantom-X series (4)
Fantom-XR rack unit (6)
Juno-G (1)
Juno-Stage (2)
RD-700, RD-700SX, and RD-700GX (2)
V-Combo (2)
G-70 (1)
E-80 (2)
Roland MC-909 (1)
SonicCell module (2)
XV-88 (2)
XV-5050, XV-3080, and XV-2020 modules (2)
XV-5080 (4)
V-Studio 700 (1)
Some later SRX cards, for example the SRX-96 and SRX-97, do not work in the XV-3080 host synthesizer module or in the XV-88 keyboard synthesizer. |
https://en.wikipedia.org/wiki/Willi%20Apel | Willi Apel (10 October 1893 – 14 March 1988) was a German-American musicologist and noted author of a number of books devoted to music. Among his most important publications are the 1944 edition of The Harvard Dictionary of Music and French Secular Music of the Late Fourteenth Century.
Life and career
Apel was born in Konitz, West Prussia, now Chojnice in Poland. He studied mathematics from 1912 to 1914, and then again after World War I from 1918 to 1922, in various universities in Weimar Germany. Throughout his studies, he had an interest in music and taught piano lessons. He then turned to music full-time, and essentially taught himself about musicology. He received his Ph.D. in 1936 in Berlin (with a dissertation on 15th and 16th century tonality) and immigrated to the USA the same year. He taught at Harvard from 1938 to 1942, but moved on to spend twenty years at Indiana University beginning in 1950. In 1972 he was awarded an honorary doctorate by the university.
Apel's work of the 1940s included books of broad scope, such as The Harvard Dictionary of Music (1944), which he edited, and Historical Anthology of Music (1947–1950, co-authored with Archibald Thompson Davison). His approach was to give as much attention to Medieval, Renaissance and world music as was given to familiar subjects such as Mozart and Beethoven; this influenced the higher music education in the USA. His book on the notation of early polyphonic music was also written in the 1940s, and still serves as one of the essential works on the subject.
In 1950 Apel's interest in early polyphonic notation resulted in an important edition, French Secular Music of the Late Fourteenth Century. In 1958 he published a large work on plainchant, which provided a comprehensive guide to the repertoire and its sources. In the early 1960s he founded the Corpus of Early Keyboard Music (CEKM), a series of editions devoted to early keyboard music. Over the years, CEKM presented the music of lesser-known composers such |
https://en.wikipedia.org/wiki/Air%20displacement%20pipette | Piston-driven air displacement pipettes are a type of micropipette, which are tools to handle volumes of liquid in the microliter scale. They are more commonly used in biology and biochemistry, and less commonly in chemistry; the equipment is susceptible to damage from many organic solvents.
Operation
These pipettes operate by piston-driven air displacement. A vacuum is generated by the vertical travel of a metallic or ceramic piston within an airtight sleeve. The upward movement of the piston, driven by the depression of the plunger, creates a vacuum in the space left vacant by the piston. To fill the vacuum, air from the tip rises, which is then replaced by the liquid that is drawn up into the tip and thus available for transport and dispensing elsewhere.
Sterile technique prevents liquid from coming into contact with the pipette itself. Instead, the liquid is drawn into and dispensed from a disposable pipette tip that is discarded after each transfer; a new pipette tip is used for the next transfer. Depressing the tip ejector button removes the tip, which is cast off without being handled by the operator and can be disposed of safely in an appropriate container. This also prevents contamination of or damage to the calibrated measurement mechanism by the substances being measured.
The plunger is depressed to both draw up and dispense the liquid. Normal operation consists of depressing the plunger button to the first stop while the pipette is held in the air. The tip is then submerged in the liquid to be transported and the plunger is released in a slow and even manner. This draws the liquid up into the tip. The instrument is then moved to the desired dispensing location. The plunger is again depressed to the first stop, and then to the second stop, or 'blowout', position. This action will fully evacuate the tip and dispense the liquid. In an adjustable pipette, the volume of liquid contained in the tip is variable; it can be changed via a dial or other mechani |
https://en.wikipedia.org/wiki/Stone%20algebra | In mathematics, a Stone algebra, or Stone lattice, is a pseudo-complemented distributive lattice such that a* ∨ a** = 1. They were introduced by and named after Marshall Harvey Stone.
Boolean algebras are Stone algebras, and Stone algebras are Ockham algebras.
Examples:
The open-set lattice of an extremally disconnected space is a Stone algebra.
The lattice of positive divisors of a given positive integer is a Stone lattice.
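The second example can be checked mechanically. In the divisor lattice of n, meet is gcd, join is lcm, the bottom element is 1, the top is n, and the pseudocomplement a* is the largest divisor of n coprime to a. A hedged Python sketch (the function names are ours, for illustration):

from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def pseudocomplement(a, n):
    # a* = largest divisor of n meeting a at the bottom element (gcd = 1)
    return max(d for d in divisors(n) if gcd(d, a) == 1)

def lcm(a, b):
    return a * b // gcd(a, b)

def is_stone(n):
    # verify the Stone identity a* ∨ a** = top for every divisor a of n
    return all(
        lcm(pseudocomplement(a, n),
            pseudocomplement(pseudocomplement(a, n), n)) == n
        for a in divisors(n)
    )

print(is_stone(12), is_stone(360))   # True True

For n = 12, for instance, 2* = 3 and 2** = 4, and lcm(3, 4) = 12, the top element.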
See also
De Morgan algebra
Heyting algebra |
https://en.wikipedia.org/wiki/Stopband | A stopband is a band of frequencies, between specified limits, through which a circuit, such as a filter or telephone circuit, does not allow signals to pass, or the attenuation is above the required stopband attenuation level. Depending on application, the required attenuation within the stopband may typically be a value between 20 and 120 dB higher than the nominal passband attenuation, which often is 0 dB.
The lower and upper limiting frequencies, also denoted lower and upper stopband corner frequencies, are the frequencies where the stopband and the transition bands meet in a filter specification. The stopband of a low-pass filter extends from the stopband corner frequency (which is slightly higher than the passband 3 dB cut-off frequency) upward to infinity. The stopband of a high-pass filter consists of the frequencies from 0 hertz up to a stopband corner frequency (slightly lower than the passband cut-off frequency).
A band-stop filter has one stopband, specified by two non-zero and non-infinite corner frequencies. The difference between the limits in the band-stop filter is the stopband bandwidth, which usually is expressed in hertz.
A bandpass filter typically has two stopbands. The shape factor of a bandpass filter is the ratio of the difference between the stopband limits to the 3 dB bandwidth.
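As a concrete illustration, the attenuation across a stopband can be read off a filter's frequency response. A hedged Python sketch using SciPy (the order, cut-off, and stopband corner below are arbitrary assumptions, not values from this article):

import numpy as np
from scipy import signal

fs = 8000.0                                          # sampling rate, Hz
b, a = signal.butter(5, 1000.0, btype='low', fs=fs)  # 5th-order low-pass

w, h = signal.freqz(b, a, worN=2048, fs=fs)          # frequency response
mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

stop_edge = 2000.0                            # assumed stopband corner, Hz
atten = -mag_db[w >= stop_edge].max()         # worst case inside the stopband
print(f"Attenuation above {stop_edge:.0f} Hz: at least {atten:.1f} dB")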
See also
Passband
Band-stop filter
Band gap in solid state physics
Band rejection |
https://en.wikipedia.org/wiki/Correlation%20%28projective%20geometry%29 | In projective geometry, a correlation is a transformation of a d-dimensional projective space that maps subspaces of dimension k to subspaces of dimension d − k − 1, reversing inclusion and preserving incidence. Correlations are also called reciprocities or reciprocal transformations.
In two dimensions
In the real projective plane, points and lines are dual to each other. As expressed by Coxeter,
A correlation is a point-to-line and a line-to-point transformation that preserves the relation of incidence in accordance with the principle of duality. Thus it transforms ranges into pencils, pencils into ranges, quadrangles into quadrilaterals, and so on.
Given a line m and P a point not on m, an elementary correlation is obtained as follows: for every point Q on m form the line PQ. The inverse correlation starts with the pencil on P: for any line q in this pencil take the point q ∩ m. The composition of two correlations that share the same pencil is a perspectivity.
In three dimensions
In a 3-dimensional projective space a correlation maps a point to a plane. As stated in one textbook:
If κ is such a correlation, every point P is transformed by it into a plane π′ = κP, and conversely, every point P arises from a unique plane π′ by the inverse transformation κ−1.
Three-dimensional correlations also transform lines into lines, so they may be considered to be collineations of the two spaces.
In higher dimensions
In general n-dimensional projective space, a correlation takes a point to a hyperplane. This context was described by Paul Yale:
A correlation of the projective space P(V) is an inclusion-reversing permutation of the proper subspaces of P(V).
He proves a theorem stating that a correlation φ interchanges joins and intersections, and that for any projective subspace W of P(V), the dimension of the image of W under φ is n − 2 − dim W, where n is the dimension of the vector space V used to produce the projective space P(V).
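A standard source of such correlations, included here only as an illustration, is a nondegenerate bilinear form $B$ on $V$: map every subspace to its orthogonal complement,
$$\varphi(W) = W^{\perp} = \{\, y \in V : B(w, y) = 0 \text{ for all } w \in W \,\},$$
so that $W \subseteq W'$ implies $\varphi(W') \subseteq \varphi(W)$. Since $\dim W^{\perp} = n - \dim W$ for vector subspaces, the projective dimensions satisfy $\dim \varphi(W) = n - 2 - \dim W$, matching the theorem above.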
Existence of correlations
Correlations can exist only if the space is self-dual. For |
https://en.wikipedia.org/wiki/Hydraulic%20Launch%20Assist | Hydraulic Launch Assist (HLA) is the name of a hydraulic hybrid regenerative braking system for land vehicles produced by the Eaton Corporation.
Background
The HLA system recycles energy by converting kinetic energy into potential energy during deceleration via hydraulics, storing the energy at high pressure in an accumulator filled with nitrogen gas. The energy is then returned to the vehicle during subsequent acceleration, thereby reducing the amount of work done by the internal combustion engine. This system provides a considerable increase in vehicle productivity while reducing fuel consumption in stop-and-go use profiles such as refuse vehicles and other heavy-duty vehicles.
Parallel vs. series hybrids
The HLA system is called a parallel hydraulic hybrid. In parallel systems the original vehicle drive-line remains, allowing the vehicle to operate normally when the HLA system is disengaged. When the HLA is engaged, energy is captured during deceleration and released during acceleration, in contrast to series hydraulic hybrid systems which replace the entire traditional drive-line to provide power transmission in addition to regenerative braking.
Hydraulic vs. electric hybrids
Hydraulic hybrids are said to be power dense, while electric hybrids are energy dense. This means that electric hybrids, while able to deliver large amounts of energy over long periods of time, are limited by the rate at which the chemical energy in the batteries can be converted to mechanical energy and back. This is largely governed by reaction rates in the battery and the current ratings of associated components. Hydraulic hybrids, on the other hand, are capable of transferring energy at a much higher rate, but are limited by the amount of energy that can be stored. For this reason, hydraulic hybrids lend themselves well to stop-and-go applications and heavy vehicles.
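To see why stop-and-go duty favors a power-dense store, consider the energy and mean power in a single braking event. A hedged Python sketch (the vehicle mass, speed, and braking time are illustrative assumptions, not figures from Eaton):

mass = 20000.0     # kg, assumed loaded refuse truck
v = 15.0           # m/s (~54 km/h), speed before braking
t_brake = 5.0      # s, assumed braking duration

kinetic_energy = 0.5 * mass * v**2        # J available for recapture
mean_power = kinetic_energy / t_brake     # W the store must absorb

print(f"Energy per stop: {kinetic_energy / 1e6:.2f} MJ")    # 2.25 MJ
print(f"Mean absorption power: {mean_power / 1e3:.0f} kW")  # 450 kW

A modest amount of energy per stop absorbed at a very high rate is exactly the regime where an accumulator's power density outweighs its limited energy capacity.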
Applications
Concept vehicles
Ford Motor Company included the HLA system in their 2002 F-350 Tonka truck concept vehicle, reported |
https://en.wikipedia.org/wiki/Zimmert%20set | In mathematics, a Zimmert set is a set of positive integers associated with the structure of quotients of hyperbolic three-space by a Bianchi group.
Definition
Fix an integer d and let D be the discriminant of the imaginary quadratic field Q(√−d). The Zimmert set Z(d) is the set of positive integers n such that: 4n² < −D − 3 and n ≠ 2; D is a quadratic non-residue of all odd primes in d; and n is odd if D is not congruent to 5 modulo 8. The cardinality of Z(d) may be denoted by z(d).
Property
For all but a finite number of d we have z(d) > 1: indeed this is true for all d > 10476.
Application
Let Γd denote the Bianchi group PSL(2, Od), where Od is the ring of integers of Q(√−d). As a subgroup of PSL(2, C), Γd acts on hyperbolic 3-space H3, with a fundamental domain. It is a theorem that there are only finitely many values of d for which Γd can contain an arithmetic subgroup G for which the quotient H3/G is a link complement. Zimmert sets are used to obtain results in this direction: z(d) is a lower bound for the rank of the largest free quotient of Γd, and so the result above implies that almost all Bianchi groups have non-cyclic free quotients. |
https://en.wikipedia.org/wiki/Displacement%20activity | Displacement activities occur when an animal experiences high motivation for two or more conflicting behaviours: the resulting displacement activity is usually unrelated to the competing motivations. Birds, for example, may peck at grass when uncertain whether to attack or flee from an opponent; similarly, a human may scratch their head when they do not know which of two options to choose. Displacement activities may also occur when animals are prevented from performing a single behaviour for which they are highly motivated. Displacement activities often involve actions which bring comfort to the animal such as scratching, preening, drinking or feeding.
In the assessment of animal welfare, displacement activities are sometimes used as evidence that an animal is highly motivated to perform a behaviour that the environment prevents. One example is that when hungry hens are trained to eat from a particular food dispenser and then find the dispenser blocked, they often begin to pace and preen themselves vigorously. These actions have been interpreted as displacement activities, and similar pacing and preening can be used as evidence of frustration in other situations.
Psychiatrist and primatologist Alfonso Troisi proposed that displacement activities can be used as non-invasive measures of stress in primates. He noted that various non-human primates perform self-directed activities such as grooming and scratching in situations likely to involve anxiety and uncertainty, and that these behaviours are increased by anxiogenic (anxiety-producing) drugs and reduced by anxiolytic (anxiety-reducing) drugs. In humans, he noted that similar self-directed behaviour, together with aimless manipulation of objects (chewing pens, twisting rings), can be used as indicators of "stressful stimuli and may reflect an emotional condition of negative affect".
More recently the term 'displacement activity' has been widely adopted to describe a form of procrastination. It is commonly us |
https://en.wikipedia.org/wiki/Deterministic%20context-free%20language | In formal language theory, deterministic context-free languages (DCFL) are a proper subset of context-free languages. They are the context-free languages that can be accepted by a deterministic pushdown automaton. DCFLs are always unambiguous, meaning that they admit an unambiguous grammar. There are non-deterministic unambiguous CFLs, so DCFLs form a proper subset of unambiguous CFLs.
DCFLs are of great practical interest, as they can be parsed in linear time, and various restricted forms of DCFGs admit simple practical parsers. They are thus widely used throughout computer science.
Description
The notion of the DCFL is closely related to the deterministic pushdown automaton (DPDA). It captures the language power that remains when pushdown automata are made deterministic: the automata become unable to choose between different state-transition alternatives and as a consequence cannot recognize all context-free languages. Unambiguous grammars do not always generate a DCFL. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the unambiguous context-free grammar S → 0S0 | 1S1 | ε. An arbitrary string of this language cannot be parsed without reading all its letters first, which means that a pushdown automaton has to try alternative state transitions to accommodate the different possible lengths of a semi-parsed string.
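By contrast, a DCFL such as {aⁿbⁿ : n ≥ 1} can be recognized with no guessing at all. A minimal Python sketch of a deterministic pushdown automaton (the state names and stack symbols are our own illustrative choices):

def dpda_accepts(s: str) -> bool:
    stack = ["$"]                 # bottom-of-stack marker, never popped
    state = "reading_a"
    for ch in s:
        if state == "reading_a" and ch == "a":
            stack.append("A")     # push one symbol per 'a'
        elif ch == "b" and stack[-1] == "A":
            stack.pop()           # match one 'a' for each 'b'
            state = "reading_b"
        else:
            return False          # exactly one transition applies; no backtracking
    return state == "reading_b" and stack == ["$"]

assert dpda_accepts("aaabbb") and not dpda_accepts("aabbb")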
Properties
Deterministic context-free languages can be recognized by a deterministic Turing machine in polynomial time and O(log2 n) space; as a corollary, DCFL is a subset of the complexity class SC.
The set of deterministic context-free languages is closed under the following operations:
complement
inverse homomorphism
right quotient with a regular language
pre: pre(L) is the subset of all strings of L having a proper prefix that also belongs to L.
min: min(L) is the subset of all strings of L that do not have a proper prefix in L.
max: max(L) is the subset of all strings of L that are not the prefix of a longer string in L. |
https://en.wikipedia.org/wiki/End-plate%20potential | End plate potentials (EPPs) are the depolarizations of skeletal muscle fibers caused by neurotransmitters binding to the postsynaptic membrane in the neuromuscular junction. They are called "end plates" because the postsynaptic terminals of muscle fibers have a large, saucer-like appearance. When an action potential reaches the axon terminal of a motor neuron, vesicles carrying neurotransmitters (mostly acetylcholine) are exocytosed and the contents are released into the neuromuscular junction. These neurotransmitters bind to receptors on the postsynaptic membrane and lead to its depolarization. In the absence of an action potential, acetylcholine vesicles spontaneously leak into the neuromuscular junction and cause very small depolarizations in the postsynaptic membrane. This small response (~0.4 mV) is called a miniature end plate potential (MEPP) and is generated by one acetylcholine-containing vesicle. It represents the smallest possible depolarization which can be induced in a muscle.
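Under the quantal interpretation, an evoked EPP is approximately the sum of many unitary MEPPs. A hedged Python sketch (the vesicle count and its distribution are illustrative assumptions, not values from this article):

import random

MEPP_MV = 0.4        # mV, amplitude of one miniature end plate potential
MEAN_QUANTA = 100    # assumed mean number of vesicles released per impulse

def evoked_epp() -> float:
    # approximately Poisson-distributed vesicle count via a binomial draw,
    # each released vesicle contributing one MEPP-sized depolarization
    released = sum(1 for _ in range(1000) if random.random() < MEAN_QUANTA / 1000)
    return released * MEPP_MV

print(f"Simulated EPP: {evoked_epp():.1f} mV")   # ~40 mV on average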
Neuromuscular junction
The neuromuscular junction is the synapse formed between an alpha motor neuron (α-MN) and a skeletal muscle fiber. In order for a muscle to contract, an action potential is first propagated down a nerve until it reaches the axon terminal of the motor neuron. The motor neuron then stimulates the muscle fibers to contract by causing an action potential on the postsynaptic membrane of the neuromuscular junction.
Acetylcholine
End plate potentials are produced almost entirely by the neurotransmitter acetylcholine in skeletal muscle. Acetylcholine is the second most important excitatory neurotransmitter in the body following glutamate. It also plays roles in sensory processing, including touch, vision, and hearing. It was the first neurotransmitter to be identified, by Henry Dale in 1914. Acetylcholine is synthesized in the cytoplasm of the neuron from choline and acetyl-CoA. Choline acetyltransferase is |
https://en.wikipedia.org/wiki/Day%20convolution | In mathematics, specifically in category theory, Day convolution is an operation on functors that can be seen as a categorified version of function convolution. It was first introduced by Brian Day in 1970 in the general context of enriched functor categories. Day convolution acts as a tensor product for a monoidal category structure on the category of functors over some monoidal category .
Definition
Let $\mathbf{C}$ be a monoidal category enriched over a symmetric monoidal closed category $V$. Given two functors $F, G \colon \mathbf{C} \to V$, we define their Day convolution as the following coend:
$$(F \otimes_{\mathrm{Day}} G)(c) = \int^{x, y \in \mathbf{C}} \mathbf{C}(x \otimes y, c) \otimes F(x) \otimes G(y).$$
If $\mathbf{C}$ is symmetric, then $\otimes_{\mathrm{Day}}$ is also symmetric. We can show this defines an associative monoidal product. |
https://en.wikipedia.org/wiki/Safety%20sign | Safety signs are a type of sign designed to warn of hazards, indicate mandatory actions or required use of personal protective equipment, prohibit actions or objects, identify the location of firefighting or safety equipment, or mark exit routes.
In addition to being encountered in industrial facilities, safety signs are also found in public places and communities, at electrical pylons and electrical substations, cliffs, beaches, and bodies of water, on motorized equipment such as lawn mowers, and in areas closed for construction or demolition.
History
In the United States
Early signs and ASA Z35.1
One of the earliest attempts to standardize safety signage in the United States was the 1914 Universal Safety Standards. The signs were fairly simple in nature, consisting of an illuminated board with "DANGER" in white letters on a red field. An arrow was added to draw attention to the danger if it was less obvious. Signs indicating exits and first aid kits consisted of a green board with white letters. The goal with signs was to inform briefly.
The next major standard to follow was ASA Z35.1 in 1941, which was later revised in 1967 and 1968.
The Occupational Safety and Health Administration derived its requirements from ASA Z35.1-1968 in the development of its rule on the usage of safety signage, OSHA §1910.145.
ANSI Z535
In the 1980s, the American National Standards Institute formed a committee to update the Z53 and Z35 standards. In 1991, ANSI Z535 was introduced, which was intended to modernize signage through increased use of symbols, the introduction of a new header, 'Warning', and the requirement that wording not just state the hazard but also the possible harm the hazard could inflict and how to avoid it. Until 2013, OSHA regulations technically required usage of signage prescribed in OSHA §1910.145, based on the standard ASA Z35.1-1968. Regulation changes and clarification of the law now allow usage of signs complying with either OSHA §1910.145 or ANSI Z535 |
https://en.wikipedia.org/wiki/Sylvester%20equation | In mathematics, in the field of control theory, a Sylvester equation is a matrix equation of the form
$$AX + XB = C.$$
It is named after English mathematician James Joseph Sylvester. Given matrices A, B, and C, the problem is to find the possible matrices X that obey this equation. All matrices are assumed to have coefficients in the complex numbers. For the equation to make sense, the matrices must have appropriate sizes; for example, they could all be square matrices of the same size. But more generally, A and B must be square matrices of sizes n and m respectively, and then X and C both have n rows and m columns.
A Sylvester equation has a unique solution for X exactly when there are no common eigenvalues of A and −B.
More generally, the equation AX + XB = C has been considered as an equation of bounded operators on a (possibly infinite-dimensional) Banach space. In this case, the condition for the uniqueness of a solution X is almost the same: There exists a unique solution X exactly when the spectra of A and −B are disjoint.
Existence and uniqueness of the solutions
Using the Kronecker product notation and the vectorization operator $\operatorname{vec}$, we can rewrite Sylvester's equation in the form
$$(I_m \otimes A + B^{\mathsf T} \otimes I_n)\,\operatorname{vec}X = \operatorname{vec}C,$$
where $A$ is of dimension $n \times n$, $B$ is of dimension $m \times m$, $X$ and $C$ of dimension $n \times m$, and $I_k$ is the $k \times k$ identity matrix. In this form, the equation can be seen as a linear system of dimension $nm \times nm$.
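In practice one rarely forms this Kronecker system explicitly; dedicated routines exist. The following Python sketch (SciPy's solve_sylvester is an existing routine; the random matrices are arbitrary test data, not from this article) cross-checks the dedicated solver against the Kronecker formulation:

import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
C = rng.standard_normal((3, 2))

# Dedicated solver for AX + XB = C (Bartels–Stewart algorithm internally)
X = solve_sylvester(A, B, C)

# Equivalent Kronecker-product system: (I_m ⊗ A + Bᵀ ⊗ I_n) vec(X) = vec(C)
n, m = A.shape[0], B.shape[0]
K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
x = np.linalg.solve(K, C.flatten(order="F"))   # column-major vec
assert np.allclose(x.reshape((n, m), order="F"), X)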
Theorem.
Given matrices $A \in \mathbb{C}^{n \times n}$ and $B \in \mathbb{C}^{m \times m}$, the Sylvester equation $AX + XB = C$ has a unique solution $X \in \mathbb{C}^{n \times m}$ for any $C$ if and only if $A$ and $-B$ do not share any eigenvalue.
Proof.
The equation $AX + XB = C$ is a linear system with $nm$ unknowns and the same number of equations. Hence it is uniquely solvable for any given $C$ if and only if the homogeneous equation $AX + XB = 0$
admits only the trivial solution $X = 0$.
(i) Assume that $A$ and $-B$ do not share any eigenvalue. Let $X$ be a solution to the abovementioned homogeneous equation. Then $AX = X(-B)$, which can be lifted to $A^k X = X(-B)^k$ for each $k \geq 1$ by mathematical induction. Consequently, $p(A)\,X = X\,p(-B)$ for any polynomial $p$. In particular, let $p$ be the characteristic polynomial of $A$. Then
d |
https://en.wikipedia.org/wiki/Cunningham%20function | In statistics, the Cunningham function or Pearson–Cunningham function ωm,n(x) is a generalisation of a special function introduced by Pearson (1906) and studied in the form here by Cunningham (1908). It can be defined in terms of the confluent hypergeometric function U.
The function was studied by Cunningham in the context of a multivariate generalisation of the Edgeworth expansion for approximating a probability density function based on its (joint) moments. In a more general context, the function is related to the solution of the constant-coefficient diffusion equation, in one or more dimensions.
The function ωm,n(x) is a solution of a second-order linear differential equation, inherited from the confluent hypergeometric equation satisfied by U.
The special function studied by Pearson is given, in his notation, by ωn(x) = ω2n,n(x).
Notes |
https://en.wikipedia.org/wiki/Marine%20geology | Marine geology or geological oceanography is the study of the history and structure of the ocean floor. It involves geophysical, geochemical, sedimentological and paleontological investigations of the ocean floor and coastal zone. Marine geology has strong ties to geophysics and to physical oceanography.
Marine geological studies were of extreme importance in providing the critical evidence for sea floor spreading and plate tectonics in the years following World War II. The deep ocean floor is the last essentially unexplored frontier, and detailed mapping in support of both military (submarine) objectives and economic (petroleum and metal mining) objectives drives the research.
Overview
The Ring of Fire around the Pacific Ocean with its attendant intense volcanism and seismic activity poses a major threat for disastrous earthquakes, tsunamis and volcanic eruptions. Any early warning systems for these disastrous events will require a more detailed understanding of marine geology of coastal and island arc environments.
The study of littoral and deep sea sedimentation and the precipitation and dissolution rates of calcium carbonate in various marine environments has important implications for global climate change.
The discovery and continued study of mid-ocean rift zone volcanism and hydrothermal vents, first in the Red Sea and later along the East Pacific Rise and the Mid-Atlantic Ridge systems were and continue to be important areas of marine geological research. The extremophile organisms discovered living within and adjacent to those hydrothermal systems have had a pronounced impact on our understanding of life on Earth and potentially the origin of life within such an environment.
Oceanic trenches are hemispheric-scale long but narrow topographic depressions of the sea floor. They also are the deepest parts of the ocean floor.
Mariana Trench
The Mariana Trench (or Marianas Trench) is the deepest known submarine trench, and the deepest location in the Eart |
https://en.wikipedia.org/wiki/Isothermal%20titration%20calorimetry | In chemical thermodynamics, isothermal titration calorimetry (ITC) is a physical technique used to determine the thermodynamic parameters of interactions in solution. It is most often used to study the binding of small molecules (such as medicinal compounds) to larger macromolecules (proteins, DNA etc.) in a label-free environment. It consists of two cells which are enclosed in an adiabatic jacket. The compounds to be studied are placed in the sample cell, while the other cell, the reference cell, is used as a control and contains the buffer in which the sample is dissolved.
The technique was developed by H. D. Johnston in 1968 as a part of his Ph.D. dissertation at Brigham Young University, and was considered niche until introduced commercially by MicroCal Inc. in 1988. Compared to other calorimeters, ITC has the advantage of not requiring any correction, since there is no heat exchange between the system and the environment.
Thermodynamic measurements
ITC is a quantitative technique that can determine the binding affinity (Ka), enthalpy changes (ΔH), and binding stoichiometry (n) of the interaction between two or more molecules in solution. This is achieved by integrating the area of the injection peaks and plotting the resulting heats per mole of injectant, ΔH (kcal/mol), against the molar ratio of the binding event. From these initial measurements, Gibbs free energy changes (ΔG) and entropy changes (ΔS) can be determined using the relationship ΔG = −RT ln Ka = ΔH − TΔS
(where R is the gas constant and T is the absolute temperature).
For accurate measurements of binding affinity, the curve of the thermogram must be sigmoidal. The profile of the curve is determined by the c-value, which is calculated using the equation c = n·Ka·[M],
where n is the stoichiometry of the binding, Ka is the association constant and [M] is the concentration of the molecule in the cell. The c-value must fall between 1 and 1000, ideally between 10 and 100. In terms of binding affinity, it would be approximately from ~ within the limit range.
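As a worked illustration, the following Python sketch (all numeric values are illustrative assumptions, not measurements) derives ΔG and ΔS from fitted ITC parameters and checks the c-value:

import math

R = 1.987e-3   # gas constant, kcal/(mol·K)
T = 298.15     # absolute temperature, K
Ka = 1.0e6     # association constant, 1/M (assumed fit result)
dH = -10.0     # enthalpy change, kcal/mol (assumed fit result)

dG = -R * T * math.log(Ka)   # ΔG = −RT ln Ka
dS = (dH - dG) / T           # rearranged from ΔG = ΔH − TΔS
print(f"ΔG = {dG:.2f} kcal/mol, ΔS = {1000 * dS:.2f} cal/(mol·K)")

# c-value for an assumed stoichiometry and cell concentration;
# a value of ~10–100 indicates a well-shaped sigmoidal thermogram
n, M = 1.0, 50e-6            # stoichiometry and cell concentration (mol/L)
print(f"c = {n * Ka * M:.0f}")   # -> c = 50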
In |
https://en.wikipedia.org/wiki/Endospore | An endospore is a dormant, tough, and non-reproductive structure produced by some bacteria in the phylum Bacillota. The name "endospore" is suggestive of a spore or seed-like form (endo means 'within'), but it is not a true spore (i.e., not an offspring). It is a stripped-down, dormant form to which the bacterium can reduce itself. Endospore formation is usually triggered by a lack of nutrients, and usually occurs in gram-positive bacteria. In endospore formation, the bacterium divides within its cell wall, and one side then engulfs the other. Endospores enable bacteria to lie dormant for extended periods, even centuries. There are many reports of spores remaining viable over 10,000 years, and revival of spores millions of years old has been claimed. There is one report of viable spores of Bacillus marismortui in salt crystals approximately 250 million years old. When the environment becomes more favorable, the endospore can reactivate itself into a vegetative state. Most types of bacteria cannot change to the endospore form. Examples of bacterial species that can form endospores include Bacillus cereus, Bacillus anthracis, Bacillus thuringiensis, Clostridium botulinum, and Clostridium tetani. Endospore formation is not found among Archaea.
The endospore consists of the bacterium's DNA, ribosomes and large amounts of dipicolinic acid. Dipicolinic acid is a spore-specific chemical that appears to help endospores maintain dormancy. This chemical accounts for up to 10% of the spore's dry weight.
Endospores can survive without nutrients. They are resistant to ultraviolet radiation, desiccation, high temperature, extreme freezing and chemical disinfectants. Thermo-resistant endospores were first hypothesized by Ferdinand Cohn after studying Bacillus subtilis growth on cheese after boiling the cheese. His notion of spores being the reproductive mechanism for the growth was a large blow to the previous suggestions of spontaneous generation. Astroph |
https://en.wikipedia.org/wiki/Protocol%20pipelining | Protocol pipelining is a technique in which multiple requests are written out to a single socket without waiting for the corresponding responses. Pipelining can be used in various application layer network protocols, like HTTP/1.1, SMTP and FTP.
The pipelining of requests results in a dramatic improvement in protocol performance, especially over high-latency connections (such as satellite Internet connections), because it reduces the time a process spends waiting for responses.
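A minimal Python sketch of the idea (example.com is illustrative; the reader is deliberately crude and does no real HTTP parsing):

import socket

host = "example.com"
keep_alive = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n"
closing = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"

with socket.create_connection((host, 80)) as sock:
    # write both requests back-to-back, without waiting for the first response
    sock.sendall((keep_alive + closing).encode())
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])   # status line of the first response

Both responses arrive in order on the same connection; without pipelining, the second request would only be written after the first response had completed a full round trip.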
See also
HTTP pipelining |
https://en.wikipedia.org/wiki/Thigmomorphogenesis | Thigmomorphogenesis (from Ancient Greek θιγγάνω (thingánō) to touch, μορφή (morphê) shape, and γένεσις (génesis) creation) is the response by plants to mechanical sensation (touch) by altering their growth patterns. In the wild, these patterns can be evinced by wind, raindrops, and rubbing by passing animals.
Botanists have long known that plants grown in a greenhouse tend to be taller and more spindly than plants grown outside. M.J. Jaffe discovered in the 1970s that regular rubbing or bending of stems inhibits their elongation and stimulates their radial expansion, resulting in shorter, stockier plants.
Growth responses are caused by changes in gene expression. This is likely related to the calcium-binding protein calmodulin, suggesting Ca2+ involvement in mediating growth responses.
Thigmomorphogenesis has also been determined to be a form of phenotypic plasticity in plants, potentially inducing different adaptive and stress responses in a variety of species. |
https://en.wikipedia.org/wiki/Field-replaceable%20unit | A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc.
Other equipment
FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a "remove and replace" repair procedure, with specialized repair performed at centralized depot or by the OEM.
History
Many vacuum tube computers had FRUs:
Pluggable units containing one or more vacuum tubes and various passive components
Most transistorized and integrated circuit-based computers had FRUs:
Computer modules, circuit boards containing discrete transistors and various passive components. Examples:
IBM SMS cards
DEC System Building Blocks cards
DEC Flip-Chip cards
Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards.
Vacuum tubes themselves are usually FRUs.
For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However, modern televisions put all the electronics on one large board to reduce manufacturing costs.
Trends
As the sophistication and complexity of multi-replaceable |
https://en.wikipedia.org/wiki/Certified%20health%20physicist | Certified Health Physicist is an official title granted by the American Board of Health Physics, the certification board for health physicists in the United States. A Certified Health Physicist is designated by the letters CHP or DABHP (Diplomate of the American Board of Health Physics) after his or her name.
A certification by the ABHP is not a license to practice and does not confer any legal qualification to practice health physics. However, the certification is well respected and indicates a high level of achievement by those who obtain it.
Certified Health Physicists are plenary or emeritus members of the American Academy of Health Physics (AAHP). In 2019, the AAHP web site listed over 1600 plenary and emeritus members.
Professional responsibilities
A person certified as a health physicist has a responsibility to uphold the professional integrity associated with the certification to promote the practice and science of radiation safety. It is expected that such a person will always give health physics information based on the highest standards of science and professional ethics. A certified individual has a responsibility to remain professionally active in the health physics field and remain technically competent in the scientific, technical and regulatory developments in the field.
General requirements to receive the certification
The requirements for prospective candidates for certification are:
Academics. At least a bachelor's degree from an accredited college or university in physical sciences, engineering, or in a biological science, with a minimum of 20 semester hours in physical science.
Experience. At least six years of professional experience in health physics. By permission of the Board, advanced degrees may substitute for one year (master's degree) or two years (doctorate) of the required experience.
References. A reference from the immediate supervisor and from at least two other individuals, including one from a currently certified Hea |
https://en.wikipedia.org/wiki/Hans%20Samelson | Hans Samelson (3 March 1916 – 22 September 2005) was a German-American mathematician who worked in differential geometry, topology and the theory of Lie groups and Lie algebras—important in describing the symmetry of analytical structures.
Career and personal life
The eldest of three sons, Samelson was born on 3 March 1916, in Strassburg, Germany (now Strasbourg, France). His brother Klaus later became a mathematician and early computer science pioneer in Germany. His parents—one of Protestant and one of Jewish background—were both pediatricians. He spent most of his youth in Breslau, Silesia, Germany (now Wrocław, Poland), and began his advanced mathematical education there, at the University of Breslau. His family helped him leave Nazi Germany in 1936 for Zurich, Switzerland, where he studied with the geometer Heinz Hopf and received his doctorate in 1940 from the Swiss Federal Institute of Technology.
In 1941, he accepted a position at the Institute for Advanced Study in Princeton and immigrated to the United States; he arrived by ship six months before the United States entered World War II and acquired U.S. citizenship several years later. After leaving Princeton, he held faculty positions at the University of Wyoming (1942–1943), Syracuse University (1943–1946) and the University of Michigan (1946–1960) before coming to Stanford in 1960. He was recognized with the Dean's Award for Distinguished Teaching in 1977. He served as chair of the Mathematics Department from 1979 to 1982.
Though he became emeritus in 1986, he remained professionally active throughout his retirement, publishing articles on both contemporary and historical mathematical topics. One solved an architectural puzzle associated with the construction of the Brunelleschi Dome in Florence, Italy.
He was active in the Palo Alto Friends Meeting (Quakers) during his retirement, serving as treasurer for several years.
See also
Bott–Samelson variety
Publications
Notes on Lie Algebras
|
https://en.wikipedia.org/wiki/Magnetotaxis | Magnetotaxis is a process implemented by a diverse group of Gram-negative bacteria that involves orienting and coordinating movement in response to Earth's magnetic field. This process is mainly carried out by microaerophilic and anaerobic bacteria found in aquatic environments such as salt marshes, seawater, and freshwater lakes. By sensing the magnetic field, the bacteria are able to orient themselves towards environments with more favorable oxygen concentrations. This orientation towards more favorable oxygen concentrations allows the bacteria to reach these environments faster as opposed to random movement through Brownian motion.
Overview
Magnetic bacteria (e.g. Magnetospirillum magnetotacticum) contain internal structures known as magnetosomes which are responsible for the process of magnetotaxis. After orienting to the magnetic field using the magnetosomes, the bacteria use flagella to swim along the magnetic field, towards the more favorable environment. Magnetotaxis has no impact on the average speed of the bacteria. However, magnetotaxis allows bacteria to guide their otherwise random movement. This process is similar in practice to aerotaxis, but governed by magnetic fields instead of oxygen concentrations. Magnetotaxis and aerotaxis often function together, as bacteria can use both magnetotactic and aerotactic systems to find proper oxygen concentrations. This is referred to as magneto-aerotaxis. By orienting towards the Earth's poles, marine bacteria are able to direct their movement downwards, towards the anaerobic or microaerobic sediments. This allows bacteria to change metabolic environments, which can enable chemical cycles.
Magnetosomes
Magnetosomes contain crystals - often magnetite (Fe3O4). Some extremophile bacteria from sulfurous environments have been isolated with greigite (an iron-sulfide compound Fe3S4). Some magnetotactic bacteria also contain pyrite (FeS2) crystals, possibly as a transformation product of greigite. These crystals are c |
https://en.wikipedia.org/wiki/SYCL | SYCL is a higher-level programming model to improve programming productivity on various hardware accelerators. It is a single-source embedded domain-specific language (eDSL) based on pure C++17. It is a standard developed by Khronos Group, announced in March 2014.
Origin of the name
SYCL (pronounced ‘sickle’) originally stood for SYstem-wide Compute Language, but since 2020 SYCL developers have stated that SYCL is a name and have made clear that it is no longer an acronym and contains no reference to OpenCL.
Purpose
SYCL is a royalty-free, cross-platform abstraction layer that builds on the underlying concepts, portability and efficiency inspired by OpenCL that enables code for heterogeneous processors to be written in a “single-source” style using completely standard C++. SYCL enables single-source development where C++ template functions can contain both host and device code to construct complex algorithms that use hardware accelerators, and then re-use them throughout their source code on different types of data.
While the SYCL standard started as the higher-level programming model sub-group of the OpenCL working group and was originally developed for use with OpenCL and SPIR, SYCL has been a Khronos Group workgroup independent of the OpenCL working group since September 20, 2019, and starting with SYCL 2020 it has been generalized as a more general heterogeneous framework able to target other systems. This is now possible with the concept of a generic backend to target any acceleration API while enabling full interoperability with the target API, such as using existing native libraries to reach the maximum performance, along with simplifying the programming effort. For example, the Open SYCL implementation targets ROCm and CUDA via AMD's cross-vendor HIP.
Versions
SYCL was introduced at GDC in March 2014 with provisional version 1.2; the SYCL 1.2 final version was introduced at IWOCL 2015 in May 2015.
The latest version for the previous SYCL 1.2.1 series is |
https://en.wikipedia.org/wiki/Inferior%20transverse%20ligament%20of%20the%20tibiofibular%20syndesmosis | The inferior transverse ligament of the tibiofibular syndesmosis is a connective tissue structure in the lower leg that lies in front of the posterior ligament. It is a strong, thick band, of yellowish fibers which passes transversely across the back of the ankle joint, from the lateral malleolus to the posterior border of the articular surface of the tibia, almost as far as its malleolar process.
This ligament projects below the margin of the bones, and forms part of the articulating surface for the talus.
It is not included in Terminologia Anatomica, but it still appears in some anatomy textbooks. |
https://en.wikipedia.org/wiki/Tinseltown%20in%20the%20Rain | "Tinseltown in the Rain" is a song by Scottish pop band The Blue Nile. It was released as the second single from their 1984 debut album A Walk Across the Rooftops. The song was written and produced by lead singer Paul Buchanan and bassist Robert Bell. It has been described as an "ode to the city" of Glasgow.
The song reached No. 87 on the UK Singles Chart. It was a bigger hit in the Netherlands, peaking at No. 28 on the Dutch Top 40.
The song, or more specifically its rhythm section, was also used as the theme to the TV series Tinsel Town, set in Glasgow, which aired first on BBC Choice in 2000 and was then repeated on BBC 2 the following year. An abridged second and final series aired in 2002. Also prominently used in the opening theme was the line "ah-oh Tinsel Town", taken from the end of the middle 8 section of the original song.
"Tinseltown in the Rain" remains a beloved staple in the Netherlands, regularly appearing on NPO Radio 2's annual Top 2000 songs of all time countdown. It reached its highest position at No. 445 in 2003.
Background
"Tinseltown in the Rain" helped The Blue Nile land a record contract. Audio engineer Calum Malcolm had been given money to record music by RSO Records and was also friends with Ivor Tiefenbrun, the founder of audio equipment company Linn Products. Malcolm played an early demo of "Tinseltown in the Rain" when staff members asked him to test out the speakers, which impressed Linn Records enough that the label contacted The Blue Nile to offer them a record deal.
The song has been described as A Walk Across the Rooftops' most upbeat track. In a 2013 interview with Dutch television program Top 2000 a Gogo, Paul Buchanan stated that "Tinseltown is a metaphor. It’s whatever your dream is, whatever your Tinseltown was, whatever you lost. And I think in our minds what was interesting to us was the kind of universal nature of cities… Glasgow’s obviously not the same scale as New York, but if you just shrunk it down to a |
https://en.wikipedia.org/wiki/REFInd | rEFInd is a boot manager for UEFI and EFI-based machines. It can be used to boot multiple operating systems that are installed on a single non-volatile device. It also provides a way to launch UEFI applications.
It was forked from discontinued rEFIt in 2012, with 0.2.0 as its first release.
rEFInd supports the x86, x86-64, and AArch64 architectures.
Features
rEFInd has several features:
Automatic operating system detection.
Customisable OS launch options.
Graphical or text mode, with customisable themes.
Mac-specific features, including spoofing the boot process to enable secondary video chipsets on some Macs.
Linux-specific features, including autodetecting EFI stub loaders to boot Linux kernels directly, and using /etc/fstab in lieu of the rEFInd configuration file for the boot order (a manual configuration stanza is sketched after this list).
Support for Secure Boot.
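Beyond autodetection, boot entries can also be declared manually in refind.conf. A hedged sketch of such a stanza (the paths, volume layout, and kernel options are assumptions for illustration):

timeout 5
menuentry "Linux" {
    icon    /EFI/refind/icons/os_linux.png
    loader  /boot/vmlinuz-linux
    initrd  /boot/initramfs-linux.img
    options "root=/dev/sda2 rw quiet"
}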
Adoption
rEFInd is the default Unified Extensible Firmware Interface (UEFI) boot manager for TrueOS.
rEFInd is included in official repositories of major Linux distributions.
Development
GNU-EFI and TianoCore are supported as the main development platforms for writing binary UEFI applications in C that launch directly from the rEFInd GUI menu. Typical purposes of an EFI application are fixing boot problems and programmatically modifying settings within the UEFI environment, which on a personal computer (PC) without UEFI would otherwise be performed from within the BIOS.
rEFInd can be built with either GNU-EFI or TianoCore EDK2/UDK.
Fork
RefindPlus is a fork of rEFInd that adds several features and improvements for Mac devices, specifically the MacPro3,1 and MacPro5,1 and equivalent Xserve models.
See also
GNU GRUB - Another boot loader for Unix-like systems
Comparison of boot loaders |
https://en.wikipedia.org/wiki/Tacit%20programming | Tacit programming, also called point-free style, is a programming paradigm in which function definitions do not identify the arguments (or "points") on which they operate. Instead the definitions merely compose other functions, among which are combinators that manipulate the arguments. Tacit programming is of theoretical interest, because the strict use of composition results in programs that are well adapted for equational reasoning. It is also the natural style of certain programming languages, including APL and its derivatives, and concatenative languages such as Forth. The lack of argument naming gives point-free style a reputation of being unnecessarily obscure, hence the epithet "pointless style".
Unix scripting uses the paradigm with pipes.
Examples
Python
Tacit programming can be illustrated with the following Python code. A sequence of operations such as the following:
def example(x):
return baz(bar(foo(x)))
... can be written in point-free style as the composition of a sequence of functions, without parameters:
from functools import partial, reduce
def compose(*fns):
return partial(reduce, lambda v, fn: fn(v), fns)
example = compose(foo, bar, baz)
For a more complex example, the Haskell code p = ((.) f) . g can be translated as:
p = partial(compose, partial(compose, f), g)
Functional programming
A simple example (in Haskell) is a program which computes the sum of a list of numbers. We can define the sum function recursively using a pointed style (cf. value-level programming) as:
sum (x:xs) = x + sum xs
sum [] = 0
However, using a fold we can replace this with:
sum xs = foldr (+) 0 xs
And then the argument is not needed, so this simplifies to
sum = foldr (+) 0
which is point-free.
Another example uses function composition:
p x y z = f (g x y) z
The following Haskell-like pseudo-code exposes how to reduce a function definition to its point-free equivalent:
p = \x -> \y -> \z -> f (g x y) z
= \x -> \y -> f (g x y)
= \x -> \y -> (f . (g x)) y
= \x |
https://en.wikipedia.org/wiki/Futures%20and%20promises | In computer science, future, promise, delay, and deferred refer to constructs used for synchronizing program execution in some concurrent programming languages. They describe an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.
The term promise was proposed in 1976 by Daniel P. Friedman and David Wise, and Peter Hibbard called it eventual. A somewhat similar concept, future, was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.
The terms future, promise, delay, and deferred are often used interchangeably, although some differences in usage between future and promise are treated below. Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future. Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, the promise is the function that sets the value – essentially the return value (future) of an asynchronous function (promise). Setting the value of a future is also called resolving, fulfilling, or binding it.
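The read/write split is visible in Python's standard library, where concurrent.futures.Future exposes result() to readers while the code holding the right to call set_result() plays the promise role. A minimal sketch (creating and resolving a Future by hand like this is normally left to executors; it is shown here only to illustrate the two roles):

import threading
from concurrent.futures import Future

fut: Future = Future()

def producer():
    fut.set_result(42)       # resolve/fulfil the future (single assignment)

threading.Thread(target=producer).start()
print(fut.result())          # read-only view: blocks until the value is set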
Applications
Futures and promises originated in functional programming and related paradigms (such as logic programming) to decouple a value (a future) from how it was computed (a promise), allowing the computation to be done more flexibly, notably by parallelizing it. Later, it found use in distributed computing, in reducing the latency from communication round trips. Later still, it gained more use by allowing writing asynchronous programs in direct style, rather than in continuation-passing style.
Implicit vs. explicit
Use of futures may be implicit (any use of t |
https://en.wikipedia.org/wiki/Bulldozer%20%28microarchitecture%29 | The AMD Bulldozer Family 15h is a microprocessor microarchitecture for the FX and Opteron line of processors, developed by AMD for the desktop and server markets. Bulldozer is the codename for this family of microarchitectures. It was released on October 12, 2011, as the successor to the K10 microarchitecture.
Bulldozer is designed from scratch, not a development of earlier processors. The core is specifically aimed at computing products with TDPs of 10 to 125 watts. AMD claims dramatic performance-per-watt efficiency improvements in high-performance computing (HPC) applications with Bulldozer cores.
The Bulldozer cores support most of the instruction sets implemented by Intel processors (Sandy Bridge) available at its introduction (including SSE4.1, SSE4.2, AES, CLMUL, and AVX) as well as new instruction sets proposed by AMD; ABM, XOP, FMA4 and F16C. Only Bulldozer GEN4 (Excavator) supports AVX2 instruction sets.
Overview
According to AMD, Bulldozer-based CPUs are based on GlobalFoundries' 32 nm Silicon on insulator (SOI) process technology and reuse the approach of DEC for multitasking computer performance, with the argument that, according to press notes, it "balances dedicated and shared computer resources to provide a highly compact, high units count design that is easily replicated on a chip for performance scaling." In other words, by eliminating some of the "redundant" elements that naturally creep into multicore designs, AMD hoped to take better advantage of its hardware capabilities, while using less power.
Bulldozer-based implementations built on 32 nm SOI with HKMG arrived in October 2011 for both servers and desktops. The server segment included the dual-chip (16-core) Opteron processor codenamed Interlagos (for Socket G34) and the single-chip (4-, 6- or 8-core) Valencia (for Socket C32), while Zambezi (4, 6 and 8 cores) targeted desktops on Socket AM3+.
Bulldozer is the first major redesign of AMD’s processor architecture since 2003, when the |
https://en.wikipedia.org/wiki/Architecture%20of%20Windows%209x | The architecture of the Windows 9x series of operating systems refers to the kernel that lies at the heart of Windows 9x. Its architecture is monolithic.
The basic code is similar in function to MS-DOS. As a 16-/32-bit hybrid, it requires support from MS-DOS to operate.
Critical files
Windows 95 boots using the following set of files:
32-bit shell and command line interpreter:
SHELL.DLL and SHELL32.DLL – Shell API
EXPLORER.EXE – Windows shell and file manager
COMMAND.COM – command line shell executable
Windows 95 core:
KERNEL32.DLL and KRNL386.EXE – Windows API for Windows resources
ADVAPI32.DLL – functionality additional to the kernel, including functions for the Windows registry and shutdown and restart functions
GDI32.DLL and GDI.EXE - Graphic device interface
USER32.DLL and USER.EXE - GUI implementation
COMMCTRL.DLL and COMCTL32.DLL - Common controls (user interface)
DDEML.DLL – the Dynamic Data Exchange Management Library (DDEML) provides an interface that simplifies the task of adding DDE capability to an application
MSGSRV32.EXE – acts as a 32-bit message server and will never appear in the Windows task list
WIN.COM - responsible for loading the GUI and the Windows portion of the system
Registry and other configuration files:
SYSTEM.DAT, USER.DAT - contains the Windows Registry
MSDOS.SYS - contains some low-level boot settings and resources such as disabling double-buffering and the GUI logo
WIN.INI and SYSTEM.INI - configuration files from Windows 3.1, processed in Windows 9x also
Virtual Machine Manager and configuration manager:
VMM32.VXD - Virtual machine manager and default drivers. It takes over from io.sys as kernel
Installable file System Manager:
IFSHLP.SYS - enables Windows to make direct file system calls bypassing MS-DOS methods
IFSMGR.VXD - 32-bit driver for the installable file system
IOS.VXD – I/O Supervisor that controls and manages all protected-mode file system and block device drivers
MPREXE.EXE, MPRSERV.DLL and MPR.DLL – Multiple Provider |
https://en.wikipedia.org/wiki/Glugging | Glugging (also referred to as "the glug-glug process") is the physical phenomenon which occurs when a liquid is poured rapidly from a vessel with a narrow opening, such as a bottle. It is a facet of fluid dynamics.
As liquid is poured from a bottle, the air pressure in the bottle is lowered, and air at higher pressure from outside the bottle is forced into the bottle, in the form of a bubble, impeding the flow of liquid. Once the bubble enters, more liquid escapes, and the process is repeated. The reciprocal action of glugging creates a rhythmic sound. The English word "glug" is onomatopoeic, describing this sound. Onomatopoeias in other languages include (German).
Academic papers have been written about the physics of glugging, and about the impact of glugging sounds on consumers' perception of products such as wine. Research into glugging has been done using high-speed photography.
Factors which affect glugging are the viscosity of the liquid, its carbonation, the size and shape of the container's neck and its opening (collectively referred to as "bottle geometry"), the angle at which the container is held, and the ratio of air to liquid in the bottle (which means that the rate and the sound of the glugging changes as the bottle empties). |
https://en.wikipedia.org/wiki/Mathematical%20Medicine%20and%20Biology | Mathematical Medicine and Biology is an academic journal published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. The journal publishes articles addressing topics in medicine and biology with mathematical content.
Impact factor
Mathematical Medicine and Biology received an impact factor of 1.854 in 2020.
Editors
The editors-in-chief are Oliver E. Jensen (University of Manchester), John R. King (University of Nottingham), and James P. Keener (University of Utah). |