https://en.wikipedia.org/wiki/Gallium%20indium%20arsenide%20antimonide%20phosphide
|
Gallium indium arsenide antimonide phosphide (GaInAsSbP) is a semiconductor material.
Research has shown that GaInAsSbP can be used in the manufacture of mid-infrared light-emitting diodes and thermophotovoltaic cells.
GaInAsSbP layers can be grown by heteroepitaxy on indium arsenide, gallium antimonide and other materials. The exact composition can be tuned in order to make it lattice matched. The presence of five elements in the alloy allows extra degrees of freedom, making it possible to fix the lattice constant while varying the bandgap. E.g. Ga0.92In0.08P0.05As0.08Sb0.87 is lattice matched to InAs.
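As a rough illustration of how composition fixes the lattice constant, one can interpolate the lattice constants of the constituent III-V binaries with Vegard's law (a linear approximation that neglects bowing; the sketch below is illustrative and not from the article, and the binary values are standard room-temperature figures):

```python
# Sketch: Vegard's-law estimate of the lattice constant of
# Ga(x)In(1-x)P(y)As(z)Sb(1-y-z), linearly interpolating the lattice
# constants of the six constituent III-V binaries (angstroms).
BINARIES = {
    ("Ga", "P"): 5.451, ("Ga", "As"): 5.653, ("Ga", "Sb"): 6.096,
    ("In", "P"): 5.869, ("In", "As"): 6.058, ("In", "Sb"): 6.479,
}

def vegard(x: float, y: float, z: float) -> float:
    """Estimated lattice constant of Ga(x)In(1-x)P(y)As(z)Sb(1-y-z)."""
    group_iii = {"Ga": x, "In": 1 - x}
    group_v = {"P": y, "As": z, "Sb": 1 - y - z}
    return sum(c3 * c5 * BINARIES[(e3, e5)]
               for e3, c3 in group_iii.items()
               for e5, c5 in group_v.items())

# The composition quoted above, Ga0.92In0.08P0.05As0.08Sb0.87:
print(vegard(0.92, 0.05, 0.08))  # ~6.06 A, close to InAs (~6.058 A)
```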
See also
Aluminium gallium phosphide
Aluminium gallium indium phosphide
Indium gallium arsenide phosphide
Indium arsenide antimonide phosphide
Indium gallium arsenide antimonide
|
https://en.wikipedia.org/wiki/Plasma%20Bigscreen
|
Plasma Bigscreen is a software project from KDE that provides an interface optimized for smart TVs and for computers, such as the Raspberry Pi, that can be connected to large displays.
Software
The desktop environment is based on KDE Plasma 5. Voice control is provided through integration with Mycroft AI. Plasma Bigscreen supports HDMI-CEC.
Availability
Plasma Bigscreen is currently available as a KDE Neon-based image, or installable on postmarketOS.
See also
Plasma Mobile
Mycroft (software)
|
https://en.wikipedia.org/wiki/Hamilton%20Ecological%20District
|
Hamilton Ecological District is part of the Waikato Ecological Region in New Zealand's North Island. It occupies the Hamilton basin and surrounding foothills, and has been heavily modified, with less than two percent of its indigenous vegetation remaining. The area has been studied extensively, including research on restoration ecology.
C. Michael Hogan has classified the undisturbed portions of the woodland area as a beech and podocarp forest with associate understory ferns being Icarus filiformis, Asplenium flaccidum, Doodia media, Hymenophyllum demissum, Zealandia pustulata and Dendroconche scandens, and some prominent associate shrubs being Olearia ranii and Alseuosmia quercifolia.
See also
Metrosideros
|
https://en.wikipedia.org/wiki/Balmer%20jump
|
The Balmer jump, Balmer discontinuity, or Balmer break is the difference of intensity of the stellar continuum spectrum on either side of the limit of the Balmer series of hydrogen, at approximately 364.5 nm. It is caused by electrons being completely ionized directly from the second energy level of a hydrogen atom (bound-free absorption), which creates a continuum absorption at wavelengths shorter than 364.5 nm.
In some cases the Balmer discontinuity can show continuum emission, usually when the Balmer lines themselves are strongly in emission. Other hydrogen spectral series also show bound-free absorption and hence a continuum discontinuity, but the Balmer jump in the near UV has been the most observed.
The strength of the continuum absorption, and hence the size of the Balmer jump, depends on temperature and density in the region responsible for the absorption. At cooler stellar temperatures, the density most strongly affects the strength of the discontinuity and this can be used to classify stars on the basis of their surface gravity and hence luminosity. This effect is strongest in A class stars, but in hotter stars temperature has a much larger effect on the Balmer jump than surface gravity.
See also
Lyman-break galaxy
|
https://en.wikipedia.org/wiki/Ernest%20Preston%20Lane
|
Ernest Preston Lane (28 November 1886, Russellville, Tennessee – October 1969) was an American mathematician, specializing in differential geometry.
Education and career
In 1909, he received his bachelor's degree from the University of Tennessee. He went on to receive his master's degree from the University of Virginia in 1913. He taught mathematics at several academic institutions before receiving his PhD in 1918 from the University of Chicago under Ernest Julius Wilczynski, with the thesis Conjugate systems with indeterminate axis curves. Lane was an assistant professor at the University of Wisconsin from 1919 to 1923. At the University of Chicago he was an assistant professor from 1923 to 1927, an associate professor from 1927 to 1928, and a full professor from 1928 to 1952, retiring in 1952 as professor emeritus. He was the chair of the University of Chicago's mathematics department from 1941 to 1946.
Lane was a Guggenheim Fellow for the academic year 1926–1927. His doctoral students include Alice T. Schafer, and Sun Guangyuan.
Selected publications
|
https://en.wikipedia.org/wiki/D-Grid
|
The D-Grid Initiative (German Grid Initiative) was a government project to fund computer infrastructure for education and research (e-Science) in Germany. It was based on the concept of grid computing.
D-Grid started on September 1, 2005, with six community projects and an integration project (DGI), as well as several partner projects.
Integration project
The D-Grid integration project intended to integrate community projects. The D-Grid integration project acted as a service provider for the science community in Germany. The project office is located at the Institute for Scientific Computing (IWR) at Forschungszentrum Karlsruhe. The resources to ensure a sustainable Grid infrastructure are provided by four work packages:
D-Grid Base-Software: The major task of this work package is to provide several different middleware packages. These are the Globus Toolkit, UNICORE, LCG/gLite, GridSphere and the Grid Application Toolkit (GAT). The community projects linked together in the D-Grid integration project are supported during the installation and operation of the Base-Software and, where needed and possible, its customisation.
Deployment and operation of the D-Grid infrastructure: Work package 2 built up a Core D-Grid, which was used as a prototype to test the operational functionality of the system. This work package also deals with monitoring, accounting and billing.
Networks and Security: The network infrastructure in D-Grid is based on the DFN Wissenschaftsnetz X-WiN. Work package 3 provides extensions to the existing network infrastructure according to the needs of the Grid middleware used in D-Grid. Further tasks are to build an AA (authentication and authorisation) infrastructure in D-Grid, develop firewall concepts for Grid environments and set up Grid-specific CERT services.
D-Grid project office: The work package is responsible for the integration of community projects into one common D-Grid platform. Work package 4 also deals with sustainability.
Communities
Six community projects participated in the D-Grid In
|
https://en.wikipedia.org/wiki/Zuckerman%20functor
|
In mathematics, a Zuckerman functor is used to construct representations of real reductive Lie groups from representations of Levi subgroups. They were introduced by Gregg Zuckerman (1978). The Bernstein functor is closely related.
Notation and terminology
G is a connected reductive real affine algebraic group (for simplicity; the theory works for more general groups), and g is the Lie algebra of G. K is a maximal compact subgroup of G.
L is a Levi subgroup of G, the centralizer of a compact connected abelian subgroup, and l is the Lie algebra of L.
A representation of K is called K-finite if every vector is contained in a finite-dimensional representation of K. Denote by WK the subspace of K-finite vectors of a representation W of K.
A (g,K)-module is a vector space with compatible actions of g and K, on which the action of K is K-finite.
R(g,K) is the Hecke algebra of G of all distributions on G with support in K that are left and right K-finite. This is a ring which does not have an identity but has an approximate identity, and the approximately unital R(g,K)-modules are the same as (g,K)-modules.
Definition
The Zuckerman functor Γ is defined by
$$\Gamma^{\mathfrak{g},K}_{\mathfrak{g},L\cap K}(W) = \operatorname{Hom}_{R(\mathfrak{g},L\cap K)}(R(\mathfrak{g},K), W)_{K\text{-finite}}$$
and the Bernstein functor Π is defined by
$$\Pi^{\mathfrak{g},K}_{\mathfrak{g},L\cap K}(W) = R(\mathfrak{g},K) \otimes_{R(\mathfrak{g},L\cap K)} W.$$
|
https://en.wikipedia.org/wiki/Haline%20contraction%20coefficient
|
The haline contraction coefficient, abbreviated as β, is a coefficient that describes the change in ocean density due to a salinity change, while the potential temperature and the pressure are kept constant. It is a parameter in the equation of state (EOS) of the ocean. β is also described as the saline contraction coefficient and is measured in kg/g in the EOS that describes the ocean, an example being TEOS-10, the Thermodynamic Equation of Seawater 2010.
β is the salinity variant of the thermal expansion coefficient α, where the density changes due to a change in temperature instead of salinity. With these two coefficients, the density ratio can be calculated. This determines the contribution of the temperature and salinity to the density of a water parcel.
β is called a contraction coefficient because water contracts (becomes denser) when salinity increases; by contrast, when the temperature increases, water expands and becomes less dense.
Definition
The haline contraction coefficient is defined as:
$$\beta = \frac{1}{\rho}\left(\frac{\partial \rho}{\partial S}\right)_{\Theta,\, p},$$
where ρ is the density of a water parcel in the ocean and S is the absolute salinity. The subscripts Θ and p indicate that β is defined at constant potential temperature Θ and constant pressure p. The haline contraction coefficient is constant when a water parcel moves adiabatically along the isobars.
Application
The amount that density is influenced by a change in salinity or temperature can be computed from the density formula that is derived from the thermal wind balance.
The Brunt–Väisälä frequency can also be defined when β is known, in combination with α, Θ and S. This frequency is a measure of the stratification of a fluid column and is defined over depth as:
$$N^2 = g\left(\alpha \frac{\partial \Theta}{\partial z} - \beta \frac{\partial S}{\partial z}\right).$$
The direction of the mixing and whether the mixing is temperature- or salinity-driven can be determined from the density difference and the Brunt-Väisälä frequency.
Computation
β can be computed when the conserved temperature, the absolute salinity and the pressure are known from a water parcel. Python offers the G
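A minimal sketch of this computation, assuming the GSW-Python implementation of TEOS-10 (the gsw package; the input values are illustrative only):

```python
# Sketch: haline contraction coefficient via GSW-Python (TEOS-10).
# Assumes `pip install gsw`; the input values below are illustrative.
import gsw

SA = 35.0   # absolute salinity [g/kg]
CT = 10.0   # conservative (conserved) temperature [degrees Celsius]
p = 100.0   # sea pressure [dbar]

beta = gsw.beta(SA, CT, p)  # haline contraction coefficient [kg/g]
print(beta)
```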
|
https://en.wikipedia.org/wiki/Gaussian%20curvature
|
In differential geometry, the Gaussian curvature or Gauss curvature $K$ of a smooth surface in three-dimensional space at a point is the product of the principal curvatures, $\kappa_1$ and $\kappa_2$, at the given point:
$$K = \kappa_1 \kappa_2.$$
The Gaussian radius of curvature is the reciprocal of $K$.
For example, a sphere of radius $r$ has Gaussian curvature $1/r^2$ everywhere, and a flat plane and a cylinder have Gaussian curvature zero everywhere. The Gaussian curvature can also be negative, as in the case of a hyperboloid or the inside of a torus.
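As a concrete check of the sphere example, one can compute $K$ from the first and second fundamental forms of a standard parametrization (a sympy sketch; the parametrization and symbol names are illustrative):

```python
# Sketch: verify that a sphere of radius r has Gaussian curvature K = 1/r^2.
import sympy as sp

u, v, r = sp.symbols("u v r", positive=True)
X = sp.Matrix([r*sp.sin(u)*sp.cos(v), r*sp.sin(u)*sp.sin(v), r*sp.cos(u)])

Xu, Xv = X.diff(u), X.diff(v)
n = Xu.cross(Xv)                              # unnormalized normal; |n|^2 = EG - F^2

E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)  # first fundamental form
L, M, N = X.diff(u, 2).dot(n), sp.diff(X, u, v).dot(n), X.diff(v, 2).dot(n)

# With the unnormalized normal, K = (LN - M^2) / (EG - F^2)^2.
K = sp.simplify((L*N - M**2) / (E*G - F**2)**2)
print(K)  # 1/r**2
```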
Gaussian curvature is an intrinsic measure of curvature, depending only on distances that are measured “within” or along the surface, not on the way it is isometrically embedded in Euclidean space. This is the content of the Theorema egregium.
Gaussian curvature is named after Carl Friedrich Gauss, who published the Theorema egregium in 1827.
Informal definition
At any point on a surface, we can find a normal vector that is at right angles to the surface; planes containing the normal vector are called normal planes. The intersection of a normal plane and the surface will form a curve called a normal section and the curvature of this curve is the normal curvature. For most points on most “smooth” surfaces, different normal sections will have different curvatures; the maximum and minimum values of these are called the principal curvatures, call these $\kappa_1$, $\kappa_2$. The Gaussian curvature is the product of the two principal curvatures $K = \kappa_1 \kappa_2$.
The sign of the Gaussian curvature can be used to characterise the surface.
If both principal curvatures are of the same sign: $\kappa_1\kappa_2 > 0$, then the Gaussian curvature is positive and the surface is said to have an elliptic point. At such points, the surface will be dome like, locally lying on one side of its tangent plane. All sectional curvatures will have the same sign.
If the principal curvatures have different signs: $\kappa_1\kappa_2 < 0$, then the Gaussian curvature is negative and the surface is said to have a hyperbolic or saddle point. At such points, the surface will be sa
|
https://en.wikipedia.org/wiki/Seventh%20power
|
In arithmetic and algebra the seventh power of a number n is the result of multiplying seven instances of n together. So:
$$n^7 = n \times n \times n \times n \times n \times n \times n.$$
Seventh powers are also formed by multiplying a number by its sixth power, the square of a number by its fifth power, or the cube of a number by its fourth power.
The sequence of seventh powers of integers is:
0, 1, 128, 2187, 16384, 78125, 279936, 823543, 2097152, 4782969, 10000000, 19487171, 35831808, 62748517, 105413504, 170859375, 268435456, 410338673, 612220032, 893871739, 1280000000, 1801088541, 2494357888, 3404825447, 4586471424, 6103515625, 8031810176, ...
In the archaic notation of Robert Recorde, the seventh power of a number was called the "second sursolid".
Properties
Leonard Eugene Dickson studied generalizations of Waring's problem for seventh powers, showing that every non-negative integer can be represented as a sum of at most 258 non-negative seventh powers ($1^7$ is 1, and $2^7$ is 128). All but finitely many positive integers can be expressed more simply as the sum of at most 46 seventh powers. If powers of negative integers are allowed, only 12 powers are required.
The smallest number that can be represented in two different ways as a sum of four positive seventh powers is 2056364173794800.
The smallest seventh power that can be represented as a sum of eight distinct seventh powers is:
$$102^7 = 12^7 + 35^7 + 53^7 + 58^7 + 64^7 + 83^7 + 85^7 + 90^7.$$
The two known examples of a seventh power expressible as the sum of seven seventh powers are
$$568^7 = 127^7 + 258^7 + 266^7 + 413^7 + 430^7 + 439^7 + 525^7 \quad \text{(M. Dodrill, 1999)}$$
and
$$626^7 = 625^7 + 309^7 + 258^7 + 255^7 + 158^7 + 148^7 + 91^7 \quad \text{(Maurice Blondot, 11/14/2000)};$$
any example with fewer terms in the sum would be a counterexample to Euler's sum of powers conjecture, which is currently only known to be false for the powers 4 and 5.
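Identities like these are easy to sanity-check by exact integer arithmetic; a minimal sketch (using the identities as quoted above):

```python
# Sketch: check a claimed equal-sums-of-seventh-powers identity exactly.
def seventh_power_identity(lhs: int, terms: list) -> bool:
    """True if lhs**7 equals the sum of t**7 over terms (exact integers)."""
    return lhs**7 == sum(t**7 for t in terms)

# Dodrill's identity, as quoted above:
print(seventh_power_identity(568, [127, 258, 266, 413, 430, 439, 525]))
```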
See also
Eighth power
Sixth power
Fifth power (algebra)
Fourth power
Cube (algebra)
Square (algebra)
|
https://en.wikipedia.org/wiki/Mental%20age
|
Mental age is a concept related to intelligence. It looks at how a specific individual, at a specific age, performs intellectually, compared to average intellectual performance for that individual's actual chronological age (i.e. time elapsed since birth). The intellectual performance is based on performance in tests and live assessments by a psychologist. The score achieved by the individual is compared to the median scores at various ages, and the mental age (x, say) is derived such that the individual's score equates to the average score at age x.
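The derivation described above amounts to inverting a norm table of scores by age; a minimal sketch with purely illustrative numbers (real norms come from standardized test manuals):

```python
# Sketch: derive a mental age by interpolating median scores by age.
import numpy as np

ages = np.array([4, 5, 6, 7, 8, 9, 10])                 # chronological ages
median_scores = np.array([12, 18, 25, 33, 40, 46, 51])  # illustrative norms

def mental_age(raw_score: float) -> float:
    """Age whose median score matches the raw score (linear interpolation)."""
    return float(np.interp(raw_score, median_scores, ages))

print(mental_age(37))  # between 7 and 8 years
```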
However, mental age depends on what kind of intelligence is measured. For instance, a child's intellectual age can be average for their actual age, while the same child's emotional intelligence is immature for their physical age. Psychologists often remark that girls are more emotionally mature than boys at around the age of puberty. Also, an intellectually gifted six-year-old can have the emotional maturity of a three-year-old. Mental age can be considered a controversial concept.
History
Early theories
During much of the 19th century, theories of intelligence focused on measuring the size of human skulls. Anthropologists well known for their attempts to correlate cranial size and capacity with intellectual potential were Samuel Morton and Paul Broca.
Modern theories of intelligence began to emerge along with experimental psychology, at a time when much of psychology was moving from a philosophical to a more biological and medical basis. In 1890, James Cattell published what some consider the first "mental test". Cattell was more focused on heredity than on environment. This spurred much of the debate about the nature of intelligence.
Mental age was first defined by the French psychologist Alfred Binet, who introduced the intelligence test in 1905, with the assistance of Theodore Simon. Binet's experiments on French schoolchildren laid the framework for future experiments into
|
https://en.wikipedia.org/wiki/Kathaleen%20Land
|
Bonnie Kathaleen Land (21 November 1918 – 7 October 2012) was a human computer and mathematician at NASA's Langley facility. The 2016 movie Hidden Figures, which brought awareness to this early success within the NASA space program, was based on a book written by Land's former Sunday school student, and Land served as one of the first interviewees during research for the book. Land was called the "inspiration behind, catalyst for, and gateway to" the creation of Hidden Figures.
She was married to Stanley Land and had three daughters. She died on 7 October 2012.
Biography
Bonnie Kathaleen Pleasants was born on 21 November 1918 in Bridgewater, Virginia. She married Stanley Land on 1 December 1941 in Newport News, Virginia, and they had three daughters.
She worked as a human computer and mathematician at NASA's Langley Research Center facility. When Margot Lee Shetterly, the author of Hidden Figures, was a child, Land taught her in Sunday school following Land's retirement from NASA. Land was one of the first people Shetterly interviewed when she began researching for the Hidden Figures book, and Land provided several of the names of the human computers who were featured in the book and film. She is described as "the inspiration behind, catalyst for, and gateway to Hidden Figures".
Land died on 7 October 2012 in Hampton, Virginia.
See also
Katherine Johnson
Dorothy Vaughan
Mary Jackson
|
https://en.wikipedia.org/wiki/MecA
|
mecA is a gene found in bacterial cells which allows them to be resistant to antibiotics such as methicillin, penicillin and other penicillin-like antibiotics.
The bacterial strain most commonly known to carry mecA is methicillin-resistant Staphylococcus aureus (MRSA). In Staphylococcus species, mecA is spread through the staphylococcal cassette chromosome (SCCmec) genetic element. Resistant strains cause many hospital-acquired infections.
mecA encodes the protein PBP2A (penicillin-binding protein 2A), a transpeptidase that helps form the bacterial cell wall. PBP2A has a lower affinity for beta-lactam antibiotics such as methicillin and penicillin than DD-transpeptidase does, so it does not bind to the ringlike structure of penicillin-like antibiotics. This enables transpeptidase activity in the presence of beta-lactams, preventing them from inhibiting cell wall synthesis. The bacteria can then replicate as normal.
History
Methicillin resistance first emerged in hospitals in Staphylococcus aureus that was more aggressive and failed to respond to methicillin treatment. The prevalence of this strain, MRSA, continued to increase, reaching up to 60% of British hospitals, and has spread throughout the world and beyond hospital settings. Researchers traced the source of this resistance to the mecA gene acquired through a mobile genetic element, staphylococcal cassette chromosome mec, present in all known MRSA strains. On February 27, 2017, the World Health Organization (WHO) put MRSA on their list of priority bacterial resistant pathogens and made it a high priority target for further research and treatment development.
Detection
Successful treatment of MRSA begins with the detection of mecA, usually through polymerase chain reaction (PCR). Alternative methods include enzymatic detection PCR, which labels the PCR with enzymes detectable by immunoabsorbant assays. This takes less time and does not need gel electrophoresis, which can be costly, tedious, and unpredictabl
|
https://en.wikipedia.org/wiki/A%20Course%20of%20Modern%20Analysis
|
A Course of Modern Analysis: an introduction to the general theory of infinite processes and of analytic functions; with an account of the principal transcendental functions (colloquially known as Whittaker and Watson) is a landmark textbook on mathematical analysis written by Edmund T. Whittaker and George N. Watson, first published by Cambridge University Press in 1902. The first edition was Whittaker's alone, but later editions were co-authored with Watson.
History
Its first, second, third, and fourth editions were published in 1902, 1915, 1920, and 1927, respectively. Since then, it has continuously been reprinted and is still in print today. A revised, expanded and digitally reset fifth edition, edited by Victor H. Moll, was published in 2021.
The book is notable for being the standard reference and textbook for a generation of Cambridge mathematicians, including John E. Littlewood and Godfrey H. Hardy. Mary L. Cartwright studied it as preparation for her final honours on the advice of fellow student Vernon C. Morton, later Professor of Mathematics at Aberystwyth University. But its reach extended much further than the Cambridge school; André Weil, in his obituary of the French mathematician Jean Delsarte, noted that Delsarte always had a copy on his desk. In 1941 the book was included among a "selected list" of mathematical analysis books for use in universities, in an article published for that purpose in the American Mathematical Monthly.
Notable features
Some idiosyncratic but interesting problems from an older era of the Cambridge Mathematical Tripos are in the exercises.
The book was one of the earliest to use decimal numbering for its sections, an innovation the authors attribute to Giuseppe Peano.
Contents
Below are the contents of the fourth edition:
Part I. The Process of Analysis
Part II. The Transcendental Functions
Reception
Reviews of the first edition
George B. Mathews, in a 1903 review article published in The Mathematical Gazette opens by saying t
|
https://en.wikipedia.org/wiki/Sequence%20alignment
|
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences. Aligned sequences of nucleotide or amino acid residues are typically represented as rows within a matrix. Gaps are inserted between the residues so that identical or similar characters are aligned in successive columns.
Sequence alignments are also used for non-biological sequences, such as calculating the distance cost between strings in a natural language or in financial data.
Interpretation
If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as indels (that is, insertion or deletion mutations) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggests that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role.
Alignment methods
Very short or very similar sequences can be aligned by hand. However, most interesting problems require the alignment of lengthy, highly variable or extremely numerous sequences that cannot be aligned solely by human effort. Instead, human knowledge is applied in constructing algorithms to produce high-quality sequence alignments, and occasionally in adjusting the final results
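As a minimal sketch of what such algorithms do, the classic Needleman–Wunsch dynamic program computes an optimal global alignment score (illustrative match/mismatch/gap scores; real tools use substitution matrices and affine gap penalties):

```python
# Sketch: global alignment score via Needleman-Wunsch dynamic programming.
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap                 # leading gaps in b
    for j in range(1, m + 1):
        score[0][j] = j * gap                 # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # score for two toy sequences
```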
|
https://en.wikipedia.org/wiki/X-machine
|
The X-machine (XM) is a theoretical model of computation introduced by Samuel Eilenberg in 1974 (Automata, Languages and Machines, Vol. A, Academic Press, London).
The X in "X-machine" represents the fundamental data type on which the machine operates; for example, a machine that operates on databases (objects of type database) would be a database-machine. The X-machine model is structurally the same as the finite-state machine, except that the symbols used to label the machine's transitions denote relations of type X→X. Crossing a transition is equivalent to applying the relation that labels it (computing a set of changes to the data type X), and traversing a path in the machine corresponds to applying all the associated relations, one after the other.
Original theory
Eilenberg's original X-machine was a completely general theoretical model of computation (subsuming the Turing machine, for example), which admitted deterministic, non-deterministic and non-terminating computations. His seminal work published many variants of the basic X-machine model, each of which generalized the finite-state machine in a slightly different way.
In the most general model, an X-machine is essentially a "machine for manipulating objects of type X". Suppose that X is some datatype, called the fundamental datatype, and that Φ is a set of (partial) relations φ: X → X. An X-machine is a finite-state machine whose arrows are labelled by relations in Φ. In any given state, one or more transitions may be enabled if the domain of the associated relation φi accepts (a subset of) the current values stored in X. In each cycle, all enabled transitions are assumed to be taken. Each recognised path through the machine generates a list φ1 ... φn of relations. We call the composition φ1 o ... o φn of these relations the path relation corresponding to that path. The behaviour of the X-machine is defined to be the union of all the behaviours compu
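A minimal sketch of this idea, written as a deterministic simplification (the datatype, relations, and machine below are hypothetical illustrations, not from Eilenberg's text):

```python
# Sketch: an X-machine with fundamental datatype X = int; transitions are
# labeled by (partial) functions on X, applied as the machine is traversed.
def increment(x):                # a total relation on X
    return x + 1

def halve(x):                    # a partial relation: defined only for even x
    return x // 2 if x % 2 == 0 else None

# state -> list of (relation, next_state)
transitions = {
    "start": [(increment, "mid")],
    "mid":   [(halve, "done")],
    "done":  [],
}

def run(state, x):
    """Follow enabled transitions, applying each relation to the data x.
    Takes the first enabled transition; Eilenberg's model allows
    nondeterminism, which this sketch does not explore."""
    while transitions[state]:
        for phi, nxt in transitions[state]:
            y = phi(x)
            if y is not None:    # relation defined on x: transition enabled
                state, x = nxt, y
                break
        else:
            break                # no enabled transition from this state
    return state, x

print(run("start", 3))  # ('done', 2): increment 3 -> 4, then halve 4 -> 2
```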
|
https://en.wikipedia.org/wiki/Semi-infinite%20programming
|
In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case the constraints are typically parameterized.
Mathematical formulation of the problem
The problem can be stated simply as:
$$\min_{x \in X}\; f(x) \quad \text{subject to} \quad g(x,y) \le 0 \quad \text{for all } y \in Y,$$
where $f\colon \mathbb{R}^n \to \mathbb{R}$, $g\colon \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, $X \subseteq \mathbb{R}^n$, and $Y \subseteq \mathbb{R}^m$ is an infinite index set.
SIP can be seen as a special case of bilevel programs in which the lower-level variables do not participate in the objective function.
Methods for solving the problem
In the meantime, see external links below for a complete tutorial.
Examples
In the meantime, see external links below for a complete tutorial.
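As a stopgap illustration, a common first approach is to discretize the infinite index set Y and solve the resulting finite program (a SciPy-based sketch of an illustrative problem, not from the article):

```python
# Sketch: solve  min x  s.t.  sin(y) - x <= 0 for all y in [0, pi],
# whose solution is x* = 1, by sampling Y at finitely many points.
import numpy as np
from scipy.optimize import minimize

Y = np.linspace(0, np.pi, 200)               # finite sample of the index set

res = minimize(
    lambda x: x[0],                          # objective f(x) = x
    x0=[0.0],
    constraints=[{"type": "ineq",            # "ineq" means fun(x) >= 0
                  "fun": lambda x, y=y: x[0] - np.sin(y)} for y in Y],
)
print(res.x)  # approximately [1.0]
```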
See also
Optimization
Generalized semi-infinite programming (GSIP)
|
https://en.wikipedia.org/wiki/Virtualization
|
In computing, virtualization or virtualisation (sometimes abbreviated v12n, a numeronym) is the act of creating a virtual (rather than actual) version of something at the same abstraction level, including virtual computer hardware platforms, storage devices, and computer network resources.
Virtualization began in the 1960s, as a method of logically dividing the system resources provided by mainframe computers between different applications. An early and successful example is IBM CP/CMS. The control program CP provided each user with a simulated stand-alone System/360 computer. Since then, the meaning of the term has broadened.
Hardware virtualization
Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Arch Linux may host a virtual machine that looks like a computer with the Microsoft Windows operating system; Windows-based software can be run on the virtual machine.
In hardware virtualization, the host machine is the machine that is used by the virtualization and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or virtual machine monitor.
Different types of hardware virtualization include:
Full virtualization – Almost complete simulation of the actual hardware to allow software environments, including a guest operating system and its apps, to run unmodified.
Paravirtualization – The guest apps are executed in their own isolated domains, as if they are running on a separate system, but a hardware environment is not simulated. Guest programs need to be specifically modified to run in this env
|
https://en.wikipedia.org/wiki/Laudate%20Deum
|
Laudate Deum (Praise God) is an apostolic exhortation by Pope Francis, published on October 4, 2023. The text is divided into 73 paragraphs.
Pope Francis calls for urgent action against the climate crisis and condemns climate change denial in his writing.
It is Pope Francis' sixth apostolic exhortation.
Origin of the document
The title refers to the words of St. Francis of Assisi and to the encyclical Laudato si', which was published in 2015. The main goal of "Laudate Deum" is to call once again on all people of good will to care for the poor and for the Earth.
Content
In this document, the Pope expresses hope that societies around the world will change their lifestyles and intensify grassroots activities aimed at reducing the negative human impact on the natural environment, to prevent even more tragic damage to the Earth. The dramatic environmental degradation strongly affects not only the indigenous peoples, the poor, and endangered species, but also the future of all young people. He also calls on politicians and the rich to work for the common good, and not for their own profit and particular interests. Finally (in paragraph 73) the Pope emphasises that "when human beings claim to take God’s place, they become their own worst enemies".
|
https://en.wikipedia.org/wiki/Deep%20fibular%20nerve
|
The deep fibular nerve (also known as deep peroneal nerve) begins at the bifurcation of the common fibular nerve between the fibula and upper part of the fibularis longus, passes infero-medially, deep to the extensor digitorum longus, to the anterior surface of the interosseous membrane, and comes into relation with the anterior tibial artery above the middle of the leg; it then descends with the artery to the front of the ankle-joint, where it divides into a lateral and a medial terminal branch.
Structure
Lateral side of the leg
The deep fibular nerve is the nerve of the anterior compartment of the leg and the dorsum of the foot. It is one of the terminal branches of the common fibular nerve. It corresponds to the posterior interosseous nerve of the forearm. It begins at the lateral side of the fibula, and then enters the anterior compartment by piercing the anterior intermuscular septum. It then pierces the extensor digitorum longus and lies next to the anterior tibial artery, following the course of the artery until the ankle-joint, where the nerve divides into medial and lateral terminal branches. In the leg, the deep fibular nerve gives off several branches:
Muscular branches: Supplies four muscles in the leg: tibialis anterior, extensor hallucis longus, extensor digitorum longus, and fibularis tertius
Foot
Close to the ankle joint, the deep fibular nerve terminates by dividing into medial and lateral terminal branches.
Medial terminal branch: This nerve accompanies the dorsalis pedis artery along the dorsum of the foot, and, at the first interosseous space, divides into two dorsal digital nerves which supply the adjacent sides of the great and second toes, communicating with the medial dorsal cutaneous branch of the superficial fibular nerve. Before it divides it gives off to the first space an interosseous branch which supplies the metatarsophalangeal joint of the great toe and sends a filament to the first dorsal interosseous muscle.
Lateral t
|
https://en.wikipedia.org/wiki/Endogenous%20infection
|
In medicine, an endogenous infection is a disease arising from an infectious agent already present in the body but previously asymptomatic.
|
https://en.wikipedia.org/wiki/Engineering%20mathematics
|
Engineering mathematics is a branch of applied mathematics concerning mathematical methods and techniques that are typically used in engineering and industry. Along with fields like engineering physics and engineering geology, both of which may belong in the wider category engineering science, engineering mathematics is an interdisciplinary subject motivated by engineers' needs for practical, theoretical, and other considerations outside their specialization, and by the need to deal with constraints to be effective in their work.
Description
Historically, engineering mathematics consisted mostly of applied analysis, most notably: differential equations; real and complex analysis (including vector and tensor analysis); approximation theory (broadly construed, to include asymptotic, variational, and perturbative methods, representations, numerical analysis); Fourier analysis; potential theory; as well as linear algebra and applied probability, outside of analysis. These areas of mathematics were intimately tied to the development of Newtonian physics, and the mathematical physics of that period. This history also left a legacy: until the early 20th century subjects such as classical mechanics were often taught in applied mathematics departments at American universities, and fluid mechanics may still be taught in (applied) mathematics as well as engineering departments.
The success of modern numerical computer methods and software has led to the emergence of computational mathematics, computational science, and computational engineering (the last two are sometimes lumped together and abbreviated as CS&E), which occasionally use high-performance computing for the simulation of phenomena and the solution of problems in the sciences and engineering. These are often considered interdisciplinary fields, but are also of interest to engineering mathematics.
Specialized branches include engineering optimization and engineering statistics.
Engineering mathematics in tertiary educ
|
https://en.wikipedia.org/wiki/Sphenacodontoidea
|
Sphenacodontoidea is a node-based clade that is defined to include the most recent common ancestor of Sphenacodontidae and Therapsida and its descendants (including mammals). Sphenacodontoids are characterised by a number of synapomorphies concerning proportions of the bones of the skull and the teeth.
The sphenacodontoids evolved from earlier sphenacodonts such as Haptodus via a number of transitional stages of small, unspecialised pelycosaurs.
Classification
The following taxonomy follows Fröbisch et al. (2011) and Benson (2012) unless otherwise noted.
Class Synapsida
Sphenacodontoidea
Family †Sphenacodontidae
Therapsida
See also
Evolution of mammals
|
https://en.wikipedia.org/wiki/Shadow%20biosphere
|
A shadow biosphere is a hypothetical microbial biosphere of Earth that would use radically different biochemical and molecular processes from that of currently known life. Although life on Earth is relatively well studied, if a shadow biosphere exists it may still remain unnoticed, because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms.
The hypothesis
It has been proposed that the early Earth hosted multiple origins of life, some of which produced chemical variations on life as we know it. Some argue that these alternative life forms became extinct, either by being out-competed by other forms of life or by merging with present-day life through mechanisms such as lateral gene transfer. Others, however, argue that this other form of life might still exist to this day.
Steven A. Benner, Alonso Ricardo, and Matthew A. Carrigan, biochemists at the University of Florida, argued that if organisms based on RNA once existed, they might still be alive today, unnoticed because they do not contain ribosomes, which are usually used to detect living microorganisms. They suggest searching for them in environments that are low in sulfur, environments that are spatially constrained (for example, minerals with pores smaller than one micrometre), or environments that cycle between extreme hot and cold.
Other proposed candidates for a shadow biosphere include organisms that use different suites of amino acids in their proteins or different molecular units (e.g., bases or sugars) in their nucleic acids, that have a chirality opposite to ours, that use nonstandard amino acids or arsenic instead of phosphorus, that have a different genetic code, or that even use another kind of chemical for their genetic material, one that is neither a nucleic acid (DNA or RNA) nor a familiar biopolymer. Carol Cleland, a philosopher of science at the University of Colorado (Boulder), argues that desert varnish, whose status as biological or non
|
https://en.wikipedia.org/wiki/Forensic%20limnology
|
Forensic limnology is a sub-field of freshwater ecology, which focuses especially on the presence of diatoms in crime scene samples and victims. Different methods are used to collect this data but all identify the ratios of different diatom colonies present in samples and match those samples with locations at the crime scene.
Use of diatoms
Diatoms are diverse microscopic algae with silica cell walls that have different characteristics such as color, shape, and size. There are 8,000 known species of diatoms. Diatoms do not have specialized nutrient and water conducting tissues, which affects their dispersion throughout ecosystems. These microscopic organisms mainly inhabit freshwater environments because of their inability to survive the cleaning agents present in domestic water sources.
Benefits of diatoms
Diatoms are identifiable based on each species' unique silica cell walls (known as "frustules"), which vary depending on their environment. Because they have determinant properties, diatoms create flora profiles for scientists. When these microscopic algae die, their frustules become a part of the water sediment. The frustules of the deceased organisms can be compared with the living diatoms to determine characteristics of their environment. In early spring and the fall, the ratio of living diatoms to dead diatoms is high, whereas in summer and winter dead diatoms outnumber the living. Based on this known information, diatoms can verify the time of year that samples were taken. Different types of diatoms can also be used to identify the properties of a sample's ecosystem. For example, the higher the ratio of periphytic diatoms (i.e., those that are attached to a substrate), the higher the vegetation concentration and the shallower the water. Diatoms are a common tool for matching water environments because the variability of their populations is predictable and constant, the organisms can be identified using the light microscope, and t
|
https://en.wikipedia.org/wiki/Tsinghua%20Bamboo%20Slips
|
The Tsinghua Bamboo Strips are a collection of Chinese texts dating to the Warring States period and written in ink on strips of bamboo, that were acquired in 2008 by Tsinghua University, China. The texts were obtained by illegal excavation, probably of a tomb in the area of Hubei or Hunan province, and were then acquired and donated to the university by an alumnus. The very large size of the collection and the significance of the texts for scholarship make it one of the most important discoveries of early Chinese texts to date.
On 7 January 2014 the journal Nature announced that a portion of the Tsinghua Bamboo Strips represent "the world's oldest example" of a decimal multiplication table.
Discovery, conservation and publication
The Tsinghua Bamboo Strips (TBS) were donated to Tsinghua University in July 2008 by an alumnus of the university. The precise location(s) and date(s) of the illicit excavation that yielded the strips remain unknown. An article in the Guangming Daily named the donor as Zhao Weiguo, and stated that the texts were purchased at "a foreign auction". Neither the name of the auction house, nor the location or sum involved in the transaction, was mentioned. Li Xueqin, the director of the conservation and research project, has stated that the wishes of the alumnus to keep his identity secret will be respected.
Similarities with previous discoveries, such as the manuscripts from the Guodian tomb, indicate that the TBS came from a mid-to-late Warring States Period (480–221 BC) tomb in the region of China culturally dominated at that time by the Chu state. A single radiocarbon date (305±30 BC) and the style of ornament on the accompanying box are in keeping with this conclusion. By the time they reached the university, the strips were badly affected by mold. Conservation work on the strips was carried out, and a Center for Excavated Texts Research and Preservation was established at Tsinghua on April 25, 2009. There are 2388 strips alto
|
https://en.wikipedia.org/wiki/United%20States%20Postal%20Service%20irradiated%20mail
|
Irradiated mail is mail that has been deliberately exposed to radiation, typically in an effort to disinfect it. The most notable instance of mail irradiation in the US occurred in response to the 2001 anthrax attacks; the level of radiation chosen to kill anthrax spores was so high that it often changed the physical appearance of the mail.
The United States Postal Service began to irradiate mail in November 2001, in response to the discovery of large-scale contamination at several of its facilities that handled the letters that were sent in the attacks.
A facility in Bridgeport, New Jersey, operated by Sterigenics International, uses a Rhodotron continuous-wave electron-beam accelerator built by IBA Industrial to irradiate the mail. A few facilities were planning to use cobalt-60 sources, though it is unclear whether this was ever done.
Effect on mail
The USPS warned that a number of products could be adversely affected, such as seeds, photographic film, biological samples, food, medicines, and electronic equipment. In the process of irradiation, mail is exposed to extreme heat. Paper is weakened and may appear to have been aged, with discoloration (e.g., yellowing), and brittleness. Pages may break, crumble, or fuse to other pages. Documents bound with glue may have loose pages. The printing on pages may be distorted or offset onto adjacent pages. If tape is affixed to address labels, the address may be illegible.
Irradiation's effects on paper caused some alarm in the philatelic world, which sends large numbers of rare postage stamps and covers through the mail. A number of auction houses stopped sending material through the mail, and Linn's Stamp News regularly featured reports on stamps and covers that had been ruined by irradiation.
Although at one time the USPS expected to irradiate all mail, it later scaled back to just treating mail sent to government offices, including all mail directed to the White House, Congress, and the Library of Congress.
In
|
https://en.wikipedia.org/wiki/Posterior%20cord
|
The posterior cord is a part of the brachial plexus. It consists of contributions from all of the roots of the brachial plexus.
The posterior cord gives rise to the following nerves:
Upper subscapular nerve
Thoracodorsal nerve
Lower subscapular nerve
Axillary nerve
Radial nerve
Additional images
|
https://en.wikipedia.org/wiki/Multiple-criteria%20decision%20analysis
|
Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making (both in daily life and in settings such as business, government and medicine). Conflicting criteria are typical in evaluating options: cost or price is usually one of the main criteria, and some measure of quality is typically another criterion, easily in conflict with the cost. In purchasing a car, cost, comfort, safety, and fuel economy may be some of the main criteria we consider – it is unusual that the cheapest car is the most comfortable and the safest one. In portfolio management, managers are interested in getting high returns while simultaneously reducing risks; however, the stocks that have the potential of bringing high returns typically carry high risk of losing money. In a service industry, customer satisfaction and the cost of providing service are fundamental conflicting criteria.
In their daily lives, people usually weigh multiple criteria implicitly and may be comfortable with the consequences of such decisions that are made based on only intuition. On the other hand, when stakes are high, it is important to properly structure the problem and explicitly evaluate multiple criteria. In making the decision of whether to build a nuclear power plant or not, and where to build it, there are not only very complex issues involving multiple criteria, but there are also multiple parties who are deeply affected by the consequences.
Structuring complex problems well and considering multiple criteria explicitly leads to more informed and better decisions. There have been important advances in this field since the start of the modern multiple-criteria decision-making discipline in the early 1960s. A variety of approaches and methods, many implemented by specialized decision-making software, have been developed for their application in an array of disciplines
|
https://en.wikipedia.org/wiki/FRW/CFT%20duality
|
The FRW/CFT duality is a conjectured duality for Friedmann–Robertson–Walker models inspired by the AdS/CFT correspondence. It assumes that the cosmological constant is exactly zero, which is only the case for models with exact unbroken supersymmetry. Because the energy density does not approach zero as we approach spatial infinity, the metric is not asymptotically flat. This is not an asymptotically cold solution.
Overview
In eternal inflation, our universe passes through a series of phase transitions with progressively lower cosmological constant. Our current phase has a cosmological constant of size $10^{-122}$ in Planck units, which is conjectured to be metastable in string theory. It is possible our universe might tunnel into a supersymmetric phase with an exactly zero cosmological constant. In fact, any particle in eternal inflation will eventually terminate in a phase with exactly zero or negative cosmological constant. The phases with negative cosmological constant will end in a Big Crunch. Yasuhiro Sekino and Leonard Susskind called this the Census Taker's Hat.
The conformal compactification of the terminal phase has a Penrose diagram shaped like a hat for future null infinity. A Euclidean Liouville quantum field theory is assumed to reside there. The null coordinate corresponds to the running of the renormalization group.
The terminal phase has an ever-expanding FRW metric in which the average energy density goes to zero.
|
https://en.wikipedia.org/wiki/Bat%20coronavirus%20HKU10
|
Bat coronavirus HKU10 is a species of coronavirus in the genus Alphacoronavirus.
|
https://en.wikipedia.org/wiki/Daniel%20T%C4%83taru
|
Daniel Ioan Tătaru (born 6 May 1967, Piatra Neamţ, Romania) is a Romanian mathematician at the University of California, Berkeley.
He earned his doctorate from the University of Virginia in 1992, under the supervision of Irena Lasiecka.
He won the 2002 Bôcher Memorial Prize for his research on partial differential equations. In 2012 he became a fellow of the American Mathematical Society. In 2013 he was selected as a Simons Investigator in mathematics.
|
https://en.wikipedia.org/wiki/Island%20Conservation
|
Island Conservation is a non-profit organization with the mission to prevent extinctions by removing invasive species from islands. Island Conservation has therefore focused its efforts on islands with species categorized as Critically Endangered and Endangered on the IUCN's Red List. Working in partnership with local communities, government management agencies, and conservation organizations, Island Conservation develops plans and implements the removal of invasive alien species, and conducts field research to document the benefits of the work and to inform future projects.
Island Conservation's approach is now being shown to have a wider beneficial effect on the marine systems surrounding its project areas. In addition, invasive vertebrate eradication has now been shown to have many benefits besides conservation of species. Specifically, the approach has been found to align with 13 UN Sustainable Development Goals and 42 associated targets encompassing marine and terrestrial biodiversity conservation, promotion of local and global partnerships, economic development, climate change mitigation, human health and sanitation and sustainable production and consumption.
To date Island Conservation has deployed teams to protect 1,195 populations of 487 species and subspecies on 64 islands.
The work of Island Conservation is not without controversy, as documented in the book Battle at the End of Eden. Restoring islands requires removing whole populations of an invasive species. There is an ethical question of whether humankind has the right to remove one species to save others. However, a 2019 study suggests that if eradications of invasive animals were conducted on just 169 islands, the survival prospects of 9.4% of the Earth's most highly threatened terrestrial insular vertebrates would be improved.
History
Island Conservation was founded by Bernie Tershy and Don Croll, both Professors at UCSC's Long Marine Lab. These scientists learned about the story of Clipp
|
https://en.wikipedia.org/wiki/Gravitation%20%28book%29
|
Gravitation is a widely adopted textbook on Albert Einstein's general theory of relativity, written by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. It was originally published by W. H. Freeman and Company in 1973 and reprinted by Princeton University Press in 2017. It is frequently abbreviated MTW (for its authors' last names). The cover illustration, drawn by Kenneth Gwin, is a line drawing of an apple with cuts in the skin to show the geodesics on its surface.
The book contains 10 parts and 44 chapters, each beginning with a quotation. The bibliography has a long list of original sources and other notable books in the field. While this may not be considered the best introductory text because its coverage may overwhelm a newcomer, and even though parts of it are now out of date, it remains a highly valued reference for advanced graduate students and researchers.
Content
Subject matter
After a brief review of special relativity and flat spacetime, physics in curved spacetime is introduced and many aspects of general relativity are covered; particularly about the Einstein field equations and their implications, experimental confirmations, and alternatives to general relativity. Segments of history are included to summarize the ideas leading up to Einstein's theory. The book concludes by questioning the nature of spacetime and suggesting possible frontiers of research. Although the exposition on linearized gravity is detailed, one topic which is not covered is gravitoelectromagnetism. Some quantum mechanics is mentioned, but quantum field theory in curved spacetime and quantum gravity are not included.
The topics covered are broadly divided into two "tracks", the first contains the core topics while the second has more advanced content. The first track can be read independently of the second track. The main text is supplemented by boxes containing extra information, which can be omitted without loss of continuity. Margin notes are also inserted
|
https://en.wikipedia.org/wiki/CS-BLAST
|
CS-BLAST (Context-Specific BLAST) is a protein sequence search tool that extends BLAST (Basic Local Alignment Search Tool) using context-specific mutation probabilities. More specifically, CS-BLAST derives context-specific amino-acid similarities on each query sequence from short windows on the query sequences. Using CS-BLAST doubles sensitivity and significantly improves alignment quality without a loss of speed in comparison to BLAST. CSI-BLAST (Context-Specific Iterated BLAST) is the context-specific analog of PSI-BLAST (Position-Specific Iterated BLAST), which computes the mutation profile with substitution probabilities and mixes it with the query profile. Both programs are available as web servers and for free download.
Background
Homology is the relationship between biological structures or sequences derived from a common ancestor. Homologous proteins (proteins that share common ancestry) are inferred from their sequence similarity. Inferring homologous relationships involves calculating scores of aligned pairs minus penalties for gaps. Aligning pairs of proteins identifies regions of similarity indicating a relationship between the two, or more, proteins. In order to have a homologous relationship, the sum of scores over all the aligned pairs of amino acids or nucleotides must be sufficiently high [2]. Standard methods of sequence comparisons use a substitution matrix to accomplish this [4]. Similarities between amino acids or nucleotides are quantified in these substitution matrices. The substitution score $s(a,b)$ of amino acids $a$ and $b$ can be written as follows:
$$s(a,b) = \log \frac{q_{ab}}{f_a f_b},$$
where $q_{ab}$ denotes the probability of amino acid $a$ mutating into amino acid $b$ [2]. In a large set of sequence alignments, counting the number of amino acids as well as the number of aligned pairs allows one to derive the probabilities $f_a$ and $q_{ab}$.
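A minimal sketch of this log-odds computation with toy counts (illustrative numbers; real matrices such as BLOSUM are estimated from curated alignment databases):

```python
# Sketch: log-odds substitution scores from pair and background counts.
import math

pair_counts = {("A", "A"): 40, ("A", "V"): 10, ("V", "V"): 30}  # toy data
aa_counts = {"A": 60, "V": 40}

total_pairs = sum(pair_counts.values())
total_aa = sum(aa_counts.values())

def substitution_score(a: str, b: str) -> float:
    """Observed pair frequency vs. chance co-occurrence, on a log scale."""
    q_ab = pair_counts.get((a, b), pair_counts.get((b, a), 0)) / total_pairs
    f_a, f_b = aa_counts[a] / total_aa, aa_counts[b] / total_aa
    return math.log(q_ab / (f_a * f_b))

print(substitution_score("A", "V"))  # negative: A-V pairs rarer than chance
```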
Since
|
https://en.wikipedia.org/wiki/Gunship%20%28video%20game%29
|
Gunship is a combat flight simulation video game developed and published by MicroProse in 1986. In the game, controlling a simulated AH-64 Apache helicopter, players navigate through missions to attack enemy targets and protect friendly forces. Commercially and critically successful, Gunship was followed by Gunship 2000 and Gunship!.
Gameplay
The game features missions in five regions, including the U.S. (training), Southeast Asia (1st Air Cavalry Division), Central America (82nd Airborne Division), Middle East (101st Airborne Division) and Western Europe (3rd Armored Division). After selection of region, style, and enemies, the pilot is assigned a primary mission and a secondary mission. These could include such objectives as "Destroy enemy headquarters" or "Support friendly troops" (i.e. destroy targets near friendly forces). The latter would be an easier mission, because the battle would be fought closer to friendly lines.
The pilot then arms the Apache helicopter gunship, usually selecting AGM-114 Hellfire air-to-ground missiles (guided missiles that destroy "hard" targets such as bunkers and tanks), FFARs (Folding Fin Aerial Rockets; unguided rockets that destroy "soft" targets such as infantry and installations), and HEDP (High-Explosive, Dual-Purpose) rounds for the 30 mm cannon (an all-purpose weapon with a maximum range of 1.5 km); in Central America, the Middle East, and Western Europe, AIM-9 Sidewinders would also be standard equipment, usually as a backup air-to-air weapon in case of cannon failure.
Patient players might move in short jumps, crouching behind hills to block the enemy's line of sight and suddenly popping up to attack. More aggressive players generally fly fast and erratically to evade enemy fire, flying in low to deliver devastating cannon attacks at close range. Since flight time is a component of the mission evaluation, either method has its advantages. The latter, however, can be rather dangerous against 1st Line enemies whose fast
|
https://en.wikipedia.org/wiki/Cassette%20mutagenesis
|
Cassette mutagenesis is a type of site-directed mutagenesis that uses a short, double-stranded oligonucleotide sequence (gene cassette) to replace a fragment of target DNA. It uses complementary restriction enzyme digest ends on the target DNA and gene cassette to achieve specificity. It differs from methods that use a single oligonucleotide in that a single gene cassette can contain multiple mutations. Unlike many site-directed mutagenesis methods, cassette mutagenesis also does not involve primer extension by DNA polymerase.
Mechanism
First, restriction enzymes are used to cleave near the target sequence on DNA contained in a suitable vector. This step removes the target sequence and everything between the restriction sites. Then, the synthetic double stranded DNA containing the desired mutation and ends that are complementary to the restriction digest ends are ligated in place of the sequence removed. Finally, the resultant construct is sequenced to check that the target sequence contains the intended mutation.
Usage
The use of synthetic gene cassette allows total control over the type of mutation that can be generated. When studying protein functions, cassette mutagenesis can allow a scientist to change individual amino acids by introducing different codons or omitting codons.
By including the Shine–Dalgarno (SD) sequence and the first few codons of a gene, a scientist can easily and dramatically affect the expression level of a protein by altering these regulatory sequences.
Limitations
To use this method, the sequence of the target region and of nearby restriction sites must be known. Since restriction enzymes are used, for this method to be useful the restriction sites flanking the target DNA have to be unique in the gene/vector system, so that the gene cassette can be inserted with specificity. The length of the sequence flanked by the restriction sites is also a limiting factor, due to the use of synthetic gene cassettes.
Advantages
Since one gene cassette can contain
|
https://en.wikipedia.org/wiki/Video%20decoder
|
A video decoder is an electronic circuit, often contained within a single integrated circuit chip, that converts base-band analog video signals to digital video. Video decoders commonly allow programmable control over video characteristics such as hue, contrast, and saturation. A video decoder performs the inverse function of a video encoder, which converts raw (uncompressed) digital video to analog video. Video decoders are commonly used in video capture devices and frame grabbers.
Signals
The input signal to a video decoder is analog video that conforms to a standard format. For example, a standard definition (SD) decoder accepts analog video (composite or S-Video) that conforms to SD formats such as NTSC or PAL. High definition (HD) decoders accept analog HD formats such as AHD, HD-TVI, or HD-CVI.
The output digital video may be formatted in various ways, such as 8-bit or 16-bit 4:2:2, 12-bit 4:1:1, BT.656 (SD) or BT.1120 (HD). Usually, in addition to the digital video output bus, a video decoder will also generate a clock signal and other signals (sketched in code after this list) such as:
Sync — indicates the beginning of a video frame
Blanking — indicates video blanking interval
Field — indicates whether the current video field is even or odd (applies to interlaced formats)
Lock — indicates the decoder has detected and is locked (synchronized) to a valid analog input video signal
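A minimal C++ sketch of what the per-clock output interface of such a decoder might look like; the field names are illustrative rather than taken from any particular chip's datasheet:

#include <cstdint>

// Hypothetical per-clock output of an SD video decoder (names illustrative).
struct DecoderOutput {
    uint8_t ycbcr;      // one byte of the multiplexed 4:2:2 stream (e.g. BT.656 order)
    bool sync;          // beginning of a video frame
    bool blanking;      // true during the video blanking interval
    bool field_odd;     // odd/even field flag (interlaced formats only)
    bool locked;        // decoder is synchronized to a valid analog input
};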
Functional blocks
The main functional blocks of a video decoder typically include these:
Analog processors
Y/C (luminance/chrominance) separation
Chrominance processor
Luminance processor
Clock/timing processor
A/D converters for Y/C
Output formatter
Host communication interface
Process
Video decoding involves several processing steps. First the analog signal is digitized by an analog-to-digital converter to produce a raw, digital data stream. In the case of composite video, the luminance and chrominance are then separated; this is not necessary for S-Video sources. Next, the chrominance is demodulated to produc
|
https://en.wikipedia.org/wiki/Variance%20inflation%20factor
|
In statistics, the variance inflation factor (VIF) is the ratio (quotient) of the variance of estimating some parameter in a model that includes multiple other terms (parameters) to the variance of a model constructed using only that term. It quantifies the severity of multicollinearity in an ordinary least squares regression analysis. It provides an index that measures how much the variance (the square of the estimate's standard deviation) of an estimated regression coefficient is increased because of collinearity. Cuthbert Daniel claims to have invented the concept behind the variance inflation factor, but did not come up with the name.
Definition
Consider the following linear model with k independent variables:
Y = β0 + β1 X1 + β2 X2 + ... + βk Xk + ε.
The standard error of the estimate of βj is the square root of the (j + 1)th diagonal element of s²(X′X)⁻¹, where s is the root mean squared error (RMSE) (note that RMSE² is a consistent estimator of the true variance of the error term, σ²); X is the regression design matrix — a matrix such that Xi, j+1 is the value of the jth independent variable for the ith case or observation, and such that Xi,1, the predictor vector associated with the intercept term, equals 1 for all i. It turns out that the square of this standard error, the estimated variance of the estimate of βj, can be equivalently expressed as:
var(β̂j) = s² / ((n − 1) var(Xj)) × 1 / (1 − Rj²),
where Rj² is the multiple R² for the regression of Xj on the other covariates (a regression that does not involve the response variable Y), n is the sample size, and var(Xj) is the sample variance of Xj. This identity separates the influences of several distinct factors on the variance of the coefficient estimate (a numeric sketch follows the list below):
s²: greater scatter in the data around the regression surface leads to proportionately more variance in the coefficient estimates
n: greater sample size results in proportionately less variance in the coefficient estimates
var(Xj): greater variability in a particular covariate leads to proportionately less variance in the corresponding coefficient estimate
The remainin
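In the special case of two predictors, Rj² reduces to the squared Pearson correlation between them, which allows a compact numeric sketch. The data below are hypothetical and the names illustrative:

#include <cmath>
#include <cstdio>
#include <vector>

// VIF for one predictor, given its R^2 against the remaining predictors.
double vif(double r_squared) { return 1.0 / (1.0 - r_squared); }

int main() {
    // Hypothetical observations of two collinear predictors X1 and X2.
    std::vector<double> x1 = {1, 2, 3, 4, 5, 6};
    std::vector<double> x2 = {1.1, 2.0, 2.9, 4.2, 5.1, 5.8};

    // With two predictors, R_j^2 is the squared Pearson correlation r^2.
    double n = x1.size(), sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (size_t i = 0; i < x1.size(); ++i) {
        sx += x1[i]; sy += x2[i];
        sxx += x1[i] * x1[i]; syy += x2[i] * x2[i]; sxy += x1[i] * x2[i];
    }
    double r = (sxy - sx * sy / n) /
               std::sqrt((sxx - sx * sx / n) * (syy - sy * sy / n));

    std::printf("r^2 = %.4f, VIF = %.2f\n", r * r, vif(r * r));
    return 0;
}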
|
https://en.wikipedia.org/wiki/Cardiovascular%20centre
|
The cardiovascular centre is a part of the human brain which regulates heart rate through the nervous and endocrine systems. It is considered one of the vital centres of the medulla oblongata.
Structure
The cardiovascular centre, or cardiovascular center, is part of the medulla oblongata of the brainstem. Normally, the heart beats without nervous control. In some situations, such as exercise, and major trauma, the cardiovascular centre is responsible for altering heart rate. It also mediates respiratory sinus arrhythmia.
Function
The cardiovascular centre responds to a variety of types of sensory information, such as:
change of blood pH, detected by central chemoreceptors.
change of blood pH, detected by peripheral chemoreceptors in the aortic bodies and in the carotid bodies.
change of blood pressure, detected by arterial baroreceptors in the aortic arch and the carotid sinuses.
various other inputs from the hypothalamus, thalamus, and cerebral cortex.
The cardiovascular centre affects changes to the heart rate by sending a nerve impulse to the cardiac pacemaker via two sets of nerves:
sympathetic fibres, part of the autonomic nervous system, to make heart rate faster.
the vagus nerve, part of the parasympathetic branch of the autonomic nervous system, to lower heart rate.
The cardiovascular centre also increases the stroke volume of the heart (that is, the amount of blood it pumps). These two changes help to regulate the cardiac output, so that a sufficient amount of blood reaches tissues. This function is so significant to normal functioning of the circulatory system that the cardiovascular centre is considered a vital centre of the medulla oblongata.
Hormones like epinephrine and norepinephrine can affect the cardiovascular centre and cause it to increase the rate of impulses sent to the sinoatrial node, resulting in faster and stronger cardiac muscle contraction. This increases heart rate.
Clinical significance
Many anaesthetics depress the acti
|
https://en.wikipedia.org/wiki/Perinatal%20mortality
|
Perinatal mortality (PNM) is the death of a fetus or neonate and is the basis to calculate the perinatal mortality rate. Perinatal means "relating to the period starting a few weeks before birth and including the birth and a few weeks after birth."
Variations in the precise definition of the perinatal mortality exist, specifically concerning the issue of inclusion or exclusion of early fetal and late neonatal fatalities. The World Health Organization defines perinatal mortality as the "number of stillbirths and deaths in the first week of life per 1,000 total births, the perinatal period commences at 22 completed weeks (154 days) of gestation, and ends seven completed days after birth", but other definitions have been used.
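A small illustration of the WHO definition above, with hypothetical counts; the variable names are illustrative:

#include <cstdio>

int main() {
    // WHO-style perinatal mortality rate: stillbirths plus deaths in the
    // first week of life, per 1,000 total births (counts are hypothetical).
    double stillbirths = 42;
    double early_neonatal_deaths = 31;   // deaths within 7 completed days of birth
    double total_births = 9500;          // live births plus stillbirths

    double rate = (stillbirths + early_neonatal_deaths) / total_births * 1000.0;
    std::printf("Perinatal mortality rate: %.1f per 1,000 total births\n", rate);
    return 0;
}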
The UK figure is about 8 per 1,000 and varies markedly by social class, with the highest rates seen in Asian women. Globally, an estimated 2.6 million neonates died in 2013 within their first month of life, down from 4.5 million in 1990.
Causes
Preterm birth is the most common cause of perinatal mortality, causing almost 30 percent of neonatal deaths. Infant respiratory distress syndrome, in turn, is the leading cause of death in preterm infants, affecting about 1% of newborn infants. Birth defects cause about 21 percent of neonatal deaths.
Fetal mortality
Fetal mortality refers to stillbirths or fetal death. It encompasses any death of a fetus after 20 weeks of gestation or weighing at least 500 g. In some definitions of the PNM early fetal mortality (week 20–27 gestation) is not included, and the PNM may only include late fetal death and neonatal death. Fetal death can also be divided into death prior to labor, antenatal (antepartum) death, and death during labor, intranatal (intrapartum) death.
Neonatal mortality
Neonatal mortality refers to death of a live-born baby within the first 28 days of life. Early neonatal mortality refers to the death of a live-born baby within the first seven days of life, while late neonatal mortality refers to death after 7 days u
|
https://en.wikipedia.org/wiki/Windows%2011
|
Windows 11 is the latest major release of Microsoft's Windows NT operating system, released on October 5, 2021. It succeeded Windows 10 (2015) and is available free of charge for any Windows 10 device that meets the new Windows 11 system requirements.
Windows 11 features major changes to the Windows shell influenced by the canceled Windows 10X, including a redesigned Start menu, the replacement of its "live tiles" with a separate "Widgets" panel on the taskbar, the ability to create tiled sets of windows that can be minimized and restored from the taskbar as a group, and new gaming technologies inherited from Xbox Series X and Series S such as Auto HDR and DirectStorage on compatible hardware. Internet Explorer (IE) has been replaced by the Chromium-based Microsoft Edge as the default web browser, like its predecessor, Windows 10, and Microsoft Teams is integrated into the Windows shell. Microsoft also announced plans to allow more flexibility in software that can be distributed via the Microsoft Store and to support Android apps on Windows 11 (including a partnership with Amazon to make its app store available for the function).
Citing security considerations, the system requirements for Windows 11 were increased over Windows 10. Microsoft only officially supports the operating system on devices using an eighth-generation Intel Core CPU or newer (with some minor exceptions), a second-generation AMD Ryzen CPU or newer, or a Qualcomm Snapdragon 850 ARM system-on-chip or newer, with UEFI and Trusted Platform Module (TPM) 2.0 supported and enabled (although Microsoft may provide exceptions to the TPM 2.0 requirement for OEMs). While the OS can be installed on unsupported processors, Microsoft does not guarantee the availability of updates. Windows 11 removed support for 32-bit x86 and 32-bit ARM CPUs and devices that use BIOS firmware.
Windows 11 has received a mostly positive reception. Pre-release coverage of the operating system focused on its stricter hardware require
|
https://en.wikipedia.org/wiki/Degaussing
|
Degaussing is the process of decreasing or eliminating a remnant magnetic field. It is named after the gauss, a unit of magnetism, which in turn was named after Carl Friedrich Gauss. Due to magnetic hysteresis, it is generally not possible to reduce a magnetic field completely to zero, so degaussing typically induces a very small "known" field referred to as bias. Degaussing was originally applied to reduce ships' magnetic signatures during World War II. Degaussing is also used to reduce magnetic fields in cathode ray tube monitors and to destroy data held on magnetic storage.
Ships' hulls
The term was first used by then-Commander Charles F. Goodeve, Royal Canadian Naval Volunteer Reserve, during World War II while trying to counter the German magnetic naval mines that were wreaking havoc on the British fleet.
The mines detected the increase in the magnetic field when the steel in a ship concentrated the Earth's magnetic field over it. Admiralty scientists, including Goodeve, developed a number of systems to induce a small "N-pole up" field into the ship to offset this effect, meaning that the net field was the same as the background. Since the Germans used the gauss as the unit of the strength of the magnetic field in their mines' triggers (not yet a standard measure), Goodeve referred to the various processes to counter the mines as "degaussing". The term became a common word.
The original method of degaussing was to install electromagnetic coils into the ships, known as coiling. In addition to being able to bias the ship continually, coiling also allowed the bias field to be reversed in the southern hemisphere, where the mines were set to detect "S-pole down" fields. British ships, notably cruisers and battleships, were well protected by about 1943.
Installing such special equipment was, however, far too expensive and difficult to service all ships that would need it, so the navy developed an alternative called wiping, which Goodeve also devised, and which
|
https://en.wikipedia.org/wiki/Wolstenholme%27s%20theorem
|
In mathematics, Wolstenholme's theorem states that for a prime number p ≥ 5, the congruence
C(2p − 1, p − 1) ≡ 1 (mod p³)
holds, where C(n, k) denotes a binomial coefficient. For example, with p = 7, this says that 1716 is one more than a multiple of 343. The theorem was first proved by Joseph Wolstenholme in 1862. In 1819, Charles Babbage showed the same congruence modulo p², which holds for p ≥ 3. An equivalent formulation is the congruence
C(ap, bp) ≡ C(a, b) (mod p³)
for p ≥ 5, which is due to Wilhelm Ljunggren (and, in the special case b = 1, to J. W. L. Glaisher) and is inspired by Lucas' theorem.
No known composite numbers satisfy Wolstenholme's theorem and it is conjectured that there are none (see below). A prime that satisfies the congruence modulo p⁴ is called a Wolstenholme prime (see below).
As Wolstenholme himself established, his theorem can also be expressed as a pair of congruences for (generalized) harmonic numbers:
1 + 1/2 + 1/3 + ... + 1/(p − 1) ≡ 0 (mod p²), and
1 + 1/2² + 1/3² + ... + 1/(p − 1)² ≡ 0 (mod p).
(Congruences with fractions make sense, provided that the denominators are coprime to the modulus.)
For example, with p=7, the first of these says that the numerator of 49/20 is a multiple of 49, while the second says the numerator of 5369/3600 is a multiple of 7.
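The binomial form of the theorem can be checked numerically by working modulo p³ throughout and replacing each division by a modular inverse (every k < p is coprime to p³). A minimal verification sketch; it relies on the unsigned __int128 extension available in GCC and Clang:

#include <cstdint>
#include <cstdio>

// Modular inverse of a modulo m via the extended Euclidean algorithm;
// assumes gcd(a, m) = 1 (m need not be prime).
uint64_t inv_mod(uint64_t a, uint64_t m) {
    int64_t t = 0, new_t = 1, r = (int64_t)m, new_r = (int64_t)(a % m);
    while (new_r != 0) {
        int64_t q = r / new_r;
        int64_t tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r; r = new_r; new_r = tmp;
    }
    return (uint64_t)(t < 0 ? t + (int64_t)m : t);
}

// C(2p-1, p-1) mod p^3, computed as the product of (p+k)/k for k = 1..p-1;
// each k is coprime to p, so it is invertible modulo p^3.
uint64_t wolstenholme_residue(uint64_t p) {
    uint64_t m = p * p * p, result = 1;
    for (uint64_t k = 1; k < p; ++k) {
        result = (uint64_t)((unsigned __int128)result * ((p + k) % m) % m);
        result = (uint64_t)((unsigned __int128)result * inv_mod(k, m) % m);
    }
    return result;   // the theorem asserts this equals 1 for every prime p >= 5
}

int main() {
    for (uint64_t p : {5, 7, 11, 13, 101})
        std::printf("p = %llu: residue mod p^3 = %llu\n",
                    (unsigned long long)p, (unsigned long long)wolstenholme_residue(p));
    return 0;
}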
Wolstenholme primes
A prime p is called a Wolstenholme prime if and only if the following condition holds:
C(2p − 1, p − 1) ≡ 1 (mod p⁴).
If p is a Wolstenholme prime, then Glaisher's theorem holds modulo p⁴. The only known Wolstenholme primes so far are 16843 and 2124679; any other Wolstenholme prime must be greater than 10⁹. This result is consistent with the heuristic argument that the residue modulo p⁴ is a pseudo-random multiple of p³. This heuristic predicts that the number of Wolstenholme primes between K and N is roughly ln ln N − ln ln K. The Wolstenholme condition has been checked up to 10⁹, and the heuristic says that there should be roughly one Wolstenholme prime between 10⁹ and 10²⁴. A similar heuristic predicts that there are no "doubly Wolstenholme" primes, for which the congruence would hold modulo p⁵.
A proof of the theorem
There is more than one way t
|
https://en.wikipedia.org/wiki/DMol3
|
{{DISPLAYTITLE:DMol3}}
DMol3 is a commercial (and academic) software package which uses density functional theory with a numerical radial function basis set to calculate the electronic properties of molecules, clusters, surfaces and crystalline solid materials from first principles. DMol3 can either use gas phase boundary conditions or 3D periodic boundary conditions for solids or simulations of lower-dimensional periodicity. It has also pioneered the use of the conductor-like screening model (COSMO) solvation model for quantum simulations of solvated molecules and, recently, of wetted surfaces. DMol3 permits geometry optimisation and saddle point search with and without geometry constraints, as well as calculation of a variety of derived properties of the electronic configuration. DMol3 development started in the early eighties with B. Delley, then associated with A.J. Freeman and D.E. Ellis at Northwestern University. In 1989 DMol3 appeared as DMol, the first commercial density functional package for industrial use, by Biosym Technologies, now Accelrys. Delley's 1990 publication was cited more than 3000 times.
See also
Quantum chemistry computer programs
External links
DMol3 datasheet
developers page
Materials Studio
|
https://en.wikipedia.org/wiki/Index%20mapping
|
Index mapping (or direct addressing, or a trivial hash function) in computer science describes using an array, in which each position corresponds to a key in the universe of possible values.
The technique is most effective when the universe of keys is reasonably small, such that allocating an array with one position for every possible key is affordable.
Its effectiveness comes from the fact that an arbitrary position in an array can be examined in constant time.
Applicable arrays
There are many practical examples of data whose valid values are restricted within a small range. A trivial hash function is a suitable choice when such data needs to act as a lookup key. Some examples include:
month in the year (1–12)
day in the month (1–31)
day of the week (1–7)
human age (0–130) – e.g. life-cover actuarial tables, fixed-term mortgages
ASCII characters (0–127), encompassing common mathematical operator symbols, digits, punctuation marks, and the English-language alphabet
Examples
Using a trivial hash function in a non-iterative table lookup can eliminate conditional testing and branching completely, reducing the instruction path length of a computer program.
Avoid branching
Roger Sayle gives an example of eliminating a multiway branch caused by a switch statement:
inline bool HasOnly30Days(int m)
{
switch (m) {
case 4: // April
case 6: // June
case 9: // September
case 11: // November
return true;
default:
return false;
}
}
Which can be replaced with a table lookup:
inline bool HasOnly30Days(int m)
{
static const bool T[] = { 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0 }; // Jan..Dec; 1 marks the 30-day months
return T[m-1]; // assumes 1 <= m <= 12
}
See also
Associative array
Hash table
Lookup table
|
https://en.wikipedia.org/wiki/Ultimate%20tic-tac-toe
|
Ultimate tic-tac-toe (also known as super tic-tac-toe, meta tic-tac-toe or (tic-tac-toe)²) is a board game composed of nine tic-tac-toe boards arranged in a 3 × 3 grid. Players take turns playing on the smaller tic-tac-toe boards until one of them wins on the larger board. Compared to traditional tic-tac-toe, strategy in this game is conceptually more difficult and has proven more challenging for computers.
Rules
Just like in regular tic-tac-toe, the two players (X and O) take turns, starting with X. The game starts with X playing wherever they want in any of the 81 empty spots. Next the opponent plays; however, they are forced to play in the small board indicated by the relative location of the previous move. For example, if X plays in the top right square of a small (3 × 3) board, then O has to play in the small board located at the top right of the larger board. Playing in any of the available spots decides in which small board the next player plays.
If a move wins a small board by the rules of normal tic-tac-toe, then the entire small board is marked as won by that player on the larger board. Once a small board is won by a player or is filled completely, no more moves may be played in that board. If a player is sent to such a board, then that player may play in any other board. Game play ends when either a player wins the larger board or there are no legal moves remaining, in which case the game is a draw.
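The routing rule above translates directly into code. A minimal sketch, assuming boards and cells are both indexed 0–8 in the same spatial order; the names Move, closed and next_board are illustrative:

#include <optional>

// Indices 0-8 address both the nine small boards and the nine cells inside
// a board, in the same spatial order (0 = top-left, ..., 8 = bottom-right).
struct Move { int board; int cell; };

// Returns the board the next player must play in, or std::nullopt when that
// board is already won or full, in which case any open board is legal.
std::optional<int> next_board(Move last, const bool closed[9]) {
    if (closed[last.cell]) return std::nullopt;   // sent to a dead board: free choice
    return last.cell;   // the relative location of the previous move
}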
Gameplay
Super tic-tac-toe is significantly more complex than most other variations of tic-tac-toe, as there is no clear strategy to playing. This is because of the complicated game branching in this game. Even though every move must be played in a small board, equivalent to a normal tic-tac-toe board, each move must take into account the larger board in several ways:
Anticipating the next move: Each move played in a small board determines where the opponent's next move can be played. This might make moves that are considered bad
|
https://en.wikipedia.org/wiki/National%20Pest%20Plant%20Accord
|
The National Pest Plant Accord (NPPA) is a New Zealand agreement that identifies pest plants that are prohibited from sale and commercial propagation and distribution.
The Accord initially came into effect on 1 October 2001 between regional councils and government departments with biosecurity responsibilities, but in 2006 was revised to include the Nursery and Garden Industry Association as a member of the decision-making body. Under the Accord, regional councils undertake surveillance to ensure the pest plants are not being sold, propagated or distributed.
The Department of Conservation also lists 328 vascular plant species as environmental weeds – species that infest, are controlled on, or are damaging to land under its control.
List of species
The list of species covered by the National Pest Plant Accord is periodically updated, most recently in 2012.
See also
Invasive species in New Zealand
|
https://en.wikipedia.org/wiki/Hitachi
|
Hitachi, Ltd. is a Japanese multinational electronics company headquartered in Chiyoda, Tokyo. It traces its origins back to 1910 with the establishment of a subsidiary electrical machinery manufacturing plant by Namihei Odaira within the Kuhara Mining Plant Hitachi Mine in Hitachi, Ibaraki. It became independent from the Mining Plant in 1920.
It had formed part of the Nissan zaibatsu and later DKB Group and Fuyo Group of companies before DKB and Fuji Bank (the core Fuyo Group company) merged into the Mizuho Financial Group. As of 2020, Hitachi conducts business ranging from IT, including AI, the Internet of Things, and big data, to infrastructure.
Hitachi is listed on the Tokyo Stock Exchange and Nagoya Stock Exchange and its Tokyo listing is a constituent of the Nikkei 225 and TOPIX Core30 indices. It is ranked 38th in the 2012 Fortune Global 500 and 129th in the 2012 Forbes Global 2000.
History
Founding and early history
Hitachi was founded in 1910 by electrical engineer Namihei Odaira (1874–1951) in Ibaraki Prefecture. The company's first product was Japan's first induction motor, initially developed for use in copper mining.
The company began as an in-house venture of Fusanosuke Kuhara's mining company in Hitachi, Ibaraki. Odaira moved headquarters to Tokyo in 1918. Odaira coined the company's toponymic name by superimposing two kanji characters: hi meaning "sun" and tachi meaning "rise".
World War II had a significant impact on the company: many of its factories were destroyed by Allied bombing raids, and the postwar period brought internal discord. Founder Odaira was removed from the company and Hitachi Zosen Corporation was spun out. Hitachi's reconstruction efforts after the war were hindered by a labor strike in 1950. Meanwhile, Hitachi went public in 1949.
Hitachi America, Ltd. was established in 1959.
The Soviet Union started to produce air conditioners in 1975, when the Baku factory was established under license from the Japanese company Hitachi. Volumes of production of
|
https://en.wikipedia.org/wiki/GrapeCity
|
GrapeCity, inc. is a privately held, multinational software corporation based in Sendai, Japan, that develops software products and provides outsourced product development services, consulting services, software, and Customer relationship management services. GrapeCity also has established WINEstudios, a media design and digital production facility in Japan.
GrapeCity's major office locations are in the United States, Japan, India, China, South Korea, Mongolia, Vietnam and Albania.
Corporate history
Foundation
GrapeCity, formerly Bunka Orient Corporation, was founded in Miyagi prefecture, Japan in 1980 by Paul Broman. The company was established to provide software solutions for the emerging personal computer market in Japan.
Paul Broman worked with Daniel Fanger, now Chairman of GrapeCity and Principal of MeySen Academy, and Nobuo Iwasa to turn Bunka Orient Corporation into a software development company.
Original products
LeySer School Management Software
The company's first software product was LeySer Services for school management, which Paul Broman used to help run the MeySen Academy and KeiMei Elementary School, which he founded in the 1970s. Having sent some of his teachers to learn programming, he then directed them in creating software to support the accounting and reporting needs of his schools. Later, they sold the software to other schools through what was still known as Bunka Orient Corporation.
Localization of Developer Tools
The company expanded into a range of other software products and services, beginning by localizing developer tools for the Japanese market, another empty niche they discovered when developing LeySer Services. Beginning in 1993, they grouped these localized tools together and resold them as their Power Tools line.
Localized tools included:
PowerTCP Tools (1999)
JClass (2002) and JProbe (2007) Java(TM) Components
LEADTOOLS Imaging Toolkits
FarPoint Spread (1999)
Timeline
Name change
Originally named Bunka Orient Co
|
https://en.wikipedia.org/wiki/Geometry%20of%20numbers
|
Geometry of numbers is the part of number theory which uses geometry for the study of algebraic numbers. Typically, a ring of algebraic integers is viewed as a lattice in Rⁿ, and the study of these lattices provides fundamental information on algebraic numbers. The geometry of numbers was initiated by Hermann Minkowski.
The geometry of numbers has a close relationship with other fields of mathematics, especially functional analysis and Diophantine approximation, the problem of finding rational numbers that approximate an irrational quantity.
Minkowski's results
Suppose that Γ is a lattice in n-dimensional Euclidean space Rⁿ and K is a convex centrally symmetric body.
Minkowski's theorem, sometimes called Minkowski's first theorem, states that if vol(K) > 2ⁿ vol(Rⁿ/Γ), then K contains a nonzero vector in Γ.
The successive minimum λk is defined to be the infimum of the numbers λ such that λK contains k linearly independent vectors of Γ.
Minkowski's theorem on successive minima, sometimes called Minkowski's second theorem, is a strengthening of his first theorem and states that
λ1 λ2 · · · λn vol(K) ≤ 2ⁿ vol(Rⁿ/Γ).
Later research in the geometry of numbers
From 1930 to 1960, research on the geometry of numbers was conducted by many number theorists (including Louis Mordell, Harold Davenport and Carl Ludwig Siegel). In recent years, Lenstra, Brion, and Barvinok have developed combinatorial theories that enumerate the lattice points in some convex bodies.
Subspace theorem of W. M. Schmidt
In the geometry of numbers, the subspace theorem was obtained by Wolfgang M. Schmidt in 1972. It states that if n is a positive integer, and L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients, and if ε > 0 is any given real number, then
the non-zero integer points x in n coordinates with
|L1(x) · · · Ln(x)| < ||x||^(−ε)
lie in a finite number of proper subspaces of Qⁿ.
Influence on functional analysis
Minkowski's geometry of numbers had a profound influence on functional analysis. Minkowski proved that symmetric convex bodies induce norms in finite-dimensional vector spac
|
https://en.wikipedia.org/wiki/Cytogenetic%20notation
|
The following table summarizes symbols and abbreviations used in cytogenetics:
See also
Chromosome abnormalities
Directionality (molecular biology) for 3' and 5' notation
locus (genetics) for basic notational system
International System for Human Cytogenetic Nomenclature
|
https://en.wikipedia.org/wiki/Doppelganger%20domain
|
A doppelganger domain is a domain spelled identically to a legitimate fully qualified domain name (FQDN) but missing the dot between host/subdomain and domain, typically registered for malicious purposes.
Overview
Typosquatting's traditional attack vector is through the web to distribute malware or harvest credentials. Other vectors such as email and remote access services such as SSH, RDP, and VPN can also be leveraged. In a whitepaper on doppelganger domains, the Godai Group demonstrated that numerous emails can be harvested without anyone noticing.
Example
If someone's email address is, say, "ktrout@finance.corpudyne.com", the doppelganger domain would be "financecorpudyne.com". Hence, if someone is trying to send an email to that user and they forget the dot after "finance" (ktrout@financecorpudyne.com), it would go to the doppelganger domain instead of the legitimate user.
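Enumerating the doppelgangers of an FQDN amounts to deleting one internal dot at a time. A minimal sketch (the function name is illustrative); note that the variant formed by deleting the final dot lacks a valid top-level domain and is not registrable:

#include <string>
#include <vector>

// All single-dot-deletion variants of an FQDN, e.g. "finance.corpudyne.com"
// yields "financecorpudyne.com" (and the non-registrable "finance.corpudynecom").
// Defenders sometimes register such variants pre-emptively.
std::vector<std::string> doppelgangers(const std::string& fqdn) {
    std::vector<std::string> out;
    for (size_t i = 0; i < fqdn.size(); ++i)
        if (fqdn[i] == '.')
            out.push_back(fqdn.substr(0, i) + fqdn.substr(i + 1));
    return out;
}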
See also
|
https://en.wikipedia.org/wiki/John%20Vincent%20Atanasoff
|
John Vincent Atanasoff (October 4, 1903 – June 15, 1995) was an American physicist and inventor credited with inventing the first electronic digital computer. Atanasoff invented the first electronic digital computer in the 1930s at Iowa State College (now known as Iowa State University). Challenges to his claim were resolved in 1973 when the Honeywell v. Sperry Rand lawsuit ruled that Atanasoff was the inventor of the computer. His special-purpose machine has come to be called the Atanasoff–Berry Computer.
Early life and education
Atanasoff was born on October 4, 1903, in Hamilton, New York to an electrical engineer and a school teacher. Atanasoff's father, Ivan Atanasov, was of Bulgarian origin, born in 1876 in the village of Boyadzhik, close to Yambol, then in the Ottoman Empire. While Ivan Atanasov was still an infant, his own father was killed by Ottoman soldiers after the Bulgarian April Uprising. In 1889, Ivan immigrated to the United States with his uncle. John's father later became an electrical engineer, whereas his mother, Iva Lucena Purdy (of mixed French and Irish ancestry), was a teacher of mathematics.
Atanasoff was raised in Brewster, Florida. Young Atanasoff's ambitions and intellectual pursuits were in part influenced by his parents, whose interests in the natural and applied sciences cultivated in him a sense of critical curiosity and confidence. At the age of nine, he learned to use a slide rule, followed shortly by the study of logarithms, and subsequently completed high school at Mulberry High School in two years. In 1925, Atanasoff received his Bachelor of Science degree in electrical engineering from the University of Florida.
He continued his education at Iowa State College and in 1926 earned a master's degree in mathematics. He completed his formal education in 1930 by earning a PhD in theoretical physics from the University of Wisconsin–Madison with his thesis, The Dielectric Constant of Helium. Upon completion of his doctorate, Atan
|
https://en.wikipedia.org/wiki/Supinator%20muscle
|
In human anatomy, the supinator is a broad muscle in the posterior compartment of the forearm, curved around the upper third of the radius. Its function is to supinate the forearm.
Structure
Supinator consists of two planes of fibers, between which passes the deep branch of the radial nerve. The two planes arise in common — the superficial one by tendinous fibers (the initial portion of the muscle is actually just tendon) and the deeper by muscular fibers — from the supinator crest of the ulna, the lateral epicondyle of the humerus, the radial collateral ligament, and the annular radial ligament.
The superficial fibers (pars superficialis) surround the upper part of the radius, and are inserted into the lateral edge of the radial tuberosity and the oblique line of the radius, as low down as the insertion of the pronator teres. The upper fibers (pars profunda) of the deeper plane form a sling-like fasciculus, which encircles the neck of the radius above the tuberosity and is attached to the back part of its medial surface; the greater part of this portion of the muscle is inserted into the dorsal and lateral surfaces of the body of the radius, midway between the oblique line and the head of the bone.
The proximal aspect of the superficial head is known as the arcade of Frohse or the supinator arch.
Innervation
It is innervated by the deep branch of the radial nerve. The deep branch then becomes the posterior interosseous nerve upon exiting the supinator muscle. Its nerve roots are primarily from C6, with some C5 involvement. There is also possible additional C7 innervation.
The radial nerve divides into deep and sensory superficial branches just proximal to the supinator muscle — an arrangement that can lead to entrapment and compression of the deep part, potentially resulting in selective paralysis of the muscles served by this nerve (the extensor muscles and the abductor pollicis longus.) Many possible causes are known for this nerve syndrome, known as supinator entrapme
|
https://en.wikipedia.org/wiki/Fruit%20preserves
|
Fruit preserves are preparations of fruits whose main preserving agent is sugar and sometimes acid, often stored in glass jars and used as a condiment or spread.
There are many varieties of fruit preserves globally, distinguished by the method of preparation, type of fruit used, and place in a meal. Sweet fruit preserves such as jams, jellies, and marmalades are often eaten at breakfast with bread or as an ingredient of a pastry or dessert, whereas more savory and acidic preserves made from "vegetable fruits" such as tomato, squash or zucchini, are eaten alongside savory foods such as cheese, cold meats, and curries.
Techniques
There are several techniques of making jam, with or without added water. One consideration is the natural pectin content of the ingredients. When making jam with low-pectin fruits like strawberries, high-pectin fruit like orange can be added, or additional pectin in the form of pectin powder, citric acid or citrus peels. Often the fruit will be heated gently in a pan to release its juices (and pectin), sometimes with a little added water, before the sugar is added. Another method is to macerate the fruits in sugar overnight and cook this down to a syrup.
Regional terminology
The term preserves is usually interchangeable with jams even though preserves contain chunks or pieces of the fruit whereas jams in some regions do not. Closely related names include: chutney, confit, conserve, fruit butter, fruit curd, fruit spread, jelly, cheese, leather and marmalade.
Some cookbooks define preserves as cooked and gelled whole fruit (or vegetable), which includes a significant portion of the fruit. In the English-speaking world, the two terms are more strictly differentiated and, when this is not the case, the more usual generic term is 'jam'.
The singular preserve or conserve is used as a collective noun for high fruit content jam, often for marketing purposes. Additionally, the name of the type of fruit preserves will also vary depending on t
|
https://en.wikipedia.org/wiki/Heinrich%20Guggenheimer
|
Heinrich Walter Guggenheimer (July 21, 1924 – March 4, 2021) was a German-born Swiss-American mathematician who has contributed to knowledge in differential geometry, topology, algebraic geometry, and convexity. He has also contributed volumes on Jewish sacred literature.
Guggenheimer was born in Nuremberg, Germany. He was the son of Marguerite Bloch and the physicist Dr. Siegfried Guggenheimer. He studied in Zürich, Switzerland, at the Eidgenössische Technische Hochschule (ETH), receiving his diploma in 1947 and a D.Sc. in 1951. His dissertation was titled "On complex analytic manifolds with Kähler metric". It was published in Commentarii Mathematici Helvetici (in German).
Guggenheimer began his teaching career at the Hebrew University as a lecturer, 1954–56. He was a professor at Bar Ilan University, 1956–59. In 1959, he immigrated to the United States, becoming a naturalized citizen in 1965. Washington State University was his first American post, where he was an associate professor. After one year he moved to the University of Minnesota, where he was promoted to full professor in 1962. While in Minnesota, he wrote Differential Geometry (1963), a textbook treating "classical problems with modern methods". According to Robert Hermann in 1979, "Among today's treatises, the best one from the point of view of the Erlangen Program is Differential Geometry by H. Guggenheimer, Dover Publications, 1977."
In 1967 Guggenheimer published Plane Geometry and its Groups (Holden Day), and moved to New York City to teach at Polytechnic University, now called New York University Tandon School of Engineering. In 1977, he published Applicable Geometry: Global and Local Convexity.
Until 1995 Guggenheimer produced a steady stream of papers in mathematical journals. As a supervisor of graduate study in Minnesota and New York, he had six students proceed to Ph.D.s with theses supervised by him, two in Minnesota and four in New York. See the link to the Mathematics Genealogy Project below.
Guggenheimer has also contributed to
|
https://en.wikipedia.org/wiki/GoGuardian
|
GoGuardian is an educational technology company founded in 2014 and based in Los Angeles, California. The company's services monitor student activity online, filter content, and alert school officials to possible suicidal or self-harm ideation.
Product history
GoGuardian was founded in 2014 and is based in Los Angeles, CA. Its feature set includes computer filtering, monitoring, and management, as well as usage analytics, activity flagging, and theft recovery for ChromeOS devices. GoGuardian also offers filtering functionality for third-party tools such as YouTube.
In June 2015, GoGuardian reported it was installed in over 1,600 of the estimated 15,000 school districts in the United States.
In January 2015, Los Angeles Unified School District (LAUSD) chose GoGuardian to support their 1:1 device rollout program. This provides LAUSD device tracking and grade-level-specific filtering, and facilitates compliance with the Children's Internet Protection Act (CIPA).
In September 2015, the company released GoGuardian for Teachers, a tool to monitor student activity and control student learning. In January 2016, GoGuardian announced the launch of Google Classroom integration for GoGuardian for Teachers.
In May 2018, GoGuardian was acquired by private equity firm Sumeru Equity Partners and appointed Tony Miller to their board of directors.
In August 2018, GoGuardian launched Beacon, a software system installed on school computers that analyzes students' browsing behavior to alert school officials to students at risk of suicide or self-harm.
In November 2020, GoGuardian merged with Pear Deck.
Student privacy
GoGuardian products allow teachers and administrators to view and snapshot students' computer screens, close and open browser tabs, and see running applications. GoGuardian can collect information about any activity when users are logged onto their accounts, including data originating from a student's webcam, microphone, keyboard, and screen, along with historical
|
https://en.wikipedia.org/wiki/Quantum%20sort
|
A quantum sort is any sorting algorithm that runs on a quantum computer. Any comparison-based quantum sorting algorithm would take at least Ω(n log n) steps, which is already achievable by classical algorithms. Thus, for this task, quantum computers offer no advantage over classical ones in terms of time complexity. However, in space-bounded sorts, quantum algorithms outperform their classical counterparts.
|
https://en.wikipedia.org/wiki/Arthropodicide
|
An arthropodicide is a pesticide which acts upon arthropods. The vast majority of arthropodicides used are insecticides; however, there are other types. The second most common class is acaricides (miticides).
|
https://en.wikipedia.org/wiki/Ubuntu%20version%20history
|
Ubuntu releases are made semiannually by Canonical Ltd, the developer of the Ubuntu operating system, using the year and month of the release as a version number. The first Ubuntu release, for example, was Ubuntu 4.10 and was released on 20 October 2004. Consequently, version numbers for future versions are provisional; if the release is delayed until a different month (or even year) to that planned, the version number will change accordingly.
Canonical schedules Ubuntu releases to occur approximately one month after GNOME releases, resulting in each Ubuntu release including a newer version of GNOME.
Every fourth release, occurring in the second quarter of even-numbered years, has been designated as a long-term support (LTS) release. The desktop version of LTS releases for 10.04 and earlier were supported for three years, with server version support for five years. LTS releases 12.04 and newer are freely supported for five years. Through the ESM paid option, support can be extended even longer, up to a total of ten years for 18.04. The support period for non-LTS releases is 9 months. Prior to 13.04, it had been 18 months.
Version timeline
Version end-of-life
After each version of Ubuntu has reached its end-of-life time, its repositories are removed from the main Ubuntu servers and consequently the mirrors. Older versions of Ubuntu repositories and releases can be found on the old Ubuntu releases website.
Naming convention
Ubuntu releases are also given code names, using an adjective and an animal with the same first letter – an alliteration, e.g., "Dapper Drake". With the exception of the first two releases, code names are in alphabetical order, and except for the first three releases, the first letters are sequential, allowing a quick determination of which release is newer. As of Ubuntu 17.10, however, the initial letter "rolled over" and returned to "A". Names are occasionally chosen so that an animal's appearance or habits reflect some new feature, e.g., "Koala's favourite leaf is Eucalyp
|
https://en.wikipedia.org/wiki/Tom%20Oberheim
|
Thomas Elroy Oberheim (born July 7, 1936, Manhattan, Kansas), known as Tom Oberheim, is an American audio engineer and electronics engineer best known for designing effects processors, analog synthesizers, sequencers, and drum machines. He has been the founder of four audio electronics companies, most notably Oberheim Electronics. He was also a key figure in the development and adoption of the MIDI standard. He is also a trained physicist.
Early life and education
Oberheim was born and raised in Manhattan, Kansas, also the home of Kansas State University. Beginning in junior high school, he put his interest in electronics into practice by building hi-fi components and amplifiers for friends. A fan of jazz music, Oberheim decided to move to Los Angeles after seeing an ad on the back of Downbeat Magazine about free jazz performances at a club there. He arrived in Los Angeles in July 1956 at the age of 20 with $10 in his pocket. He worked as a draftsman trainee at NCR Corporation where he was inspired to become a computer engineer. Oberheim enrolled at UCLA, studying computer engineering and physics while also taking music courses. Over the next nine years he worked toward his physics degree, serving in the U.S. Army for a short period of time, harmonizing with the Gregg Smith Singers, and working jobs at computer companies (most notably Abacus, where he first began designing computers).
Oberheim was attending a class during his last semester at UCLA when he met and became friends with trumpet player Don Ellis, and keyboardist Joseph Byrd of the band The United States of America, who were attending the same class. Oberheim stayed in touch with both Ellis and Byrd after leaving UCLA, and ended up building an amplifier for Ellis to use for his public address system. Oberheim also built guitar amplifiers for The United States of America, and their lead singer Dorothy Moskowitz asked him to build a ring modulator for the band (Joseph Byrd had used one while a band membe
|
https://en.wikipedia.org/wiki/Dew%20pond
|
A dew pond is an artificial pond usually sited on the top of a hill, intended for watering livestock. Dew ponds are used in areas where a natural supply of surface water may not be readily available. The name dew pond (sometimes cloud pond or mist pond) is first found in the Journal of the Royal Agricultural Society in 1865. Despite the name, their primary source of water is believed to be rainfall rather than dew or mist.
Construction
They are usually shallow, saucer-shaped and lined with puddled clay, chalk or marl on an insulating straw layer over a bottom layer of chalk or lime. To deter earthworms from their natural tendency of burrowing upwards, which in a short while would make the clay lining porous, a layer of soot would be incorporated or lime mixed with the clay. The clay is usually covered with straw to prevent cracking by the sun and a final layer of chalk rubble or broken stone to protect the lining from the hoofs of sheep or cattle. To retain more of the rainfall, the clay layer could be extended across the catchment area of the pond. If the pond's temperature is kept low, evaporation (a major water loss) may be significantly reduced, thus maintaining the collected rainwater. According to researcher Edward Martin, this may be attained by building the pond in a hollow, where cool air is likely to gather, or by keeping the surrounding grass long to enhance heat radiation. As the water level in the basin falls, a well of cool, moist air tends to form over the surface, restricting evaporation.
A method of constructing the base layer using chalk puddle was described in The Field 14 December 1907.
A Sussex farmer born in 1850 described how he and his forefathers made dew ponds.
The initial supply of water after construction has to be provided by the builders, using artificial means. A preferred method was to arrange to finish the excavation in winter, so that any fallen snow could be collected and heaped into the centre of the pond to await melting.
Hist
|
https://en.wikipedia.org/wiki/Deuterium%20fusion
|
Deuterium fusion, also called deuterium burning, is a nuclear fusion reaction that occurs in stars and some substellar objects, in which a deuterium nucleus and a proton combine to form a helium-3 nucleus. It occurs as the second stage of the proton–proton chain reaction, in which a deuterium nucleus formed from two protons fuses with another proton, but can also proceed from primordial deuterium.
In protostars
Deuterium is the most easily fused nucleus available to accreting protostars, and such fusion in the center of protostars can proceed when temperatures exceed 10⁶ K. The reaction rate is so sensitive to temperature that the temperature does not rise very much above this. The energy generated by fusion drives convection, which carries the heat generated to the surface.
If there were no deuterium available to fuse, then stars would gain significantly less mass in the pre-main-sequence phase, as the object would collapse faster, and more intense hydrogen fusion would occur and prevent the object from accreting matter. Deuterium fusion allows further accretion of mass by acting as a thermostat that temporarily stops the central temperature from rising above about one million degrees, a temperature not high enough for hydrogen fusion, but allowing time for the accumulation of more mass. When the energy transport mechanism switches from convective to radiative, energy transport slows, allowing the temperature to rise and hydrogen fusion to take over in a stable and sustained way. Hydrogen fusion will begin at 10 million kelvin.
The rate of energy generation is proportional to (deuterium concentration) × (density) × (temperature)^11.8. If the core is in a stable state, the energy generation will be constant. If one variable in the equation increases, the other two must decrease to keep energy generation constant. As the temperature is raised to the power of 11.8, it would require very large changes in either the deuterium concentration or its density to result in even a small change i
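Because of the 11.8 exponent, sensitivity to temperature dwarfs the other two factors, which is what makes the thermostat behaviour described above possible. A minimal sketch of the proportionality, with hypothetical values in arbitrary units:

#include <cmath>
#include <cstdio>

int main() {
    // Energy generation rate ~ concentration * density * T^11.8 (arbitrary units).
    double concentration = 1.0, density = 1.0;
    for (double t_factor : {1.00, 1.01, 1.05, 1.10}) {
        double rate = concentration * density * std::pow(t_factor, 11.8);
        std::printf("temperature x %.2f -> rate x %.2f\n", t_factor, rate);
    }
    return 0;   // e.g. a 5% temperature rise multiplies the rate by about 1.78
}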
|
https://en.wikipedia.org/wiki/Vegetable%20Production%20System
|
The Vegetable Production System (Veggie) is a plant growth system developed and used by NASA in outer space environments. The purpose of Veggie is to provide a self-sufficient and sustainable food source for astronauts as well as a means of recreation and relaxation through therapeutic gardening. Veggie was designed in conjunction with ORBITEC and is currently being used aboard the International Space Station, with another Veggie module planned to be delivered to the ISS in 2017.
Overview
Veggie is part of an overarching project concerning research on growing crops in zero gravity. Among the goals of this project are to learn about how plants grow in a weightless environment and to learn about how plants can efficiently be grown for crew use in space. Veggie was designed to be low-maintenance, using low power and having a low launch mass. Thus, Veggie provides a lightly regulated environment, with minimal control over the atmosphere and temperature of the module. The successor to the Veggie project is the Advanced Plant Habitat (APH), components of which will be delivered to the International Space Station during the Cygnus CRS OA-7 and SpaceX CRS-11 missions in 2017.
In 2018 the Veggie-3 experiment was tested with plant pillows and root mats. One of the goals is to grow food for crew consumption. Crops tested at this time include cabbage, lettuce, and mizuna.
Design
A Veggie module weighs less than and uses 90 watts. It consists of three parts: a lighting system, a bellows enclosure, and a reservoir. The lighting system regulates the amount and intensity of light plants receive, the bellows enclosure keeps the environment inside the unit separate from its surroundings, and the reservoir connects to plant pillows where the seeds grow.
Lighting system
Veggie's lighting system consists of three different types of colored LEDs: red, blue, and green. Each color corresponds to a different light intensity that the plants receive. Although the lighting syst
|
https://en.wikipedia.org/wiki/Gray%20graph
|
In the mathematical field of graph theory, the Gray graph is an undirected bipartite graph with 54 vertices and 81 edges. It is a cubic graph: every vertex touches exactly three edges. It was discovered by Marion C. Gray in 1932 (unpublished), then discovered independently by Bouwer in 1968 in reply to a question posed by Jon Folkman in 1967. The Gray graph is interesting as the first known example of a cubic graph having the algebraic property of being edge-transitive but not vertex-transitive (see below).
The Gray graph has chromatic number 2, chromatic index 3, radius 6 and diameter 6. It is also a 3-vertex-connected and 3-edge-connected non-planar graph.
Construction
The Gray graph can be constructed from the 27 points of a 3 × 3 × 3 grid and the 27 axis-parallel lines through these points. This collection of points and lines forms a projective configuration: each point has exactly three lines through it, and each line has exactly three points on it. The Gray graph is the Levi graph of this configuration; it has a vertex for every point and every line of the configuration, and an edge for every pair of a point and a line that touch each other. This construction generalizes (Bouwer 1972) to any dimension n ≥ 3, yielding an n-valent Levi graph with algebraic properties similar to those of the Gray graph. In (Monson, Pisanski, Schulte, Ivic-Weiss 2007), the Gray graph appears as a different sort of Levi graph for the edges and triangular faces of a certain locally toroidal abstract regular 4-polytope. It is therefore the first in an infinite family of similarly constructed cubic graphs. As with other Levi graphs, it is a bipartite graph, with the vertices corresponding to points on one side of the bipartition and the vertices corresponding to lines on the other side.
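The grid-and-lines construction is straightforward to realize in code. A minimal sketch that builds the incidences of the Levi graph and confirms the counts of 54 vertices and 81 edges stated above; all names are illustrative:

#include <cstdio>
#include <utility>
#include <vector>

int main() {
    // Points (x,y,z) in {0,1,2}^3 are vertices 0..26; lines are vertices 27..53.
    // An axis-parallel line fixes two coordinates and varies the third.
    auto point_id = [](int x, int y, int z) { return 9 * x + 3 * y + z; };
    std::vector<std::pair<int, int>> edges;   // (point, line) incidences

    int line_id = 27;
    for (int axis = 0; axis < 3; ++axis)
        for (int a = 0; a < 3; ++a)
            for (int b = 0; b < 3; ++b) {
                for (int t = 0; t < 3; ++t) {   // the 3 points on this line
                    int c[3];
                    c[axis] = t;
                    c[(axis + 1) % 3] = a;
                    c[(axis + 2) % 3] = b;
                    edges.push_back({point_id(c[0], c[1], c[2]), line_id});
                }
                ++line_id;
            }

    // Expect 54 vertices (27 points + 27 lines) and 81 edges, three per vertex.
    std::printf("vertices = %d, edges = %zu\n", line_id, edges.size());
    return 0;
}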
Marušič and Pisanski (2000) give several alternative methods of constructing the Gray graph. As with any bipartite graph, there are no odd-length cycles, and there are also no cycles of four or six vertices, so th
|
https://en.wikipedia.org/wiki/Orthogonal%20group
|
In mathematics, the orthogonal group in dimension n, denoted O(n), is the group of distance-preserving transformations of a Euclidean space of dimension n that preserve a fixed point, where the group operation is given by composing transformations. The orthogonal group is sometimes called the general orthogonal group, by analogy with the general linear group. Equivalently, it is the group of n × n orthogonal matrices, where the group operation is given by matrix multiplication (an orthogonal matrix is a real matrix whose inverse equals its transpose). The orthogonal group is an algebraic group and a Lie group. It is compact.
The orthogonal group in dimension n has two connected components. The one that contains the identity element is a normal subgroup, called the special orthogonal group, and denoted SO(n). It consists of all orthogonal matrices of determinant 1. This group is also called the rotation group, generalizing the fact that in dimensions 2 and 3, its elements are the usual rotations around a point (in dimension 2) or a line (in dimension 3). In low dimension, these groups have been widely studied. The other component consists of all orthogonal matrices of determinant −1. This component does not form a group, as the product of any two of its elements is of determinant 1, and therefore not an element of the component.
By extension, for any field F, an n × n matrix with entries in F such that its inverse equals its transpose is called an orthogonal matrix over F. The orthogonal matrices form a subgroup, denoted O(n, F), of the general linear group GL(n, F); that is
O(n, F) = { Q ∈ GL(n, F) : QᵀQ = QQᵀ = I }.
More generally, given a non-degenerate symmetric bilinear form or quadratic form on a vector space over a field, the orthogonal group of the form is the group of invertible linear maps that preserve the form. The preceding orthogonal groups are the special case where, on some basis, the bilinear form is the dot product, or, equivalently, the quadratic form is the sum of the square of the coordinates.
All orthogonal gr
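A small numeric illustration of the defining condition (inverse equals transpose, so QᵀQ = I), using a 2 × 2 rotation matrix; this is an illustrative check rather than anything prescribed by the text above:

#include <cmath>
#include <cstdio>

int main() {
    // A rotation by angle th is orthogonal with determinant 1, i.e. in SO(2).
    double th = 0.7;
    double Q[2][2] = {{std::cos(th), -std::sin(th)},
                      {std::sin(th),  std::cos(th)}};

    // Verify Q^T Q = I entry by entry.
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j) {
            double s = 0;
            for (int k = 0; k < 2; ++k) s += Q[k][i] * Q[k][j];   // (Q^T Q)_{ij}
            std::printf("(QtQ)[%d][%d] = %.6f\n", i, j, s);
        }

    double det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0];
    std::printf("det = %.6f\n", det);   // +1 places Q in the identity component
    return 0;
}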
|
https://en.wikipedia.org/wiki/Total%20indicator%20reading
|
In metrology and the fields that it serves (such as manufacturing, machining, and engineering), total indicator reading (TIR), also known by the newer name full indicator movement (FIM), is the difference between the maximum and minimum measurements, that is, readings of an indicator, on the planar, cylindrical, or contoured surface of a part, showing its amount of deviation from flatness, roundness (circularity), cylindricity, concentricity with other cylindrical features, or similar conditions. The indicator traditionally would be a dial indicator; today dial-type and digital indicators coexist.
The earliest expansion of "TIR" was total indicated run-out and concerned cylindrical or tapered (conical) parts, where "run-out" (noun) refers to any imperfection of form that causes a rotating part such as a shaft to "run out" (verb), that is, to not rotate with perfect smoothness. These conditions include being out-of-round (that is, lacking sufficient roundness); eccentricity (that is, lacking sufficient concentricity); or being bent axially (regardless of whether the surfaces are perfectly round and concentric at every cross-sectional point). The purpose of emphasizing the "total" in TIR was to duly maintain the distinction between per-side differences and both-sides-considered differences, which requires perennial conscious attention in lathe work. For example, all depths of cut in lathe work must account for whether they apply to the radius (that is, per side) or to the diameter (that is, total). Similarly, in shaft-straightening operations, where calibrated amounts of bending force are applied laterally to the shaft, the "total" emphasis corresponds to a bend of half that magnitude. If a shaft has 0.1 mm TIR, it is "out of straightness" by half that total, i.e., 0.05 mm.
Today TIR in its more inclusive expansion, "total indicator reading", concerns all kinds of features, from round to flat to contoured. One example of how the "total" emphasis can apply to flat su
|
https://en.wikipedia.org/wiki/CAPRISA%20004
|
CAPRISA 004 is the name of a clinical trial conducted by CAPRISA. This particular study was the first to show that a topical gel could reduce a person's risk of contracting HIV. The gel used in the study contained a microbicide.
Background
A previous study had measured the safety and tolerability of tenofovir in both sexually active and abstinent women. This study gave support to the idea that tenofovir was a drug which was worth examining as an HIV preventative.
Study design
CAPRISA 004 was a phase IIb, double-blind, randomized, placebo-controlled study comparing 1% tenofovir gel with a placebo gel. 900 young women who were judged to be at risk of contracting HIV volunteered to use a study gel in their vaginas, with half of those receiving the microbicide gel and the other half getting the placebo (according to their randomization results). The study asked participants to apply a first dose of the gel within 12 hours before having sex and to apply another dose within 12 hours after sex. All study volunteers participated in HIV risk reduction counseling and received condoms. The study assisted in arranging treatment for any sexually transmitted infections that participants contracted.
The study began in May 2007, was completed in December 2009, and the data collected was published in March 2010. The study design had expected the study to last for 30 months, with about 14 months to recruit study volunteers then with follow-up until 92 participants were observed to have become infected with HIV. The entrance criteria were such that, based on risk factors in the participants' lifestyles, the study expected 92 infections to occur approximately 16 months after they recruited the final volunteer.
Results
Researchers led by Quarraisha Karim found that a microbicide containing 1% tenofovir was, for women participating in the trial, 39% effective in reducing risk of contracting HIV during sex and 51% effective in preventing genital herpes infections.
Responses
The re
|
https://en.wikipedia.org/wiki/ERCC4
|
ERCC4 is a protein designated as DNA repair endonuclease XPF that in humans is encoded by the ERCC4 gene. Together with ERCC1, ERCC4 forms the ERCC1-XPF enzyme complex that participates in DNA repair and DNA recombination.
The nuclease enzyme ERCC1-XPF cuts specific structures of DNA. Many aspects of these two gene products are described together here because they are partners during DNA repair. The ERCC1-XPF nuclease is an essential activity in the pathway of DNA nucleotide excision repair (NER). The ERCC1-XPF nuclease also functions in pathways to repair double-strand breaks in DNA, and in the repair of "crosslink" damage that harmfully links the two DNA strands.
Cells with disabling mutations in ERCC4 are more sensitive than normal to particular DNA damaging agents, including ultraviolet radiation and chemicals that cause crosslinking between DNA strands. Genetically engineered mice with disabling mutations in ERCC4 also have defects in DNA repair, accompanied by metabolic stress-induced changes in physiology that result in premature aging. Complete deletion of ERCC4 is incompatible with viability of mice, and no human individuals have been found with complete (homozygous) deletion of ERCC4. Rare individuals in the human population harbor inherited mutations that impair the function of ERCC4. When the normal genes are absent, these mutations can lead to human syndromes, including xeroderma pigmentosum, Cockayne syndrome and Fanconi anemia.
ERCC1 and ERCC4 are the human gene names and Ercc1 and Ercc4 are the analogous mammalian gene names. Similar genes with similar functions are found in all eukaryotic organisms.
Gene
The human ERCC4 gene can correct the DNA repair defect in specific ultraviolet light (UV)-sensitive mutant cell lines derived from Chinese hamster ovary cells. Multiple independent complementation groups of Chinese hamster ovary (CHO) cells have been isolated, and this gene restored UV resistance to cells of complementation group 4. Refle
|
https://en.wikipedia.org/wiki/Jones%27%20stain
|
Jones' stain, also Jones stain, is a methenamine silver-Periodic acid-Schiff stain used in pathology. It is also referred to as methenamine PAS which is commonly abbreviated MPAS.
It stains for basement membrane and is widely used in the investigation of medical kidney diseases.
The Jones stain demonstrates the spiked appearance of the glomerular basement membrane (GBM), caused by subepithelial deposits, that is seen in membranous nephropathy.
See also
Staining
|
https://en.wikipedia.org/wiki/British%20ensign
|
In British maritime law and custom, an ensign is the identifying flag flown to designate a British ship, either military or civilian. Such flags display the United Kingdom Union Flag in the canton (the upper corner next to the staff), with either a red, white or blue field, dependent on whether the vessel is civilian, naval, or in a special category. These are known as the red, white, and blue ensigns respectively.
Outside the nautical sphere, ensigns are used to designate many other military units, government departments and administrative divisions. These flags are modelled on the red, white, and blue naval ensigns, but may use different colours for the field, and be defaced by the addition of a badge or symbol, for example the sky blue with concentric red, white and blue circles of the Royal Air Force ensign.
The Union Flag (also known as the Union Jack) should be flown as a jack by Royal Navy ships only when moored or at anchor. If flown while underway, the ship must be dressed for a special occasion or celebration with masthead ensigns, otherwise it signals that the Monarch or an Admiral of the Fleet is on board. The Union Flag may also signal that a court martial is in progress.
The use of the Union Flag as an ensign on a civilian craft remains illegal unless the flag has a white border. The restriction dates from the 17th century, when Charles I ordered that it be restricted to His Majesty's ships "upon pain of Our high displeasure", mainly because merchant mariners were flying it without authorisation to pass themselves off as Royal vessels and so avoid paying harbour duties.
Modern usage
British ensigns currently in use can be classified into five categories, in descending order of exclusivity:
the White Ensign
the Blue Ensign
the Blue Ensign defaced
the Red Ensign
the Red Ensign defaced
The traditional order of seniority was red, white and blue, with the red as the senior ensign.
White
Today's white ensign, as used by Royal Navy ships, incorporates the St George's Cross (St George's E
|
https://en.wikipedia.org/wiki/Micro%20perforated%20plate
|
A micro perforated plate (MPP) is a device used to absorb sound, reducing its intensity. It consists of a thin flat plate, made from one of several different materials, with small holes punched in it. An MPP offers an alternative to traditional sound absorbers made from porous materials.
Structure
An MPP is normally 0.5–2 mm thick. The holes typically cover 0.5 to 2% of the plate, depending on the application and the environment in which the MPP is to be mounted. Hole diameter is usually less than 1 millimeter, typically 0.05 to 0.5 mm. The holes are usually made using the microperforation process.
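These proportions are easy to check numerically. A quick sketch of the perforation ratio for holes on a square grid; the hole size and spacing are illustrative values within the ranges above:

```python
# Perforation ratio (open-area fraction) of an MPP with holes on a square grid.
import math

d = 0.3e-3       # hole diameter: 0.3 mm
spacing = 3e-3   # centre-to-centre hole distance: 3 mm

porosity = (math.pi * d**2 / 4) / spacing**2
print(f"perforation ratio: {porosity:.2%}")   # ~0.79%, inside the 0.5-2% range
```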
Operating principle
The goal of a sound absorber is to convert acoustical energy into heat. In a traditional absorber, the sound wave propagates into the absorber. Because of the proximity of the porous material, the oscillating air molecules inside the absorber lose their acoustical energy due to friction.
An MPP works in almost the same way. When the oscillating air molecules penetrate the MPP's holes, the friction between the air in motion and the surface of the MPP dissipates the acoustical energy.
Comparison with other materials
Traditional sound absorbers are porous materials such as mineral wool, glass or polyester fibres. It is not possible to use these materials in harsh environments such as engine compartments. Traditional absorbers have many drawbacks, including pollution, the risk of fire, and problems with the useful lifetime of the absorbing material.
The main reason why Micro Perforates have become so popular among acousticians is that they have a good absorption performance but without the disadvantages of a porous material. Furthermore, an MPP is also preferable from an aesthetic point of view.
History
For a while, perforated metal panels with holes in the 1–10 mm range have been used as a cage for sound-absorbing glass-fiber bats where large holes let the sound waves reach into the absorbent fiber. Another use has been the creation of narrowband Helmholtz
|
https://en.wikipedia.org/wiki/WPIX
|
WPIX (channel 11) is a television station in New York City, serving as the de facto flagship of The CW Television Network. Owned by Mission Broadcasting, the station is operated by CW majority owner Nexstar Media Group under a local marketing agreement (LMA). Since its inception in 1948, WPIX's studios and offices have been located in the Daily News Building on East 42nd Street (also known as "11 WPIX Plaza") in Midtown Manhattan. The station's transmitter is located at the Empire State Building.
WPIX is also available as a regional superstation via satellite and cable in the United States and Canada. It is the largest Nexstar-operated station by population of market size.
History
As an independent station (1948–1995)
The station first signed on the air on June 15, 1948; it was the fifth television station to sign on in New York City and was the market's second independent station. It was also the second of three stations to launch in the New York market during 1948, debuting one month after Newark, New Jersey–based independent WATV (channel 13, now WNET) and two months before WJZ-TV (channel 7, now WABC-TV). WPIX's call letters come from the slogan of the newspaper that founded the station, the New York Daily News: "New York's Picture Newspaper". The Daily News's partial corporate parent was the Chicago-based Tribune Company, publisher of the Chicago Tribune.
Until becoming owned outright by Tribune in 1991, WPIX operated separately from the company's other television and radio outlets (including WGN-TV in Chicago, which signed on two months before WPIX, in April 1948) through the News-owned license holder, WPIX, Incorporated, which in 1963 purchased New York radio station WBFM (101.9 FM) and soon changed that station's call letters to WPIX-FM. British businessman Robert Maxwell bought the Daily News in 1991. Tribune retained WPIX and WQCD (the former WPIX-FM); the radio station was sold to Emmis Communications in 1997 (it is now WFAN-FM). WPIX initially feature
|
https://en.wikipedia.org/wiki/Particle%20number
|
In thermodynamics, the particle number (symbol ) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems.
A constituent particle is one that cannot be broken into smaller pieces at the scale of energy kT involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent.
Determining the particle number
The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation:
N = n·NA,
where NA is the Avogadro constant.
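As a one-line numerical illustration (the 0.5 mol amount is an arbitrary example), using SciPy's CODATA value for the constant:

```python
# Particle number from amount of substance: N = n * N_A.
from scipy.constants import Avogadro

n = 0.5                       # amount of substance in moles (example value)
N = n * Avogadro              # particle number
print(f"{N:.3e} particles")   # ~3.011e+23
```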
Particle number density
A related intensive system parameter is the particle number density, a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter n.
In quantum mechanics
In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle num
|
https://en.wikipedia.org/wiki/Electric%20machine
|
In electrical engineering, electric machine is a general term for machines using electromagnetic forces, such as electric motors, electric generators, and others. They are electromechanical energy converters: an electric motor converts electricity to mechanical power while an electric generator converts mechanical power to electricity. The moving parts in a machine can be rotating (rotating machines) or linear (linear machines). Besides motors and generators, a third category often included is transformers, which, although they do not have any moving parts, are also energy converters, changing the voltage level of an alternating current.
Electric machines, in the form of synchronous and induction generators, produce about 95% of all electric power on Earth (as of early 2020s), and in the form of electric motors consume approximately 60% of all electric power produced. Electric machines were developed beginning in the mid 19th century and since that time have been a ubiquitous component of the infrastructure. Developing more efficient electric machine technology is crucial to any global conservation, green energy, or alternative energy strategy.
Generator
An electric generator is a device that converts mechanical energy to electrical energy. A generator forces electrons to flow through an external electrical circuit. It is somewhat analogous to a water pump, which creates a flow of water but does not create the water inside. The source of mechanical energy, the prime mover, may be a reciprocating or turbine steam engine, water falling through a turbine or waterwheel, an internal combustion engine, a wind turbine, a hand crank, compressed air or any other source of mechanical energy.
The two main parts of an electrical machine can be described in either mechanical or electrical terms. In mechanical terms, the rotor is the rotating part, and the stator is the stationary part of an electrical machine. In electrical terms, the armature is the power-producing compo
|
https://en.wikipedia.org/wiki/Universo%20Online
|
Universo Online (Portuguese for "Online Universe"), known by the acronym UOL, is a Brazilian web content, products and services company. It belongs to the Grupo Folha enterprise.
In 2012, UOL was the fifth most visited website in Brazil, below only Google portals (Google Brasil, Google USA, YouTube) and Facebook. According to Ibope Nielsen Online, UOL is Brazil's largest internet portal with more than 50 million unique visitors and 6.7 billion page views every month.
Overview
UOL is the world's largest Portuguese-speaking portal, organized into 42 thematic stations with more than 1,000 news sources and 7 million pages. The portal provides website hosting, data storage, publicity dealing, online payments and security systems. It also hosts more than 300 thousand online shops, 23 million buyers and 4 million people selling goods and services in its portals.
UOL includes:
UOL Cliques, ads and publicity portal.
Radar de Descontos, group buying portal.
Emprego Certo, jobs portal.
Shopping UOL, online price comparing tool.
UOL Segurança Online, online safety firm.
Universidade UOL, online education portal.
UOL Revelação Digital, online photo developing portal.
Toda Oferta, buying and selling portal.
UOL Wi-Fi, unlimited wireless broadband Internet access.
PagSeguro, an e-commerce tool with which shops and individuals can make and receive online payments.
UOL Mais, portal with unlimited space for videos, photos, audio and texts.
UOL HOST, hosting and cloud computing firm.
UOL Assistência Técnica, technical support services for computers, tablets, and smartphones.
UOL DIVEO, online IT outsourcing firm.
UOL Afiliados, a membership program for subscribers and non-subscribers that remunerates websites and blogs displaying UOL ads. Each affiliate receives a payment per click on each ad or per subscription conversion.
History
UOL was established by Grupo Folha on April 28, 1996. After 7 months, UOL joined portal Brasil Online (BOL) from Editora Abril. However Editora Ab
|
https://en.wikipedia.org/wiki/Colicin
|
A colicin is a type of bacteriocin produced by and toxic to some strains of Escherichia coli. Colicins are released into the environment to reduce competition from other bacterial strains. Colicins bind to outer membrane receptors, using them to translocate to the cytoplasm or cytoplasmic membrane, where they exert their cytotoxic effect, including depolarisation of the cytoplasmic membrane, DNase activity, RNase activity, or inhibition of murein synthesis.
Structure
Channel-forming colicins (colicins A, B, E1, Ia, Ib, and N) are transmembrane proteins that depolarize the cytoplasmic membrane, leading to dissipation of cellular energy. These colicins contain at least three domains: an N-terminal translocation domain responsible for movement across the outer membrane and periplasmic space (T domain); a central domain responsible for receptor recognition (R domain); and a C-terminal cytotoxic domain responsible for channel formation in the cytoplasmic membrane (C domain). The R domain determines the target, binding to the receptor on the sensitive cell. The T domain is involved in translocation, co-opting the machinery of the target cell. The C domain is the 'killing' domain and may produce a pore in the target cell membrane, or act as a nuclease to chop up the DNA or RNA of the target cell.
Translocation
Most colicins are able to translocate the outer membrane by a two-receptor system, where one receptor is used for the initial binding and the second for translocation. The initial binding is to cell surface receptors such as the outer membrane proteins OmpF, FepA, BtuB, Cir and FhuA; colicins have been classified according to which receptors they bind to. The presence of specific periplasmic proteins, such as TolA, TolB, TolC, or TonB, are required for translocation across the membrane. Cloacin DF13 is a bacteriocin that inactivates ribosomes by hydrolysing 16S RNA in 30S ribosomes at a specific site.
Resistance
Because they target specific receptors and use specific tran
|
https://en.wikipedia.org/wiki/Lagrange%27s%20identity%20%28boundary%20value%20problem%29
|
In the study of ordinary differential equations and their associated boundary value problems, Lagrange's identity, named after Joseph Louis Lagrange, gives the boundary terms arising from integration by parts of a self-adjoint linear differential operator. Lagrange's identity is fundamental in Sturm–Liouville theory. In more than one independent variable, Lagrange's identity is generalized by Green's second identity.
Statement
In general terms, Lagrange's identity for any pair of functions u and v in function space C2 (that is, twice differentiable) in n dimensions is:
$$ v\,L[u] - u\,L^*[v] = \nabla \cdot \mathbf{M}, $$
where:
$$ M_i = \sum_{k=1}^{n} \left( a_{ik}\, v\, \frac{\partial u}{\partial x_k} - u\, \frac{\partial}{\partial x_k}\bigl(a_{ik}\, v\bigr) \right) + b_i\, u\, v, \qquad a_{ik} = a_{ki}. $$
The operator L and its adjoint operator L* are given by:
$$ L[u] = \sum_{i,k=1}^{n} a_{ik}\, \frac{\partial^2 u}{\partial x_i\, \partial x_k} + \sum_{i=1}^{n} b_i\, \frac{\partial u}{\partial x_i} + c\,u $$
and
$$ L^*[v] = \sum_{i,k=1}^{n} \frac{\partial^2 (a_{ik}\, v)}{\partial x_i\, \partial x_k} - \sum_{i=1}^{n} \frac{\partial (b_i\, v)}{\partial x_i} + c\,v. $$
If Lagrange's identity is integrated over a bounded region, then the divergence theorem can be used to form Green's second identity in the form:
$$ \int_\Omega \left( v\,L[u] - u\,L^*[v] \right) d\Omega = \oint_S \mathbf{M} \cdot \mathbf{n}\, dS, $$
where S is the surface bounding the volume Ω and n is the unit outward normal to the surface S.
Ordinary differential equations
Any second order ordinary differential equation of the form:
$$ a(x)\,y'' + b(x)\,y' + c(x)\,y = 0 $$
can be put in the form:
$$ \bigl(p(x)\,y'\bigr)' + q(x)\,y = 0. $$
This general form motivates introduction of the Sturm–Liouville operator L, defined as an operation upon a function f such that:
$$ L f = \bigl(p f'\bigr)' + q f. $$
It can be shown that for any u and v for which the various derivatives exist, Lagrange's identity for ordinary differential equations holds:
$$ u\,L v - v\,L u = \bigl[ p\,( u v' - v u' ) \bigr]'. $$
For ordinary differential equations defined in the interval [0, 1], Lagrange's identity can be integrated to obtain an integral form (also known as Green's formula):
$$ \int_0^1 ( u\,L v - v\,L u )\, dx = \Bigl[ p\,( u v' - v u' ) \Bigr]_0^1, $$
where u, v, p and q are functions of x, with u and v having continuous second derivatives on the interval [0, 1].
Proof of form for ordinary differential equations
We have:
$$ u\,L v = u\,(p v')' + u q v $$
and
$$ v\,L u = v\,(p u')' + v q u. $$
Subtracting:
$$ u\,L v - v\,L u = u\,(p v')' - v\,(p u')'. $$
The leading multiplied u and v can be moved inside the differentiation, because the extra differentiated terms in u and v are the same in the two subtracted terms and simply cancel each other. Thus,
$$ u\,L v - v\,L u = \bigl[ p\,( u v' - v u' ) \bigr]', $$
which is Lagrange's identity. Integrating from zero to one:
$$ \int_0^1 ( u\,L v - v\,L u )\, dx = \Bigl[ p\,( u v' - v u' ) \Bigr]_0^1, $$
as was to be shown.
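Since the identity is purely algebraic, it can also be checked symbolically. A minimal SymPy sketch (the function names are arbitrary choices):

```python
# Symbolic check of Lagrange's identity for the Sturm-Liouville operator
# L f = (p f')' + q f, with u, v, p, q arbitrary smooth functions of x.
import sympy as sp

x = sp.symbols('x')
u, v, p, q = (sp.Function(name)(x) for name in ('u', 'v', 'p', 'q'))

L = lambda f: sp.diff(p * sp.diff(f, x), x) + q * f   # Sturm-Liouville operator

lhs = u * L(v) - v * L(u)
rhs = sp.diff(p * (u * sp.diff(v, x) - v * sp.diff(u, x)), x)

assert sp.simplify(lhs - rhs) == 0
print("Lagrange's identity verified symbolically.")
```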
|
https://en.wikipedia.org/wiki/Riak
|
Riak (pronounced "ree-ack") is a distributed NoSQL key-value data store that offers high availability, fault tolerance, operational simplicity, and scalability. Riak moved to an entirely open-source project in August 2017, with many of the licensed Enterprise Edition features being incorporated. Riak implements the principles from Amazon's Dynamo paper with heavy influence from the CAP theorem. Written in Erlang, Riak has fault-tolerant data replication and automatic data distribution across the cluster for performance and resilience.
Riak has a pluggable backend for its core storage, with the default storage backend being Bitcask. LevelDB is also supported, with other options (such as the pure-Erlang Leveled) available depending on the version.
Riak was originally developed by engineers employed by Basho Technologies and maintained by them until 2017 when the rights were sold to bet365 after Basho went into receivership.
Main features
Fault-tolerant availability Riak replicates key/value stores across a cluster of nodes with a default n_val of three. In the case of node outages due to network partition or hardware failures, data can still be written to a neighboring node beyond the initial three, and read back later, thanks to its "masterless" peer-to-peer architecture.
Queries Riak provides a REST-ful API through HTTP and Protocol Buffers for basic PUT, GET, POST, and DELETE functions. More complex queries are also possible, including secondary indexes, search (via Apache Solr), and MapReduce. MapReduce has native support for both JavaScript (using the SpiderMonkey runtime) and Erlang.
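As a concrete illustration of the HTTP interface, a minimal Python sketch; it assumes a local node listening on port 8098 and the /buckets/.../keys/... URL scheme of recent Riak versions, both of which may differ per installation:

```python
# Basic key/value operations against Riak's HTTP API (assumed local node).
import requests

base = "http://localhost:8098/buckets/fruits/keys"

# Store a value under the key "apple" (PUT creates or replaces).
requests.put(f"{base}/apple", data=b"red",
             headers={"Content-Type": "text/plain"})

# Read it back.
resp = requests.get(f"{base}/apple")
print(resp.status_code, resp.text)   # expected: 200 red

# Delete the key.
requests.delete(f"{base}/apple")
```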
Predictable latency Riak distributes data across nodes with hashing and can provide a predictable latency profile, even in the case of multiple node failures.
Storage options Keys/values can be stored in memory, on disk, or both.
Multi-datacenter replication Multi-Datacenter replication (MDC) provides uni-directional and bi-directional replication of data between Riak clusters, whether locally for resi
|
https://en.wikipedia.org/wiki/Robert%20John%20Audley
|
Robert John Audley was a British psychologist whose research was concerned with choice and decision-making.
Career
Robert (Bob) Audley was born in London in 1925. Following national service, he obtained his BSc from University College London in 1949. Among his lecturers was E.S. Pearson. He then obtained a Fulbright scholarship which took him to Washington State University. On his return he completed a PhD supervised by A.R. Jonckheere at UCL. He was appointed to the faculty and remained there for the whole of his academic career. He served as head of department.
He was active in the British Psychological Society, of which he became president in 1981. His presidential address was on the subject of choice. He was Editor of the British Journal of Mathematical and Statistical Psychology from 1963 to 1969. He was also President of the Experimental Psychology Society, a role about which he was later interviewed.
Research
There were three strands to his research. In the first, as a mathematical psychologist, he developed a theory of choice to explain the process of decision-making (Audley, 1960; Audley & Pike, 1965). The second strand was on reaction time (Audley, Caudrey, Howell and Powell, 1975) and the third was on medical accidents (Audley, Vincent & Ennis, 1993).
Publications
Audley, R.J. (1960). A stochastic model for individual choice behaviour.
Audley, R.J., & Pike, A.R. (1965). Some alternative models of choice.
Audley, R.J. (1970). Choosing.
Audley RJ; Caudrey DJ; Howell P; Powell DJ (1975) Reaction Time Exchange Functions in Choice Tasks. In Attention and Performance V, (pp. 281–295).
Audley, R.J., Vincent, C., & Ennis, M. (Eds)(1993) Medical Accidents. OUP.
Positions
1969: President, British Psychological Society
1975: President, Experimental Psychology Society
|
https://en.wikipedia.org/wiki/Exogenote
|
An exogenote is a piece of donor DNA that is involved in the mating of prokaryotic organisms.
Transferred DNA of an Hfr (high frequency of recombination) strain is called the exogenote, and the homologous part of the F (fertility factor) genophore is called the endogenote. An exogenote is genetic material that is released into the environment by prokaryotic cells, usually upon their lysis. This exogenous genetic material is then free to be taken up by other competent bacteria, and used as a template for protein synthesis or broken down for its molecules to be used elsewhere in the cell. Taking up genetic material into the cell from the surrounding environment is a form of bacterial transformation. Exogenotes can also be transferred directly from donor to recipient bacteria as an F'-plasmid in a process known as bacterial conjugation. F'-plasmids form only when the integrated F factor is imprecisely excised from the host chromosome, which results in a small amount of donor DNA being carried along and transferred to the recipient with very high efficiency.
|
https://en.wikipedia.org/wiki/Jean%20M.%20Carlson
|
Jean Marie Carlson (born 1962) is a professor of complexity at the University of California, Santa Barbara. She studies robustness and feedback in highly connected complex systems, which have applications in a variety of areas including earthquakes, wildfires and neuroscience.
Early life and education
Carlson studied electrical engineering and computer science at Princeton University and graduated in 1984. She moved to Cornell University for her graduate studies, earning a master's in applied physics in 1987. In 1987 she switched to theoretical condensed matter physics for her doctoral studies and completed her PhD in 1988. She worked under the supervision of James Sethna on the spin glass model on the Bethe lattice. Carlson worked in the Kavli Institute for Theoretical Physics as a postdoctoral scholar with James S. Langer.
Research and career
Carlson was appointed to the faculty at the University of California, Santa Barbara in 1990. She works on the fundamental theory and applications of complex systems. She was awarded a David and Lucile Packard Foundation fellowship in 1993, which allowed her to study the physical and mathematical principles that underlie complexity. Carlson uses highly optimized tolerance (HOT) methods that connect evolving structure with power laws in highly interconnected systems. Carlson developed the HOT mechanism in the early 2000s, and has since applied it to complex systems including the immune system, earthquakes, wildfires and neuroscience. HOT represents a unifying framework that can couple with external environments, which differs from self-organized criticality and the edge of chaos.
Carlson has used computational systems biology to understand the immune system. She studies how the immune system changes with age, as well as autoimmune disease and homeostasis. Carlson worked with Eric Jones to develop a mathematical model that can analyse and predict interactions in the gut bacteria of fruit flies. It is hoped that this model will
|
https://en.wikipedia.org/wiki/Transcription%20coregulator
|
In molecular biology and genetics, transcription coregulators are proteins that interact with transcription factors to either activate or repress the transcription of specific genes. Transcription coregulators that activate gene transcription are referred to as coactivators while those that repress are known as corepressors. The mechanism of action of transcription coregulators is to modify chromatin structure and thereby make the associated DNA more or less accessible to transcription. In humans several dozen to several hundred coregulators are known, depending on the level of confidence with which the characterisation of a protein as a coregulator can be made. One class of transcription coregulators modifies chromatin structure through covalent modification of histones. A second, ATP-dependent class modifies the conformation of chromatin.
Histone acetyltransferases
Nuclear DNA is normally tightly wrapped around histones rendering the DNA inaccessible to the general transcription machinery and hence this tight association prevents transcription of DNA. At physiological pH, the phosphate component of the DNA backbone is deprotonated which gives DNA a net negative charge. Histones are rich in lysine residues which at physiological pH are protonated and therefore positively charged. The electrostatic attraction between these opposite charges is largely responsible for the tight binding of DNA to histones.
Many coactivator proteins have intrinsic histone acetyltransferase (HAT) catalytic activity or recruit other proteins with this activity to promoters. These HAT proteins are able to acetylate the amine group in the sidechain of histone lysine residues which makes lysine much less basic, not protonated at physiological pH, and therefore neutralizes the positive charges in the histone proteins. This charge neutralization weakens the binding of DNA to histones causing the DNA to unwind from the histone proteins and thereby significantly increases the rate of tr
|
https://en.wikipedia.org/wiki/Restriction%20digest
|
A restriction digest is a procedure used in molecular biology to prepare DNA for analysis or other processing. It is sometimes termed DNA fragmentation, though this term is used for other procedures as well. In a restriction digest, DNA molecules are cleaved at specific restriction sites of 4-12 nucleotides in length by use of restriction enzymes which recognize these sequences.
The resulting digested DNA is very often selectively amplified using the polymerase chain reaction (PCR), making it more suitable for analytical techniques such as agarose gel electrophoresis and chromatography. It is used in genetic fingerprinting, plasmid subcloning, and RFLP analysis.
Restriction site
A given restriction enzyme cuts DNA segments within a specific nucleotide sequence, at what is called a restriction site. These recognition sequences are typically four, six, eight, ten, or twelve nucleotides long and generally palindromic (i.e. the same nucleotide sequence in the 5' – 3' direction). Because there are only so many ways to arrange the four nucleotides that compose DNA (adenine, thymine, guanine and cytosine) into a four- to twelve-nucleotide sequence, recognition sequences tend to occur by chance in any long sequence. Restriction enzymes specific to hundreds of distinct sequences have been identified and synthesized for sale to laboratories, and as a result, several potential "restriction sites" appear in almost any gene or locus of interest on any chromosome. Furthermore, almost all artificial plasmids include an (often entirely synthetic) polylinker (also called "multiple cloning site") that contains dozens of restriction enzyme recognition sequences within a very short segment of DNA. This allows the insertion of almost any specific fragment of DNA into plasmid vectors, which can be efficiently "cloned" by insertion into replicating bacterial cells.
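In software, simulating a digest reduces to locating each recognition site and splitting the sequence at the enzyme's cut position. A small sketch follows; the input sequence is invented, while the site GAATTC and the cut after the first G match EcoRI's known behaviour:

```python
# Simulate a single-enzyme restriction digest on one strand.
def digest(dna: str, site: str = "GAATTC", cut_offset: int = 1):
    """Return fragments produced by cutting `dna` within every `site`."""
    fragments, start = [], 0
    pos = dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])   # cut inside the site
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

seq = "TTGAATTCAAGGCCGAATTCTT"
print(digest(seq))   # ['TTG', 'AATTCAAGGCCG', 'AATTCTT']
```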
After restriction digest, DNA can then be analysed using agarose gel electrophoresis. In gel electrophoresis, a sample of
|
https://en.wikipedia.org/wiki/Audio%20Signal%20Processor
|
The Audio Signal Processor (ASP) is a large-scale digital signal processor developed by James A. Moorer at Lucasfilm's The Droid Works. Moorer programmed a number of digital signal processing algorithms that were used in major motion picture features. Sounds processed by the ASP were used in the THX logo's Deep Note, Return of the Jedi, Indiana Jones and the Temple of Doom, and others.
ASP provided the technological basis for the SoundDroid.
|
https://en.wikipedia.org/wiki/Pillai%20sequence
|
The Pillai sequence is the sequence of integers that have a record number of terms in their greedy representations as sums of prime numbers (and one).
It is named after Subbayya Sivasankaranarayana Pillai, who first defined it in 1930.
It would follow from Goldbach's conjecture that every integer greater than one can be represented as a sum of at most three prime numbers. However, finding such a representation could involve solving instances of the subset sum problem, which is computationally difficult. Instead, Pillai considered the following simpler greedy algorithm for finding a representation of n as a sum of primes: choose the first prime in the sum to be the largest prime p that is at most n, and then recursively construct a representation of the remainder, n − p.
If this process reaches zero, it halts. If it reaches one instead of zero, it must include one in the sum (even though one is not prime), and then halt.
For instance, this algorithm represents 122 as 113 + 7 + 2, even though the shorter representations 61 + 61 or 109 + 13 are also possible.
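A direct transcription of this greedy procedure, using SymPy's prime utilities (illustrative only; it cannot reach the astronomically large later terms of the sequence):

```python
# Greedy representation of n as a sum of primes (and possibly a final 1).
from sympy import prevprime

def greedy_primes(n: int) -> list[int]:
    terms = []
    while n > 1:
        p = prevprime(n + 1)   # largest prime <= n
        terms.append(p)
        n -= p
    if n == 1:
        terms.append(1)        # 1 is not prime but may end the sum
    return terms

print(greedy_primes(122))   # [113, 7, 2]
print(greedy_primes(27))    # [23, 3, 1] -- three terms
```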
The nth number in the Pillai sequence is the smallest number whose greedy representation as a sum of primes (and one) requires n terms. These numbers are
0, 1, 4, 27, 1354, 401429925999155061, ...
Each number in the sequence is the sum of the previous number with a prime number p, the smallest prime whose following prime gap is larger than the previous number. For instance, the number 27 in the sequence is 4 + 23, where the first prime gap larger than 4 is the one between 23 and 29.
Because the prime numbers become less dense as they become larger (as quantified by the prime number theorem), there is always a prime gap larger than any term in the Pillai sequence, so the sequence continues to an infinite number of terms. However, the terms in the sequence grow very rapidly. It has been estimated that expressing the next term in the sequence would require "hundreds of millions of digits".
|
https://en.wikipedia.org/wiki/C%C3%A9line%20B%C5%93hm
|
Céline Bœhm is a Professor of Particle Physics at the University of Sydney. She works on astroparticle physics and dark matter.
Early life and education
Bœhm studied fundamental physics at the Pierre and Marie Curie University, graduating in 1997. She joined École Polytechnique, where she obtained a Master in Engineering in 1998. She earned the highest distinction for a postgraduate diploma in theoretical physics. She completed her PhD at the École normale supérieure in Paris in 2001, working with Pierre Fayet. She worked on supersymmetry, in particular the 4-body decay of the stop particle, and studied the light scalar top quark and supersymmetric dark matter. She also looked at collisional damping, which considers the impact of interactions between dark matter and standard model particles on the cosmic microwave background.
Career and research
In 2001 Bœhm joined Joseph Silk at the University of Oxford. Here she worked on light dark matter particles which couple to light Z′ bosons. She proposed new candidates for scalar dark matter, in the form of heavy fermions or light gauge bosons. When the SPI spectrometer onboard INTEGRAL identified a 511 keV line in the Galactic Center, Bœhm predicted that this could have been the signature of dark matter. She has continued to search for new signatures of dark matter, including examining the GeV excess in the Fermi Gamma-ray Space Telescope data. In 2004 Bœhm joined the Laboratoire d'Annecy-le-Vieux de Physique Théorique, where she was promoted to senior lecturer in 2008. She was awarded the Centre national de la recherche scientifique Bronze Medal.
She looked at the analysis of the CoGeNT direct detection method, and found that it could have suffered from a large background. In 2015 Boehm was nominated as Fellow of the Institute of Physics. She is the Principal investigator of the Theia mission, a space observatory which will allow Bœhm and her team to test the dark matter predictions that arise due to the Lambda-CDM model.
Boehm was made an Emmy Noether F
|
https://en.wikipedia.org/wiki/Trajectory%20inference
|
Trajectory inference or pseudotemporal ordering is a computational technique used in single-cell transcriptomics to determine the pattern of a dynamic process experienced by cells and then arrange cells based on their progression through the process. Single-cell protocols have much higher levels of noise than bulk RNA-seq, so a common step in a single-cell transcriptomics workflow is the clustering of cells into subgroups. Clustering can contend with this inherent variation by combining the signal from many cells, while allowing for the identification of cell types. However, some differences in gene expression between cells are the result of dynamic processes such as the cell cycle, cell differentiation, or response to external stimuli. Trajectory inference seeks to characterize such differences by placing cells along a continuous path that represents the evolution of the process rather than dividing cells into discrete clusters. In some methods this is done by projecting cells onto an axis called pseudotime which represents the progression through the process.
Methods
Since 2015, more than 50 algorithms for trajectory inference have been created. Although the approaches taken are diverse there are some commonalities to the methods. Typically, the steps in the algorithm consist of dimensionality reduction to reduce the complexity of the data, trajectory building to determine the structure of the dynamic process, and projection of the data onto the trajectory so that cells are positioned by their evolution through the process and cells with similar expression profiles are situated near each other. Trajectory inference algorithms differ in the specific procedure used for dimensionality reduction, the kinds of structures that can be used to represent the dynamic process, and the prior information that is required or can be provided.
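For intuition, the simplest possible instance of this pipeline, assuming a purely linear trajectory, is to reduce the expression matrix with PCA and rank cells along the first component. The data below are synthetic, and real tools (Monocle, Slingshot, and others) are far more elaborate:

```python
# Toy pseudotime: PCA to one dimension, then rank cells along that axis.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, size=200)                 # hidden progression per cell
expr = np.outer(t, rng.normal(size=50))         # 200 cells x 50 genes
expr += rng.normal(scale=0.1, size=expr.shape)  # measurement noise

pc1 = PCA(n_components=1).fit_transform(expr).ravel()
pseudotime = np.argsort(np.argsort(pc1)) / (len(pc1) - 1)  # ranks in [0, 1]

# Up to orientation, pseudotime should track the hidden progression t.
print(abs(np.corrcoef(pseudotime, t)[0, 1]))    # close to 1
```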
Dimensionality reduction
The data produced by single-cell RNA-seq can consist of thousands of cells each with expression levels r
|
https://en.wikipedia.org/wiki/Steak%20sauce
|
Steak sauce is a tangy sauce commonly served as a condiment for beef in the United States. Two of its major producers are British companies, and the sauce is similar to the "brown sauce" of British cuisine.
Overview
Steak sauce is normally brown in color, and often made from tomatoes, spices, vinegar, and raisins, and sometimes anchovies. The taste is either tart or sweet, with a peppery taste similar to Worcestershire sauce. Three major brands in the U.S. are the British Lea & Perrins, the United States Heinz 57, and the British Henderson's A1 Sauce once sold in the United States as "A1 Steak Sauce" before being renamed "A.1. Sauce". There are also numerous regional brands that feature a variety of flavor profiles. Several smaller companies and specialty producers manufacture steak sauce, as well, and most major grocery store chains offer private-label brands. These sauces typically mimic the slightly sweet flavor of A1 or Lea & Perrins.
Heinz 57 steak sauce, produced by H. J. Heinz Company, is unlike other steak sauces in that it has a distinctive dark orange-yellow color and tastes more like ketchup spiced with mustard seed. Heinz once advertised the product as tasting "like ketchup with a kick".
See also
Béarnaise sauce
Café de Paris sauce
Compound butter
Demi-glace
Henderson's Relish
List of sauces
Montreal steak seasoning
Peppercorn sauce
|
https://en.wikipedia.org/wiki/BMS-470539
|
BMS-470539 is a small-molecule experimental drug which acts as a potent and highly selective full agonist of the MC1 receptor. It was discovered in 2003 as part of an effort to understand the role of the MC1 receptor in immunomodulation, and has since been used in scientific research to determine its role in inflammatory processes. The compound was designed with the intention of mimicking the central His-Phe-Arg-Trp pharmacophore of the melanocortins, and this proved to be successful based on its favorable pharmacodynamic profile.
See also
Afamelanotide
Bremelanotide
Melanotan II
Modimelanotide
Setmelanotide
|
https://en.wikipedia.org/wiki/Gap%20dynamics
|
Gap dynamics refers to the pattern of plant growth that occurs following the creation of a forest gap, a local area of natural disturbance that results in an opening in the canopy of a forest. Gap dynamics are a typical characteristic of both temperate and tropical forests and have a wide variety of causes and effects on forest life.
Gaps are the result of natural disturbances in forests, ranging from a large branch breaking off and dropping from a tree, to a tree dying and falling over, bringing its roots to the surface of the ground, to landslides bringing down large groups of trees. Because of this range of causes, gaps vary widely in size. Regardless of size, gaps allow an increase in light as well as changes in moisture and wind levels, leading to microclimate conditions that differ from those below the closed canopy, which are generally cooler and more shaded.
For gap dynamics to occur in naturally disturbed areas, either primary or secondary succession must occur. Ecological secondary succession is much more common and pertains to the process of vegetation replacement after a natural disturbance. Secondary succession results in second-growth or secondary forest, which currently covers more of the tropics than old-growth forest.
Since gaps let in more light and create diverse microclimates, they provide the ideal location and conditions for rapid plant reproduction and growth. In fact, most plant species in the tropics are dependent, at least in part, on gaps to complete their life cycles.
Disturbances
Gap dynamics are the result of disturbances within an ecosystem. There are both large scale and small scale disturbances, and both are influenced by duration and frequency. These all affect the resulting impact and regeneration patterns of the ecosystem.
The most common type of disturbance within a tropical ecosystem is fire. Since most nutrients in a tropical ecosystem are contained in the
|
https://en.wikipedia.org/wiki/Weiss%20magneton
|
The Weiss magneton was an experimentally derived unit of magnetic moment equal to about 1.85 × 10−24 joules per tesla, roughly 20% of the Bohr magneton. It was suggested in 1911 by Pierre Weiss.
Origin
The idea of elementary magnets originated from the Swiss physicist Walther Ritz, who tried to explain atomic spectra. In 1907 he suggested that atoms might contain chains of magnetized and neutral rods, which were the cause of magnetic properties of materials. Just like elementary charges, this was supposed to give rise to discrete values of the total magnetic moment per atom. In 1909, Weiss performed measurements of the saturation magnetization at the temperature of liquid hydrogen in the laboratory of Heike Kamerlingh Onnes in Leiden. In 1911, Weiss announced that the molar moments of nickel and iron had the ratio of 3:11, from which he derived the value of a magneton.
Comparisons with early quantum theory
Weiss gave an address about the magneton at a conference in Karlsruhe in September 1911. Several theorists commented that the magneton should involve Planck's constant h. By postulating that the ratio of electron kinetic energy to orbital frequency should be equal to h, Richard Gans computed a value that was almost an order of magnitude larger than the value obtained by Weiss. At the First Solvay Conference in November that year, Paul Langevin obtained a submultiple which gave better agreement. But once the old quantum theory was a bit better understood, no theoretical argument could be found to justify Weiss's value. In 1920, Wolfgang Pauli wrote an article where he called the magneton of the experimentalists the Weiss magneton, and the theoretical value the Bohr magneton.
Further experiments
Despite theoretical problems, Weiss and other experimentalists like Blas Cabrera continued to analyze data in terms of the Weiss magneton until the 1930s.
|
https://en.wikipedia.org/wiki/Remarriage
|
Remarriage is a marriage that takes place after a previous marital union has ended, as through divorce or widowhood.
Some individuals are more likely to remarry than others; the likelihood can differ based on previous relationship status (e.g. divorced vs. widowed), level of interest in establishing a new romantic relationship, gender, culture, and age among other factors. Those who choose not to remarry may prefer alternative arrangements like cohabitation or living apart together.
Remarriage also provides mental and physical health benefits. However, although remarried individuals tend to have better health than individuals who do not repartner, they still generally have worse health than individuals who have remained continuously married. Remarriage is addressed differently in various religions and denominations of those religions. Someone who repeatedly remarries is referred to as a serial wedder.
Remarriage following divorce or separation
As of 1995, depending on individual and contextual factors, up to 50% of couples in the USA ended their first marriage in divorce or permanent separation (i.e. the couple is not officially divorced but they no longer live together or share assets). Couples typically end their marriage because they are unhappy during the partnership; however, while these couples give up hope for their partner, this does not mean they give up on the institution of marriage. The majority of people who have divorced (close to 80%) go on to marry again. On average, they remarry just under 4 years after divorcing; younger adults tend to remarry more quickly than older adults. For women, just over half remarry in less than 5 years, and by 10 years after a divorce 75% have remarried.
People may be eager to remarry because they do not see themselves as responsible for the previous marriage ending. Generally, they are more likely to believe their partner's behaviors caused the divorce, and minimize the influence of their own actions. Therefore, they r
|
https://en.wikipedia.org/wiki/ChIA-PET
|
Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET or ChIA-PETS) is a technique that incorporates chromatin immunoprecipitation (ChIP)-based enrichment, chromatin proximity ligation, paired-end tags, and high-throughput sequencing to determine de novo long-range chromatin interactions genome-wide.
Genes can be regulated by regions far from the promoter such as regulatory elements, insulators and boundary elements, and transcription-factor binding sites (TFBS). Uncovering the interplay between regulatory regions and gene coding regions is essential for understanding the mechanisms governing gene regulation in health and disease (Maston et al., 2006). ChIA-PET can be used to identify unique, functional chromatin interactions between distal and proximal regulatory transcription-factor binding sites and the promoters of the genes they interact with.
ChIA-PET can also be used to unravel the mechanisms of genome control during processes such as cell differentiation, proliferation, and development. By creating ChIA-PET interactome maps for DNA-binding regulatory proteins and promoter regions, we can better identify unique targets for therapeutic intervention (Fullwood & Yijun, 2009).
Methodology
The ChIA-PET method combines ChIP-based methods, and Chromosome conformation capture (3C) based methods, to extend the capabilities of both approaches. ChIP-Sequencing (ChIP-Seq) is a popular method used to identify TFBS while 3C has been used to identify long-range chromatin interactions. Independently, both suffer from limitations in identifying de-novo long-range interactions genome wide. While ChIP-Seq is able to identify TFBS genome-wide, it provides only linear information of protein binding sites along the chromosomes (but not interactions between them), and can suffer from high genomic background noise (false positives). While 3C is capable of analyzing non-linear, long-range chromatin interactions, it cannot be used genome wide and, like ChIP-Seq, als
|
https://en.wikipedia.org/wiki/Martin%20curve
|
The Martin curve is a power law used by oceanographers to describe the export to the ocean floor of particulate organic carbon (POC). The curve is controlled with two parameters: the reference depth in the water column, and a remineralisation parameter which is a measure of the rate at which the vertical flux of POC attenuates. It is named after the American oceanographer John Martin.
The Martin curve has been used in the study of ocean carbon cycling and has contributed to understanding the role of the ocean in regulating atmospheric CO2 levels.
Background
The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle. POC is the link between surface primary production, the deep ocean, and marine sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO2 concentration.
The biological carbon pump (BCP) is a crucial mechanism by which atmospheric CO2 is taken up by the ocean and transported to the ocean interior. Without the BCP, the pre-industrial atmospheric CO2 concentration (~280 ppm) would have risen to ~460 ppm. At present, the particulate organic carbon (POC) flux from the surface layer of the ocean to the ocean interior has been estimated to be 4–13 Pg-C year−1. To evaluate the efficiency of the BCP, it is necessary to quantify the vertical attenuation of the POC flux with depth because the deeper that POC is transported, the longer the CO2 will be isolated from the atmosphere. Thus, an increase in the efficiency of the BCP has the potential to cause an increase of ocean carbon sequestration of atmospheric CO2 that would result in a negative feedback on global warming. Different researchers have investigated the vertical attenuation of the POC flux since the 1980s.
In 1987, Martin et al. proposed the following power law function to describe the POC flux attenuation:
$$ F_z = F_{100} \left( \frac{z}{100} \right)^{-b} \qquad (1) $$
where z is water depth (m), and Fz and F100 are the POC fluxes at depths of z metres and 100 metres respect
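A few lines of Python make the attenuation concrete; the exponent b = 0.858 is the open-ocean composite value reported by Martin et al. (1987), while the reference flux is an arbitrary illustrative number:

```python
# Martin curve: F(z) = F_100 * (z / 100) ** (-b).
def martin_flux(z: float, f100: float = 10.0, b: float = 0.858) -> float:
    """POC flux (arbitrary units) at depth z metres, for z >= 100 m."""
    return f100 * (z / 100.0) ** (-b)

for depth in (100, 500, 1000, 4000):
    print(f"{depth:5d} m : {martin_flux(depth):6.2f}")
# Flux falls to ~25% of its 100 m value by 500 m and ~4% by 4000 m.
```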
|
https://en.wikipedia.org/wiki/Coelenteramine
|
Coelenteramine is a metabolic product of the bioluminescent reactions in organisms that utilize coelenterazine. It was first isolated from Aequorea victoria along with coelenteramide after coelenterates were stimulated to emit light.
|
https://en.wikipedia.org/wiki/Herd
|
A herd is a social group of certain animals of the same species, either wild or domestic. The form of collective animal behavior associated with this is called herding. These animals are known as gregarious animals.
The term herd is generally applied to mammals, and most particularly to the grazing ungulates that classically display this behaviour. Different terms are used for similar groupings in other species; in the case of birds, for example, the word is flocking, but flock may also be used for mammals, particularly sheep or goats. Large groups of carnivores are usually called packs, and in nature a herd is classically subject to predation from pack hunters.
Special collective nouns may be used for particular taxa (for example a flock of geese, if not in flight, is sometimes called a gaggle) but for theoretical discussions of behavioural ecology, the generic term herd can be used for all such kinds of assemblage.
The word herd, as a noun, can also refer to one who controls, possesses and has care for such groups of animals when they are domesticated. Examples of herds in this sense include shepherds (who tend to sheep), goatherds (who tend to goats), and cowherds (who tend to cattle).
The structure and size of herds
When an association of animals (or, by extension, people) is described as a herd, the implication is that the group tends to act together (for example, all moving in the same direction at a given time), but that this does not occur as a result of planning or coordination. Rather, each individual is choosing behaviour in correspondence with most other members, possibly through imitation or possibly because all are responding to the same external circumstances. A herd can be contrasted with a coordinated group where individuals have distinct roles. Many human groupings, such as army detachments or sports teams, show such coordination and differentiation of roles, but so do some animal groupings such as those of eusocial insects, which are coordina
|
https://en.wikipedia.org/wiki/Richardson%27s%20theorem
|
In mathematics, Richardson's theorem establishes the undecidability of the equality of real numbers defined by expressions involving integers, π, ln 2, and exponential and sine functions. It was proved in 1968 by mathematician and computer scientist Daniel Richardson of the University of Bath.
Specifically, the class of expressions for which the theorem holds is that generated by rational numbers, the number π, the number ln 2, the variable x, the operations of addition, subtraction, multiplication, composition, and the sin, exp, and abs functions.
For some classes of expressions (generated by other primitives than in Richardson's theorem) there exist algorithms that can determine whether an expression is zero.
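In practice, computer algebra systems apply heuristics that settle many particular instances even though no algorithm can decide zero-equivalence over the whole class named above. A small SymPy illustration (both expressions, chosen arbitrarily, are identically zero):

```python
# Zero-equivalence testing succeeds here by heuristic simplification,
# though Richardson's theorem rules out a general decision procedure.
import sympy as sp

x = sp.symbols('x')
e1 = sp.sin(x)**2 + sp.cos(x)**2 - 1
e2 = sp.sin(2*x) - 2*sp.sin(x)*sp.cos(x)

print(sp.simplify(e1))   # 0
print(sp.simplify(e2))   # 0
```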
Statement of the theorem
Richardson's theorem can be stated as follows:
Let E be a set of expressions that represent functions. Suppose that E includes these expressions:
x (representing the identity function)
e^x (representing the exponential function)
sin x (representing the sin function)
all rational numbers, ln 2, and π (representing constant functions that ignore their input and produce the given number as output)
Suppose E is also closed under a few standard operations. Specifically, suppose that if A and B are in E, then all of the following are also in E:
A + B (representing the pointwise addition of the functions that A and B represent)
A − B (representing pointwise subtraction)
AB (representing pointwise multiplication)
A∘B (representing the composition of the functions represented by A and B)
Then the following decision problems are unsolvable:
Deciding whether an expression A in E represents a function that is nonnegative everywhere
If E includes also the expression |x| (representing the absolute value function), deciding whether an expression A in E represents a function that is zero everywhere
If E includes an expression B representing a function whose antiderivative has no representative in E, deciding whether an expression A in E re
|
https://en.wikipedia.org/wiki/Tetrad%20formalism
|
The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting an arbitrary dimension n for 4. In German, "vier" translates to "four", and "viel" to "many".
The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique.
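Numerically, "metric as a product of two vielbeins" is just a matrix identity, g = eᵀηe. The sketch below builds the diagonal Schwarzschild metric at one sample point from an orthonormal vierbein; the coordinates and values are illustrative:

```python
# Build g_{mu nu} = e^a_mu  eta_ab  e^b_nu from a diagonal vierbein.
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # flat Minkowski metric

r, rs, theta = 10.0, 2.0, np.pi / 4        # sample point; rs = Schwarzschild radius
f = 1.0 - rs / r
e = np.diag([np.sqrt(f), 1.0 / np.sqrt(f), r, r * np.sin(theta)])   # vierbein

g = e.T @ eta @ e                          # metric tensor at the sample point
print(np.diag(g))   # [-(1 - rs/r), 1/(1 - rs/r), r**2, r**2 * sin(theta)**2]
```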
The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions.
The significance of the tetradic formalism appears in the E
|