source | text
|---|---|
https://en.wikipedia.org/wiki/Anfinsen%27s%20dogma | Anfinsen's dogma, also known as the thermodynamic hypothesis, is a postulate in molecular biology. It states that, at least for a small globular protein in its standard physiological environment, the native structure is determined only by the protein's amino acid sequence. The dogma was championed by the Nobel Prize Laureate Christian B. Anfinsen from his research on the folding of ribonuclease A. The postulate amounts to saying that, at the environmental conditions (temperature, solvent concentration and composition, etc.) at which folding occurs, the native structure is a unique, stable and kinetically accessible minimum of the free energy. In other words, there are three conditions for formation of a unique protein structure:
Uniqueness – Requires that the sequence does not have any other configuration with a comparable free energy. Hence the free energy minimum must be unchallenged.
Stability – Small changes in the surrounding environment cannot give rise to changes in the minimum configuration. This can be pictured as a free energy surface that looks more like a funnel (with the native state in the bottom of it) rather than like a soup plate (with several closely related low-energy states); the free energy surface around the native state must be rather steep and high, in order to provide stability.
Kinetic accessibility – Means that the path in the free energy surface from the unfolded to the folded state must be reasonably smooth or, in other words, that the folding of the chain must not involve highly complex changes in shape (like knots or other high-order conformations). That said, a protein's shape can still change with its environment, shifting to suit its surroundings, which gives biomolecules multiple configurations to shift into.
Challenges to Anfinsen's dogma
Protein folding in a cell is a highly complex process that involves transport of the newly synthesized proteins to appropriate cellular compartments through targetin |
https://en.wikipedia.org/wiki/William%20B.%20Provine | William Ball Provine (February 19, 1942 – September 1, 2015) was an American historian of science and of evolutionary biology and population genetics. He was the Andrew H. and James S. Tisch Distinguished University Professor at Cornell University and was a professor in the Departments of History, Science and Technology Studies, and Ecology and Evolutionary Biology.
Biography
Provine was born in Tennessee. He held a B.S. in mathematics (1962), and an M.A. (1965) and Ph.D. (1970) in History of Science from the University of Chicago. He joined the Cornell faculty in 1969. He suffered seizures in 1995 due to a brain tumor. Provine died on September 1, 2015, due to complications from the tumor.
History of theoretical population genetics
Provine's Ph.D. thesis, later published as a book, documented the early origins of theoretical population genetics in the conflicts between the biostatistics and Mendelian schools of thought. He documented later developments in theoretical population genetics in his biography of Sewall Wright, who was still alive and available for interviews. In this book, Provine criticizes Wright for confounding three different concepts of adaptive landscape: genotype to fitness landscapes, allele frequency to fitness landscapes, and phenotype to fitness landscapes. Provine later grew critical of Wright's views on genetic drift, instead attributing observed effects to the consequences of inbreeding and consequent selection at linked sites. John H. Gillespie credits Provine with stimulating his interest in the topic of hitchhiking or "genetic draft" as an alternative to genetic drift. Provine later published his critique of genetic drift in a book. Provine defended the importance of mathematics' contribution to the modern evolutionary synthesis.
Education reform
In 1970, Provine was instrumental in the founding of Cornell's Risley Residential College. He was the first faculty member in residence.
Philosophy
Provine was a philosopher, atheist, an |
https://en.wikipedia.org/wiki/Covariance%20matrix | In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Intuitively, the covariance matrix generalizes the notion of variance to multiple dimensions. As an example, the variation in a collection of random points in two-dimensional space cannot be characterized fully by a single number, nor would the variances in the $x$ and $y$ directions contain all of the necessary information; a $2 \times 2$ matrix would be necessary to fully characterize the two-dimensional variation.
Any covariance matrix is symmetric and positive semi-definite and its main diagonal contains variances (i.e., the covariance of each element with itself).
The covariance matrix of a random vector $\mathbf{X}$ is typically denoted by $\operatorname{K}_{\mathbf{X}\mathbf{X}}$, $\Sigma$ or $S$.
Definition
Throughout this article, boldfaced unsubscripted $\mathbf{X}$ and $\mathbf{Y}$ are used to refer to random vectors, and Roman subscripted $X_i$ and $Y_i$ are used to refer to scalar random variables.
If the entries in the column vector
$$\mathbf{X} = (X_1, X_2, \dots, X_n)^{\mathsf{T}}$$
are random variables, each with finite variance and expected value, then the covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ is the matrix whose $(i,j)$ entry is the covariance
$$\operatorname{K}_{X_i X_j} = \operatorname{cov}[X_i, X_j] = \operatorname{E}\big[(X_i - \operatorname{E}[X_i])(X_j - \operatorname{E}[X_j])\big],$$
where the operator $\operatorname{E}$ denotes the expected value (mean) of its argument.
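As an illustrative sketch (NumPy assumed; not part of the article), the entry-wise definition above can be implemented directly and checked against the properties stated earlier — symmetry, positive semi-definiteness, and variances on the main diagonal:

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 samples of a 3-dimensional random vector X
X = rng.normal(size=(1000, 3))

# Sample estimate of K[i][j] = E[(X_i - E[X_i]) (X_j - E[X_j])]
centered = X - X.mean(axis=0)
K = centered.T @ centered / (X.shape[0] - 1)   # unbiased estimator

# Properties stated above: symmetric, positive semi-definite,
# variances on the main diagonal.
assert np.allclose(K, K.T)                             # symmetric
assert np.all(np.linalg.eigvalsh(K) >= -1e-12)         # positive semi-definite
assert np.allclose(np.diag(K), X.var(axis=0, ddof=1))  # diagonal = variances

# Agrees with NumPy's built-in estimator (rows = observations)
assert np.allclose(K, np.cov(X, rowvar=False))
```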
Conflicting nomenclatures and notations
Nomenclatures differ. Some statisticians, following the probabilist William Feller in his two-volume book An Introduction to Probability Theory and Its Applications, call the matrix $\operatorname{var}(\mathbf{X})$ the variance of the random vector $\mathbf{X}$, because it is the natural generalization to higher dimensions of the 1-dimensional variance. Others call it the covariance matrix, because it is the matrix of covariances between the scalar components of the vector $\mathbf{X}$.
Both forms are quite standard, and there is no ambiguity between them. The matrix is also often called the variance-covariance matrix, since the diagonal terms are in fact variances.
By comparis |
https://en.wikipedia.org/wiki/Demagnetizing%20field | The demagnetizing field, also called the stray field (outside the magnet), is the magnetic field (H-field) generated by the magnetization in a magnet. The total magnetic field in a region containing magnets is the sum of the demagnetizing fields of the magnets and the magnetic field due to any free currents or displacement currents. The term demagnetizing field reflects its tendency to act on the magnetization so as to reduce the total magnetic moment. It gives rise to shape anisotropy in ferromagnets with a single magnetic domain and to magnetic domains in larger ferromagnets.
The demagnetizing field of an arbitrarily shaped object requires a numerical solution of Poisson's equation even for the simple case of uniform magnetization. For the special case of ellipsoids (including infinite cylinders) the demagnetization field is linearly related to the magnetization by a geometry dependent constant called the demagnetizing factor. Since the magnetization of a sample at a given location depends on the total magnetic field at that point, the demagnetization factor must be used in order to accurately determine how a magnetic material responds to a magnetic field. (See magnetic hysteresis.)
Magnetostatic principles
Maxwell's equations
In general the demagnetizing field is a function of position $\mathbf{r}$. It is derived from the magnetostatic equations for a body with no electric currents. These are Ampère's law
$$\nabla \times \mathbf{H} = 0,$$
and Gauss's law
$$\nabla \cdot \mathbf{B} = 0.$$
The magnetic field and flux density are related by
$$\mathbf{B} = \mu_0\,(\mathbf{H} + \mathbf{M}),$$
where $\mu_0$ is the permeability of vacuum and $\mathbf{M}$ is the magnetisation.
The magnetic potential
The general solution of the first equation can be expressed as the gradient of a scalar potential $U(\mathbf{r})$:
$$\mathbf{H} = -\nabla U.$$
Inside the magnetic body, the potential $U_{\text{in}}$ is determined by substituting $\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})$ and $\mathbf{H} = -\nabla U$ into $\nabla \cdot \mathbf{B} = 0$:
$$\nabla^2 U_{\text{in}} = \nabla \cdot \mathbf{M}.$$
Outside the body, where the magnetization is zero,
$$\nabla^2 U_{\text{out}} = 0.$$
At the surface of the magnet, there are two continuity requirements:
The component of $\mathbf{H}$ parallel to the surface must be continuous (no jump in value at the surface).
The compo |
https://en.wikipedia.org/wiki/Ramanujan%20summation | Ramanujan summation is a technique invented by the mathematician Srinivasa Ramanujan for assigning a value to divergent infinite series. Although the Ramanujan summation of a divergent series is not a sum in the traditional sense, it has properties that make it mathematically useful in the study of divergent infinite series, for which conventional summation is undefined.
Summation
Since the full sum of a divergent series has no conventional value, Ramanujan summation operates on the partial sums. Taking the Euler–Maclaurin summation formula together with the correction rule using Bernoulli numbers $B_k$, we see that:
$$\frac{1}{2}f(0) + f(1) + \cdots + f(n-1) + \frac{1}{2}f(n) = \int_0^n f(x)\,dx + \sum_{k=1}^{p} \frac{B_{2k}}{(2k)!}\left[f^{(2k-1)}(n) - f^{(2k-1)}(0)\right] + R_p.$$
Ramanujan wrote it for the case p going to infinity, and changing the limits of the integral and the corresponding summation:
where C is a constant specific to the series and its analytic continuation and the limits on the integral were not specified by Ramanujan, but presumably they were as given above. Comparing both formulae and assuming that R tends to 0 as x tends to infinity, we see that, in a general case, for functions f(x) with no divergence at x = 0:
where Ramanujan assumed By taking we normally recover the usual summation for convergent series. For functions f(x) with no divergence at x = 1, we obtain:
C(0) was then proposed as the value of the divergent series, serving as a kind of bridge between summation and integration.
The most common application of Ramanujan summation is to the Riemann zeta function $\zeta(s)$: the Ramanujan summation of $\sum_{n \ge 1} n^{-s}$ has the same value as $\zeta(s)$ for all values of $s$, even those for which the ordinary series is divergent, which is equivalent to performing analytic continuation or, alternatively, applying smoothed sums.
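As a hedged numerical illustration (not from the article): for the divergent series 1 + 2 + 3 + ⋯, an exponentially smoothed partial sum behaves like 1/ε² − 1/12 + O(ε²); removing the divergent 1/ε² piece exposes the constant −1/12, the same value that ζ(−1) assigns:

```python
import math

def smoothed_sum(eps, n_terms=5000):
    """Exponentially regulated partial sum of 1 + 2 + 3 + ..."""
    return sum(n * math.exp(-n * eps) for n in range(1, n_terms + 1))

eps = 0.01
s = smoothed_sum(eps)
# Subtracting the divergent 1/eps^2 piece exposes the constant term,
# which matches zeta(-1) = -1/12 up to small corrections in eps.
constant = s - 1.0 / eps**2
print(constant)   # close to -1/12 = -0.08333...
assert abs(constant + 1.0 / 12.0) < 1e-3
```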
The convergent version of summation for functions with appropriate growth condition is then:
Ramanujan summation of divergent series
In the following text, indicates "Ramanujan summation". This formula originally appeared in one of Ramanujan's notebooks, without any notation to indic |
https://en.wikipedia.org/wiki/Sparkie%20Williams | Sparkie Williams (1954–1962) was a talking budgie who had a repertoire of more than 500 words and eight nursery rhymes, becoming a national celebrity after fronting an advertising campaign for Capern's bird seed, and making a record which sold 20,000 copies. After he died, he was stuffed and put on show at Newcastle's Hancock Museum. Sparkie provided the inspiration for an opera by Michael Nyman and Carsten Nicolai. The opera was performed in Berlin in March 2009.
History
Hatched and bred in North East England, Sparkie was owned by Mrs. Mattie Williams, who lived in Forest Hall, near Newcastle-upon-Tyne. He earned his name after Mrs Williams called him "A bright little spark", and she taught him to speak, recite songs and sing nursery rhymes. Sparkie had a huge repertoire of words and sayings. By the time he was three-and-a-half, he had won the BBC International Cage Word Contest in July 1958. He was so good, in fact, that he was disqualified from taking part again.
Sparkie was courted by bird seed sellers and fronted the advertisement campaign for Capern's bird seed for two years. He was recorded talking with budgie expert Philip Marsden on BBC radio, and appeared on the BBC Tonight programme with Cliff Michelmore. When Sparkie died on Tuesday 4 December 1962, Mattie Williams had him stuffed and mounted on a wooden perch at the renowned taxidermy establishment, Rowland Ward Ltd. of Piccadilly, London. He was then taken on a tour of Britain in an exhibition of his life and work, before coming back to the Hancock Museum in 1996. Sparkie Williams is acclaimed as the world's most outstanding talking bird in the Guinness Book of Records.
Sparkie and his archive now form part of the collections owned by the Natural History Society of Northumbria.
Opera
The opera inspired by Sparkie is based on Michael Nyman's 1977 piece Pretty Talk. The original piece used material from a record made by Capern's bird food company to help customers teach their pet birds to talk. The 7 |
https://en.wikipedia.org/wiki/Harmonic%20mixing | Harmonic mixing or key mixing (also referred to as mixing in key) is a DJ's continuous mix between two pre-recorded tracks that are most often either in the same key, or their keys are relative or in a subdominant or dominant relationship with one another.
The primary goal of harmonic mixing is to create a smooth transition between songs. Songs in the same key do not generate a dissonant tone when mixed. This technique enables DJs to create a harmonious and consonant mashup with any music genre.
The Camelot wheel can be used for harmonic mixing. It is based on the circle of fifths.
Traditional methods
A commonly known method of using harmonic mixing is to detect the key signature of every music file in the DJ collection by using a piano. The key signature can be used to create harmonic mash-ups with other tracks in the same key. Also considered compatible with the key signature in question are its related subdominant and dominant keys, as well as its relative major (or minor, as the case may be) key.
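A minimal sketch of how the Camelot convention can be encoded (the helper below is hypothetical, not from the article). Camelot codes label minor keys 1A–12A and major keys 1B–12B; compatible transitions are the same code, one step around the wheel (the dominant/subdominant relationship), or the same number with the letter toggled (the relative major/minor):

```python
def camelot_compatible(code):
    """Return the harmonically compatible Camelot codes for a key.

    Codes are '1A'..'12A' (minor) and '1B'..'12B' (major).
    Compatible moves: the same key, +/-1 step on the wheel
    (a fifth up or down), or toggling the letter (relative key).
    """
    number, letter = int(code[:-1]), code[-1]
    up = number % 12 + 1              # one step clockwise on the wheel
    down = (number - 2) % 12 + 1      # one step counter-clockwise
    other = "B" if letter == "A" else "A"
    return {code, f"{up}{letter}", f"{down}{letter}", f"{number}{other}"}

# 8A is A minor; 8B (C major, its relative), 7A (D minor) and
# 9A (E minor) mix smoothly with it.
print(sorted(camelot_compatible("8A")))   # ['7A', '8A', '8B', '9A']
```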
See also
Beatmatching
Segue in music |
https://en.wikipedia.org/wiki/Correlation%20coefficient | A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables. The variables may be two columns of a given data set of observations, often called a sample, or two components of a multivariate random variable with a known distribution.
Several types of correlation coefficient exist, each with its own definition and its own range of usability and characteristics. They all assume values in the range from −1 to +1, where ±1 indicates the strongest possible relationship and 0 indicates no relationship. As tools of analysis, correlation coefficients present certain problems, including the propensity of some types to be distorted by outliers and the possibility of incorrectly being used to infer a causal relationship between the variables (for more, see Correlation does not imply causation).
Types
There are several different measures for the degree of correlation in data, depending on the kind of data: principally whether the data is a measurement, ordinal, or categorical.
Pearson
The Pearson product-moment correlation coefficient, also known as Pearson's $r$, is a measure of the strength and direction of the linear relationship between two variables, defined as the covariance of the variables divided by the product of their standard deviations. This is the best-known and most commonly used type of correlation coefficient; when the term "correlation coefficient" is used without further qualification, it usually refers to the Pearson product-moment correlation coefficient.
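A small self-contained sketch (not from the article) computing Pearson's r exactly as defined above — the covariance of the two variables divided by the product of their standard deviations:

```python
import math

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product
    of their standard deviations (population forms; the n's cancel)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # ~ +1.0 (perfect linear)
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # ~ -1.0
```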
Intra-class
Intraclass correlation (ICC) is a descriptive statistic that can be used, when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other.
Rank
Rank correlation is a measure of the relationship between the rankings of two variables, or two rankings of the same variable:
Spearman's rank correlation coefficient is |
https://en.wikipedia.org/wiki/Physics%20First | Physics First is an educational program in the United States that teaches a basic physics course in the ninth grade (usually to 14-year-olds), rather than the biology course which is more standard in public schools. The course relies on the limited math skills that students have from pre-algebra and algebra I. With these skills students study a broad subset of the introductory physics canon, with an emphasis on topics which can be experienced kinesthetically or without deep mathematical reasoning. Proponents further argue that teaching physics first suits English-language learners, who might otherwise be overwhelmed by the substantial vocabulary demands of biology.
Physics First began as an organized movement among educators around 1990, and has been slowly catching on throughout the United States. The most prominent movement championing Physics First is Leon Lederman's ARISE (American Renaissance in Science Education).
Many proponents of Physics First argue that turning this order around lays the foundations for better understanding of chemistry, which in turn will lead to more comprehension of biology. Due to the tangible nature of most introductory physics experiments, Physics First also lends itself well to an introduction to inquiry-based science education, where students are encouraged to probe the workings of the world in which they live.
The majority of high schools which have implemented "physics first" do so by way of offering two separate classes, at two separate levels: simple physics concepts in 9th grade, followed by more advanced physics courses in 11th or 12th grade. In schools with this curriculum, nearly all 9th grade students take a "Physical Science", or "Introduction to Physics Concepts" course. These courses focus on concepts that can be studied with skills from pre-algebra and algebra I. With these ideas in place, students then can be exposed to ideas with more physics related content in chemistry, and other science electives. After th |
https://en.wikipedia.org/wiki/Thomas%E2%80%93Fermi%20screening | Thomas–Fermi screening is a theoretical approach to calculate the effects of electric field screening by electrons in a solid. It is a special case of the more general Lindhard theory; in particular, Thomas–Fermi screening is the limit of the Lindhard formula when the wavevector (the reciprocal of the length-scale of interest) is much smaller than the Fermi wavevector, i.e. the long-distance limit. It is named after Llewellyn Thomas and Enrico Fermi.
The Thomas–Fermi wavevector $k_0$ (in Gaussian-cgs units) is
$$k_0^2 = 4\pi e^2 \frac{\partial n}{\partial \mu},$$
where $\mu$ is the chemical potential (Fermi level), $n$ is the electron concentration and $e$ is the elementary charge.
Under many circumstances, including semiconductors that are not too heavily doped, $\partial n / \partial \mu \approx n / (k_B T)$, where $k_B$ is the Boltzmann constant and $T$ is temperature. In this case,
$$k_0^2 = \frac{4\pi e^2 n}{k_B T},$$
i.e. $1/k_0$ is given by the familiar formula for the Debye length. In the opposite extreme, the low-temperature limit $T = 0$, electrons behave as quantum particles (fermions). Such an approximation is valid for metals at room temperature, and the Thomas–Fermi screening wavevector $k_{\mathrm{TF}}$ given in atomic units is
$$k_{\mathrm{TF}}^2 = \frac{4}{\pi} k_F.$$
If we restore the electron mass $m_e$ and the Planck constant $\hbar$, the screening wavevector in Gaussian units is
$$k_{\mathrm{TF}}^2 = \frac{4}{\pi} \frac{m_e e^2}{\hbar^2} k_F.$$
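As a hedged numerical aside (SI units and example material values assumed here; the article's formulas are in Gaussian-cgs): in the non-degenerate limit the screening length reduces to the Debye length λ_D = √(ε k_B T / (e² n)). For silicon doped to 10¹⁶ cm⁻³ at room temperature:

```python
import math

# Physical constants (SI)
k_B = 1.380649e-23        # Boltzmann constant, J/K
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Assumed example: silicon (relative permittivity ~11.7) doped to
# n = 1e16 cm^-3 = 1e22 m^-3 at T = 300 K -- the non-degenerate
# regime where Thomas-Fermi screening reduces to Debye screening.
eps_r, n, T = 11.7, 1e22, 300.0

debye_length = math.sqrt(eps_r * eps0 * k_B * T / (e**2 * n))
k0 = 1.0 / debye_length   # screening wavevector, 1/lambda_D

print(f"{debye_length * 1e9:.1f} nm")   # on the order of 40 nm
```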
For more details and discussion, including the one-dimensional and two-dimensional cases, see the article on Lindhard theory.
Derivation
Relation between electron density and internal chemical potential
The internal chemical potential (closely related to Fermi level, see below) of a system of electrons describes how much energy is required to put an extra electron into the system, neglecting electrical potential energy. As the number of electrons in the system increases (with fixed temperature and volume), the internal chemical potential increases. This is largely because electrons satisfy the Pauli exclusion principle: only one electron may occupy each energy state, and lower-energy electron states are already full, so the new electrons must occupy higher and higher energy states.
Given a |
https://en.wikipedia.org/wiki/Edgeworth%20paradox | To resolve the Bertrand paradox, the Irish economist Francis Ysidro Edgeworth put forward the Edgeworth paradox in his paper "The Pure Theory of Monopoly", published in 1897.
In economics, the Edgeworth paradox describes a situation in which two players cannot reach a state of equilibrium with pure strategies, i.e. with each charging a stable price. A notable feature of the Edgeworth paradox is that, in some cases, an increase in cost proportional to the quantity supplied may cause a decrease in all optimal prices, even when the direct price impact is negative. Because real firms have limited production capacity, if one firm's total capacity cannot meet social demand, another firm can charge a price exceeding marginal cost for the residual social demand.
Example
Suppose two companies, A and B, sell an identical commodity product, and that customers choose the product solely on the basis of price. Each company faces capacity constraints, in that on its own it cannot satisfy demand at its zero-profit price, but together they can more than satisfy such demand.
The assumptions of the Edgeworth model, which modifies the Cournot model, are as follows:
1. The production capacity of the two manufacturers is limited. At a given price level, the output of one oligopolist cannot meet the market demand at that price, so the other oligopolist can capture the residual market demand.
2. In a certain period, two prices can exist in the market at the same time.
3. When one oligopolist chooses a certain price level, the other will not immediately respond to that price.
Edgeworth model
Edgeworth's model follows Bertrand's hypothesis, where each seller assumes that the price of its competitor, not its output, remains constant. Suppose there are two sellers, A and B, facing the same demand curve in the market. To explain Edgeworth's model, let us first assume that A is the only seller in |
https://en.wikipedia.org/wiki/Mixing%20length%20model | In fluid dynamics, the mixing length model is a method attempting to describe momentum transfer by turbulence Reynolds stresses within a Newtonian fluid boundary layer by means of an eddy viscosity. The model was developed by Ludwig Prandtl in the early 20th century. Prandtl himself had reservations about the model, describing it as, "only a rough approximation,"
but it has been used in numerous fields ever since, including atmospheric science, oceanography and stellar structure.
Physical intuition
The mixing length is conceptually analogous to the mean free path in thermodynamics: a fluid parcel conserves its properties for a characteristic length, $\xi'$, before mixing with the surrounding fluid. Prandtl described the mixing length as the distance a mass of fluid travels before it blends in with neighbouring masses.
In the figure above, temperature, $T$, is conserved for a certain distance as a parcel moves across a temperature gradient. The fluctuation in temperature that the parcel experienced throughout the process is $T'$. So $T'$ can be seen as the temperature deviation from its surrounding environment after it has moved over this mixing length $\xi'$.
Mathematical formulation
To begin, we must first be able to express quantities as the sums of their slowly varying components and fluctuating components.
Reynolds decomposition
This process is known as Reynolds decomposition. Temperature can be expressed as:
$$T = \overline{T} + T',$$
where $\overline{T}$ is the slowly varying component and $T'$ is the fluctuating component.
In the above picture, $T'$ can be expressed in terms of the mixing length $\xi'$:
$$T' = -\xi' \frac{\partial \overline{T}}{\partial z}.$$
The fluctuating components of velocity, $u'$, $v'$, and $w'$, can also be expressed in a similar fashion:
$$u' = -\xi' \frac{\partial \overline{u}}{\partial z}, \qquad w' = \xi' \left|\frac{\partial \overline{u}}{\partial z}\right|,$$
although the theoretical justification for doing so is weaker, as the pressure gradient force can significantly alter the fluctuating components. Moreover, for the case of vertical velocity, $w'$ must be in a neutrally stratified fluid.
Taking the product of horizontal and vertical fluctuations gives us:
$$\overline{u'w'} = -\overline{\xi'^2} \left|\frac{\partial \overline{u}}{\partial z}\right| \frac{\partial \overline{u}}{\partial z}.$$
The eddy viscosity is defined from the equation above as:
$$K_m = \overline{\xi'^2} \left|\frac{\partial \overline{u}}{\partial z}\right|,$$
so we have the eddy viscosity, expres |
https://en.wikipedia.org/wiki/Paleopedological%20record | The paleopedological record is, essentially, the fossil record of soils. The paleopedological record consists chiefly of paleosols buried by flood sediments, or preserved at geological unconformities, especially plateau escarpments or sides of river valleys. Other fossil soils occur in areas where volcanic activity has covered the ancient soils.
Problems of recognition
After burial, soil fossils tend to be altered by various chemical and physical processes. These include:
Decomposition of organic matter that was once present in the old soil. This hinders recognition of the vegetation the soil supported when it was at the surface.
Oxidation of iron from Fe2+ to Fe3+ by O2 as the former soil becomes dry and more oxygen enters the soil.
Drying out of hydrous ferric oxides to anhydrous oxides - again due to the presence of more available O2 in the dry environment.
The keys to recognising fossils of various soils include:
Tubular structures that branch and thin irregularly downward or show the anatomy of fossilised root traces
Gradational alteration down from a sharp lithological contact like that between land surface and soil horizons
Complex patterns of cracks and mineral replacements like those of soil clods (peds) and planar cutans.
Classification
Soil fossils are usually classified by USDA soil taxonomy. With the exception of some exceedingly old soils which have a clayey, grey-green horizon that is quite unlike any present soil and clearly formed in the absence of O2, most fossil soils can be classified into one of the twelve orders recognised by this system. This is usually done by means of X-ray diffraction, which allows the various particles within the former soils to be analysed so that it can be seen to which order the soils correspond.
Other methods for classifying soil fossils rely on geochemical analysis of the soil material, which allows the minerals in the soil to be identified. This is only useful where large amounts of the ancient soil are avai |
https://en.wikipedia.org/wiki/Variable%20structure%20control | Variable structure control (VSC) is a form of discontinuous nonlinear control. The method alters the dynamics of a nonlinear system by applying a high-frequency switching control. The state-feedback control law is not a continuous function of time; it switches from one smooth condition to another. So the structure of the control law varies based on the position of the state trajectory; the method switches from one smooth control law to another, possibly very quickly (e.g., a countably infinite number of times in a finite time interval). VSC and the associated sliding-mode behaviour were first investigated in the early 1950s in the Soviet Union by Emelyanov and several co-researchers.
The main mode of VSC operation is sliding mode control (SMC). The strengths of SMC include:
Low sensitivity to plant parameter uncertainty
Greatly reduced-order modeling of plant dynamics
Finite-time convergence (due to discontinuous control law)
The weaknesses of SMC include:
Chattering due to implementation imperfections
Over-focus on matched uncertainties (i.e., uncertainties that enter into the control channel)
However, the evolution of VSC is an active area of research.
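A minimal sketch of sliding mode control (a standard textbook double-integrator example, not from the article): the control law switches across the sliding surface s = ẋ + λx, driving the state to the surface in finite time despite a bounded matched disturbance; the rapid switching in u is the chattering noted above:

```python
import math

def simulate(k=2.0, lam=1.0, dt=1e-4, t_end=10.0):
    """Euler simulation of x'' = u + d under a sliding-mode law
    u = -lam*v - k*sign(s), with sliding surface s = v + lam*x."""
    x, v = 1.0, 0.0                            # initial position, velocity
    for i in range(int(t_end / dt)):
        t = i * dt
        d = 0.5 * math.sin(3 * t)              # bounded disturbance, |d| <= 0.5 < k
        s = v + lam * x                        # sliding surface
        u = -lam * v - k * (1 if s > 0 else -1)   # discontinuous switching control
        x, v = x + v * dt, v + (u + d) * dt
    return x, v

x, v = simulate()
# After the reaching phase the state slides along s = 0 toward the origin,
# so both position and velocity end up near zero despite the disturbance.
assert abs(x) < 0.05 and abs(v) < 0.1
```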
See also
Variable structure system
Sliding mode control
Hybrid system
Nonlinear control
Robust control
Optimal control
H-bridge – A topology that combines four switches forming the four legs of an "H". Can be used to drive a motor (or other electrical device) forward or backward when only a single supply is available. Often used as an actuator driver in sliding-mode controlled systems.
Switching amplifier – Uses switching-mode control to drive continuous outputs
Delta-sigma modulation – Another (feedback) method of encoding a continuous range of values in a signal that rapidly switches between two states (i.e., a kind of specialized sliding-mode control)
Pulse-density modulation – A generalized form of delta-sigma modulation.
Pulse-width modulation – Another modulation scheme that produces continuous motion thr |
https://en.wikipedia.org/wiki/Clinical%20quality%20management%20system | Clinical quality management systems (CQMS) are systems used in the life sciences sector (primarily in the pharmaceutical, biologics and medical device industries) designed to manage quality management best practices throughout clinical research and clinical study management. A CQMS system is designed to manage all of the documents, activities, tasks, processes, quality events, relationships, audits and training that must be administered and controlled throughout the life of a clinical trial. The premise of a CQMS is to bring together the activities led by two sectors of clinical research, Clinical Quality and Clinical Operations, to facilitate cross-functional activities to improve efficiencies and transparency and to encourage the use of risk mitigation and risk management practices at the clinical study level.
A CQMS is based on the principles of quality management systems (QMS), which are used in many industries to create a framework for defining and delivering quality outcomes, managing risk, and continual improvement. Many guidelines and governance bodies have been established to ensure a common approach within a given industry to a set of parameters used to identify the minimally acceptable standard for that industry. The pharmaceutical industry is no exception, with several trade groups (e.g. PhRMA, EFPIA, RQA, etc.) coming together to enhance collaboration. However, as noted by the Academy of Medical Sciences, there are increasingly complex and bureaucratic legal and ethical frameworks that innovators must work within to develop new medicines for patients.
The historical pharmaceutical QMS applies primarily to good manufacturing practice as described in existing ISO (International Organization for Standardization) and ICH (International Council for Harmonisation) guidelines. "Good Manufacturing Practices (GMP) relate to quality control and quality assurance enabling companies in the pharmaceutical sector to minimize or eliminate instances of contamination, mix-ups, |
https://en.wikipedia.org/wiki/Comparative%20medicine | Comparative medicine is a distinct discipline of experimental medicine that uses animal models of human and animal disease in translational and biomedical research. In other words, it relates and leverages biological similarities and differences among species to better understand the mechanism of human and animal disease. It has also been defined as a study of similarities and differences between human and veterinary medicine including the critical role veterinarians, animal resource centers, and Institutional Animal Care and Use Committees play in facilitating and ensuring humane and reproducible lab animal care and use. The discipline has been instrumental in many of humanity's most important medical advances.
History
The ancient world
The first documented mention of comparative pathology comes from Hippocrates (c. 460 – 370 BCE) in Airs, Waters, Places, where he describes relevant case histories for horse herds and human populations. He insists that diagnosis be based on experience, observation, and logic. Aristotle (384 – 322 BCE) hypothesized about interspecies transmission of disease. The anatomy and physiology schools opened in Alexandria by Herophilus (c. 335 – 280 BCE) and Erasistratus (c. 304 – 250 BCE) were directly inspired by Aristotle's work, although most of their documents were destroyed when the Library of Alexandria burned.
In his Disciplinarum Libri IX, Marcus Terentius Varro (116 – 27 BCE) made early indications of the germ theory of disease with his conception that tiny invisible animals carried in the air caused disease by entering through the nose and mouth. He also warned people against establishing homes near swampland. Aulus Cornelius Celsus (c. 25 BCE – 50 CE) wrote of experimental physiology in De Medicina Libri Octo, detailing numerous dissections and vivisections he performed, and pointed out specific interventions as well, such as cupping to remove the poison of a dog's bite.
By the time of Claudius Galen (129 - 200 CE), whose name lives on in t |
https://en.wikipedia.org/wiki/Sequence%20step%20algorithm | A sequence step algorithm (SQS-AL) is an algorithm implemented in a discrete event simulation system to maximize resource utilization. This is achieved by running through two main nested loops: A sequence step loop and a replication loop. For each sequence step, each replication loop is a simulation run that collects crew idle time for activities in that sequence step. The collected crew idle times are then used to determine resource arrival dates for user-specified confidence levels. The process of collecting the crew idle times and determining crew arrival times for activities on a considered sequence step is repeated from the first to the last sequence step.
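The two nested loops described above can be sketched as follows (a hypothetical stub, not the published algorithm: the inner discrete-event replication is replaced here by a random draw standing in for a full simulation run):

```python
import random
import statistics

def simulate_idle_time(step, rng):
    """Stub for one simulation replication: returns the crew idle
    time observed for activities on the given sequence step.
    A real SQS-AL would run the full discrete-event model here."""
    return max(0.0, rng.gauss(mu=5.0 * step, sigma=2.0))

def sequence_step_algorithm(n_steps=4, n_replications=500,
                            confidence=0.9, seed=1):
    rng = random.Random(seed)
    arrival_offsets = {}
    for step in range(1, n_steps + 1):           # outer loop: sequence steps
        idle_times = [simulate_idle_time(step, rng)
                      for _ in range(n_replications)]   # inner loop: replications
        # Delay the crew's arrival by the idle time not exceeded at the
        # chosen confidence level, so the crew rarely waits on site.
        percentiles = statistics.quantiles(idle_times, n=100)
        arrival_offsets[step] = percentiles[int(confidence * 100) - 1]
    return arrival_offsets

offsets = sequence_step_algorithm()
print(offsets)   # later steps accumulate larger offsets in this stub
```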
See also
Computational resource
Linear scheduling method |
https://en.wikipedia.org/wiki/Fluorescence%20intensity%20decay%20shape%20microscopy | Fluorescence intensity decay shape microscopy (FIDSAM) is a fluorescence microscopy technique that utilizes the time evolution of fluorescence emission after a pulsed excitation to analyse the decay statistics of an excited chromophore. The main application of FIDSAM is the discrimination of unspecific autofluorescent background signal from the target signal of a dedicated chromophore.
Principle
The FIDSAM method analyses the number of different molecules contributing to a measured fluorescence signal. Assuming a pure fluorescent dye solution in an isotropic surrounding, the individual emitters are indistinguishable. Accordingly, they obey the same fluorescence emission statistics, and the time evolution of the fluorescence emission after a pulsed excitation can be described by a monoexponential decay function according to:
I(t) = I_0 · exp(−t/τ)
with I_0 = the initial fluorescence intensity after the excitation and τ = the decay constant (fluorescence lifetime).
In contrast, autofluorescent background consists of a multitude of individual emitters, which obey individual emission statistics. Accordingly, the time evolution samples a summation of numerous individual decay statistics and can be written as:
I(t) = Σ_i A_i · exp(−t/τ_i).
The FIDSAM technique is based on a time-correlated single photon counting (TCSPC) measurement and analyses the degree of deviation of a recorded fluorescence decay from monoexponential behavior. This is achieved by fitting the recorded fluorescence intensity decay with a monoexponential decay function convoluted with the instrument response function. In a next step, the error value of the fitting procedure is extracted and its inverse value is multiplied with the original intensity value. This way, fluorescence signal which originates from autofluorescence background, and therefore exhibits increased error values, is divided by a relatively large number, whereas fluorescence signal from target molecules exhibits small error-values ar
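The deviation-from-monoexponential idea can be illustrated numerically. This sketch uses a plain log-linear least-squares fit instead of the full convolution with the instrument response function, and the decay curves are synthetic assumptions:

```python
import math

ts = [i * 0.05 for i in range(200)]  # time bins after the excitation pulse (ns)

# Target chromophore: a single emitter species -> monoexponential decay.
target = [100.0 * math.exp(-t / 2.0) for t in ts]

# Autofluorescent background: several species -> multiexponential decay.
background = [50.0 * math.exp(-t / 0.5) + 50.0 * math.exp(-t / 4.0) for t in ts]

def mono_fit_error(decay):
    """Least-squares fit of log I(t) = log I_0 - t/tau (a straight line),
    returning the RMS residual of the monoexponential model."""
    logs = [math.log(v) for v in decay]
    n = len(ts)
    mt = sum(ts) / n
    ml = sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(ts, logs))
             / sum((t - mt) ** 2 for t in ts))
    intercept = ml - slope * mt
    model = [math.exp(intercept + slope * t) for t in ts]
    return math.sqrt(sum((v - m) ** 2 for v, m in zip(decay, model)) / n)

err_target = mono_fit_error(target)          # essentially zero
err_background = mono_fit_error(background)  # clearly nonzero

# FIDSAM-style weighting: dividing each pixel's intensity by its fit error
# suppresses multiexponential background relative to the target signal.
print(err_target, err_background)
```

The pure dye fits a monoexponential almost perfectly, so its error value is tiny and its weighted intensity stays high; the multiexponential background fits poorly and is suppressed.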
https://en.wikipedia.org/wiki/Photomedicine | Photomedicine is an interdisciplinary branch of medicine that involves the study and application of light with respect to health and disease. Photomedicine may be related to the practice of various fields of medicine including dermatology, surgery, interventional radiology, optical diagnostics, cardiology, circadian rhythm sleep disorders and oncology.
A branch of photomedicine is light therapy in which bright light strikes the retinae of the eyes, used to treat circadian rhythm disorders and seasonal affective disorder (SAD). The light can be sunlight or from a light box emitting white or blue (blue/green) light.
Examples
Photomedicine is used as a treatment for many different conditions:
PUVA for the treatment of psoriasis
Photodynamic therapy (PDT) for treatment of cancer and macular degeneration - Nontoxic light-sensitive compounds are targeted to malignant or other diseased cells, then exposed selectively to light, whereupon they become toxic and destroy these cells (phototoxicity). One dermatological example of PDT is the targeting of malignant cells by bonding the light-sensitive compounds to antibodies against these cells; light exposure at particular wavelengths mediates release of free radicals or other photosensitizing agents, destroying the targeted cells.
Treating circadian rhythm disorders
Alopecia, pattern hair loss, etc.
Free electron laser
Laser hair removal
Intense pulsed light (IPL)
Photobiomodulation
Optical diagnostics, for example optical coherence tomography of coronary plaques using infrared light
Confocal microscopy and fluorescence microscopy of in vivo tissue
Diffuse reflectance infrared Fourier transform spectroscopy for in vivo quantification of pigments (normal and cancerous) and hemoglobin
Perpendicular-polarized flash photography and fluorescence photography of the skin
See also
Blood irradiation therapy
Aesthetic medicine
Laser hair removal
Laser medicine
Rox Anderson |
https://en.wikipedia.org/wiki/Astronomical%20Calculation%20Institute%20%28Heidelberg%20University%29 | The Astronomical Calculation Institute (Astronomisches Rechen-Institut; ARI) is a research institute in Heidelberg, Germany, dating from 1700. Beginning in 2005, the ARI became part of the Center for Astronomy at Heidelberg University (Zentrum für Astronomie der Universität Heidelberg, ZAH). Previously, the institute belonged directly to the state of Baden-Württemberg.
Description
The ARI has a rich history. It was founded in 1700 in Berlin by Gottfried Kirch. It had its origin in a patent granted by Frederick I of Prussia, which introduced a monopoly on the publication of calendars in Prussia. In 1945 the institute was moved by American forces to Heidelberg, near the United States Army Garrison. On January 1, 2005, the combined Center for Astronomy was formed by merging ARI with the Institute of Theoretical Astrophysics (ITA) and the Landessternwarte Heidelberg-Königstuhl ("Heidelberg-Königstuhl State Observatory", LSW).
The ARI has been responsible among other things for the Gliese catalog of nearby stars, the fundamental catalogs FK5 and FK6, and the annually-published "Apparent Places of Fundamental Stars" (APFS), stellar ephemerides that provide high-precision mean and apparent positions of over three thousand stars for each day.
During 1938–1945, whilst based in Berlin, ARI published the academic journal Astronomische Nachrichten ("Astronomical Notes").
ARI is no longer limited to publishing star catalogs, but has a wider research scope, including gravitational lensing, galaxy evolution, stellar dynamics, and cosmology. ARI is also involved in space astronomy missions, including the Gaia mission.
In 2007 professors Eva K. Grebel and Joachim Wambsganß became co-directors of the institute.
Other researchers involved with the institute include Hartmut Jahreiß, author of the updated Gliese Catalogue of Nearby Stars; Eugene Rabe; Lutz D. Schmadel, author of the Dictionary of Minor Planet Names; Hans Scholl; and Rainer Spurzem, who works on N-body simulations.
Directors
Between 1700 and 2007 there was a single director of the institute at |
https://en.wikipedia.org/wiki/Hog-Morse | Hog-Morse was telegraphers' jargon for the tendency of inexperienced telegraph operators to make errors when sending or receiving in Morse code. The term was current in the United States during the period when American Morse code was still in use.
It is so called after one example (here given in International Morse but most likely originating in American Morse):
() becomes (), with just one subtle error in timing.
Examples
The now-defunct American Morse ("railroad code") is different from the International Morse Code currently in use for radio telegraphy. With American Morse it was far more difficult to avoid timing errors, because there were more symbol timings than there are in International Morse and some were difficult to distinguish because of their closeness; International Code has only two symbols, dots () and dashes (), but the American code had three lengths of dash and two lengths of spaces between dots.
For example, the dashes used for "L" () and "T" () in American Morse are distinct.
Also, in International Morse the space between symbols within a character is always the same, but American Morse has two different spaces. For example, the letters "S" (), "C" (), and "R" () all consist of three dots, but with slightly different timing between the dots in each case.
A frequently quoted, but possibly apocryphal, story from the historical period concerns the similarity of "L" and "T" in the American code. A company in Richmond, Virginia received a request for quotation for a load of staves (rough sawn wood intended for the manufacture of barrels), but the telegraph operator had sent "slaves" instead of "staves", turning the message into an order for slaves. The company replied reminding the customer that slavery had been abolished.
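The kind of dash-length confusion described above can be modeled in a few lines. The unit durations below are illustrative assumptions, not the historical American Morse timings:

```python
# Toy timing model of the American Morse "L"/"T" confusion: both letters are
# single dashes distinguished only by duration. The unit values here are
# illustrative, not the historical timings.
DASH_LETTERS = {2.0: "T", 4.0: "L"}  # on-duration in time units -> letter

def classify_dash(duration):
    """Classify a received dash as whichever reference duration is closest."""
    return min(DASH_LETTERS.items(), key=lambda kv: abs(kv[0] - duration))[1]

sent = 4.0       # the operator intends "L" (a long dash)
received = 2.4   # sloppy timing shortens the dash on the wire
print(classify_dash(sent), "->", classify_dash(received))  # L -> T
```

Because the receiver must classify a continuous duration into discrete letters, a single shortened dash is enough to turn one letter into another, which is exactly the failure mode hog-Morse jokes are built on.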
Another American Morse example given in the literature is becoming . One commentator has called this the 19th century autocorrect. |
https://en.wikipedia.org/wiki/Union%20of%20Brewery%20and%20Mill%20Workers | The Union of Brewery and Mill Workers and Kindred Trades () was a trade union representing workers in the food and drink processing industry in Germany.
The union was founded on 1 October 1910, when the Central Union of Brewery Workers merged with the German Mill Workers' Union. The brewery workers dominated the new union, which adopted the constitution and structure of their former union. Like its predecessors, the union affiliated to the General Commission of German Trade Unions, and it was also a leading member of the International Secretariat of Brewery Workers. In 1919, the union was a founding affiliate of the General German Trade Union Confederation.
In 1922, the union renamed itself as the Union of Food and Beverage Workers. By 1927, the union had 74,443 members. On 24 September, it merged with the Central Union of Bakers and Confectioners, the Central Union of Butchers, and the Union of Coopers, Cellar Managers, and Helpers in Germany, to form the Union of Food and Drink Workers.
Presidents
1910: Martin Etzel
1914: Eduard Backert |
https://en.wikipedia.org/wiki/Cogan%20syndrome | Cogan syndrome (also Cogan's syndrome) is a rare disorder characterized by recurrent inflammation of the front of the eye (the cornea) and often fever, fatigue, and weight loss, episodes of vertigo (dizziness), tinnitus (ringing in the ears) and hearing loss. It can lead to deafness or blindness if untreated. The classic form of the disease was first described by D. G. Cogan in 1945.
Signs and symptoms
Cogan syndrome is a rare, rheumatic disease characterized by inflammation of the ears and eyes. Cogan syndrome can lead to vision difficulty, hearing loss and dizziness. The condition may also be associated with blood-vessel inflammation (called vasculitis) in other areas of the body that can cause major organ damage in 15% of those affected or, in a small number of cases, even death. It most commonly occurs in a person's 20s or 30s. The cause is not known. However, one theory is that it is an autoimmune disorder in which the body's immune system mistakenly attacks tissue in the eye and ear.
Causes
It is currently thought that Cogan syndrome is an autoimmune disease. The inflammation in the eye and ear is due to the patient's own immune system producing antibodies that attack the inner ear and eye tissue. Autoantibodies can be demonstrated in the blood of some patients, and these antibodies have been shown to attack inner ear tissue in laboratory studies. Infection with the bacterium Chlamydia pneumoniae has been demonstrated in some patients prior to the development of Cogan syndrome, leading some researchers to hypothesize that the autoimmune disease may be initiated by the infection. C. pneumoniae is a common cause of mild pneumonia, and the vast majority of patients who are infected with the bacterium do not develop Cogan syndrome.
Diagnosis
While the white blood cell count, erythrocyte sedimentation rate, and C-reactive protein tests may be abnormal and there may be abnormally high levels of platelets in the blood or too few red blood cells in the blood, none o |
https://en.wikipedia.org/wiki/Basic%20leucine%20zipper%20and%20W2%20domain-containing%20protein%202 | Basic leucine zipper and W2 domain-containing protein 2 is a protein that is encoded by the BZW2 gene. It is a regulator of eukaryotic translation, with homologs found in organisms as distant as bacteria. In animals, it is localized in the cytoplasm and expressed ubiquitously throughout the body. The heart, placenta, skeletal muscle, and hippocampus show higher expression. In various cancers, its upregulation tends to be associated with higher severity and mortality. It has been found to interact with SARS-CoV-2.
Gene
BZW2 is known as Basic Leucine Zipper W2 Domain-Containing Protein 2, MST017, MSTP017, 5MP1, Eukaryotic Translation Factor 5, and HSPC028. It is located on chromosome 7 at p21.1 on the plus strand. The gene spans 60,389 base pairs, at coordinates 16,583,248 – 16,804,999. There are 12 exons.
Protein
There are two known isoforms of BZW2. Isoform 1 is 419 amino acids long and is the most abundant form. Isoform 2 is 225 amino acids long, is encoded by only 11 exons, and has a shorter N-terminus.
The encoded protein is 419 amino acids long and weighs 48.3 kDa. As the name indicates, the protein contains a leucine-zipper motif: four leucine repeats near the N-terminus give rise to the characteristic leucine-zipper helix within the 3D structure. An eIF5C domain follows the leucine motif; this domain is found in proteins that are important for strict regulation of cellular processes.
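The heptad spacing of a leucine zipper (a leucine every seventh residue) can be located with a simple scan; the sequence below is synthetic, not the actual BZW2 sequence:

```python
def leucine_zipper_starts(seq, repeats=4, spacing=7):
    """Return start indices where `repeats` leucines occur exactly
    `spacing` residues apart (the classic heptad pattern)."""
    hits = []
    for i in range(len(seq)):
        if all(i + k * spacing < len(seq) and seq[i + k * spacing] == "L"
               for k in range(repeats)):
            hits.append(i)
    return hits

# Synthetic sequence with leucines at positions 5, 12, 19, and 26.
seq = "MKTAYLAEKVAGLAEKVAGLAEKVAGLGGSPQR"
print(leucine_zipper_starts(seq))  # [5]
```

Running the same scan over the real BZW2 sequence (available from UniProt) would locate the four-repeat motif described above.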
In humans, the amino acid composition of BZW2 is relatively rich in lysine and poor in proline, while its orthologs have a higher glutamic acid content. The human BZW2 protein has an overall charge of -3, which can go down to -9 in orthologs. There are no significant charge clusters. There is also a KELQ repeat that has remained conserved in animals.
The secondary structure is predominantly alpha-helical. There are 19 alpha helices in all orthologs, along with two additional beta sheets that are absent in humans. The tertiary structure forms a repeated fold of alpha-helices, a structure that is conserved t
https://en.wikipedia.org/wiki/Arthur%20T.%20Benjamin | Arthur T. Benjamin (born March 19, 1961) is an American mathematician who specializes in combinatorics. Since 1989 he has been a professor of mathematics at Harvey Mudd College, where he is the Smallwood Family Professor of Mathematics.
He is known for his mental math capabilities and "Mathemagics" performances in front of live audiences. His mathematical abilities have been highlighted in newspaper and magazine articles, at TED Talks, and on The Colbert Report.
Education
Benjamin earned a Bachelor of Science with highest honors in applied mathematics at Carnegie Mellon University in 1983. He then went on to receive a Master of Science in Engineering in 1985 and a Doctor of Philosophy in 1989 in mathematical sciences at Johns Hopkins University. His PhD dissertation was titled "Turnpike Structures for Optimal Maneuvers", and was supervised by Alan J. Goldman.
During his freshman year at CMU he wrote the lyrics and created the magic effects for the musical comedy, Kije!, in collaboration with author Scott McGregor and composer Arthur Darrell Turner. This musical was the winner of an annual competition and was first performed as the CMU's Spring Musical in 1980.
Career
Academic
Benjamin held several mathematics positions while attending university, including stints with the National Bureau of Standards, the National Security Agency, and the Institute for Defense Analyses. Upon receipt of his PhD he was hired as an assistant professor of mathematics at Harvey Mudd College. He is currently a full professor at Harvey Mudd and was chair of the mathematics department from 2002 to 2004. He has published over 90 academic papers and five books. He has also filmed several sets of lectures on mathematical topics for The Great Courses series from The Teaching Company, including a course on Discrete Mathematics, Mental Math, and The Mathematics of Games and Puzzles: From Cards to Sudoku. He served as co-editor of Math Horizons magazine for five years.
Mathemagics
Benjamin has |
https://en.wikipedia.org/wiki/Gray%20ramus%20communicans | Each spinal nerve receives a branch called a gray ramus communicans (plural: rami communicantes) from the adjacent paravertebral ganglion of the sympathetic trunk. The gray rami communicantes contain postganglionic nerve fibers of the sympathetic nervous system and are composed of largely unmyelinated neurons. This is in contrast to the white rami communicantes, in which heavily myelinated neurons give the rami their white appearance.
Function
Preganglionic sympathetic fibers from the intermediolateral nucleus in the lateral grey column of the spinal cord are carried in the white ramus communicans to the paravertebral ganglia of the sympathetic trunk. Once the preganglionic nerve has traversed a white ramus communicans, it can do one of three things.
The preganglionic neuron can synapse with a postganglionic sympathetic neuron in the sympathetic paravertebral ganglion at that level. From here, the postganglionic sympathetic neuron can travel back out the grey ramus communicans of that level to the mixed spinal nerve and onto the effector organ.
The preganglionic neuron can travel superiorly or inferiorly to a sympathetic paravertebral ganglion of a higher or lower level where it can synapse with a postganglionic sympathetic neuron. From here, the postganglionic sympathetic neuron can travel back out the grey ramus communicans of that level to the mixed spinal nerve and on to an effector organ.
The preganglionic neuron can pass through the paravertebral ganglion without synapsing, continuing as a preganglionic nerve fiber (a splanchnic nerve) until it reaches a distant collateral ganglion anterior to the vertebral column (a prevertebral ganglion). Once inside the prevertebral ganglion, the individual neurons comprising the nerve synapse with their postganglionic neurons. The postganglionic nerve then proceeds to innervate its targets, generally the pelvic visceral organs.
Ganglionic influence can be |
https://en.wikipedia.org/wiki/Yips | In sports, the yips are a sudden and unexplained loss of ability to execute certain skills in experienced athletes. Symptoms of the yips are losing fine motor skills and psychological issues that impact on the muscle memory and decision-making of athletes, leaving them unable to perform basic skills of their sport.
Common treatments include clinical sport psychology therapy as well as refocusing attention on the underlying biomechanics of their physical actions. The impact varies widely. A yips event may last a short time before the athlete regains their composure or it can require longer term adjustments to technique before recovery occurs. The worst cases are those where the athlete does not recover at all, forcing the player to abandon the sport at the highest level.
In golf
In golf, the yips is a movement disorder known to interfere with putting. The term yips is said to have been popularized by Tommy Armour—a golf champion and later golf teacher—to explain the difficulties that led him to abandon tournament play. In describing the yips, golfers have used terms such as twitches, staggers, jitters and jerks. The yips affects between a quarter and a half of all mature golfers. Researchers at the Mayo Clinic found that 33% to 48% of all serious golfers have experienced the yips. Golfers who have played for more than 25 years appear most prone to the condition.
Although the exact cause of the yips has yet to be determined, one possibility is biochemical changes in the brain that accompany aging. Excessive use of the involved muscles and intense demands of coordination and concentration may exacerbate the problem. Giving up golf for a month sometimes helps. Focal dystonia has been mentioned as another possibility for the cause of yips.
Professional golfers seriously afflicted by the yips include Ernie Els, David Duval, Pádraig Harrington, Bernhard Langer, Ben Hogan, Harry Vardon, Sam Snead, Ian Baker-Finch and Keegan Bradley, who missed a six-inch putt in the fi |
https://en.wikipedia.org/wiki/Hattons%20Model%20Railways | Hattons Model Railways is a British retailer and manufacturer of model railway paraphernalia founded in Liverpool, England in 1946 by Norman Hatton (1918-2005).
After significant growth due to a move into online mail order, the company relocated to Widnes, Cheshire in January 2016. It is still part-owned by Norman Hatton's daughter, Christine Hatton.
Early Years
After leaving the army in 1946, Norman Hatton opened a shop at 142 Smithdown Road in Liverpool. His idea was to sell things that people found hard to get after the war. He sold bric-a-brac, fireworks, household items such as firewood, and almost anything he could find, including gas masks. He discovered that model locomotives and toys were his best-selling items, and he began to focus on purchasing as much stock of these as he could, mainly driving around Liverpool looking for second-hand stock to sell on. By the 1950s Hattons was on its way to becoming a fully fledged model shop.
1950s - 1990s
By 1958 Norman realised that he needed bigger premises. He remained on Smithdown Road and moved into number 180 that year. It was around this time that Meccano wanted to offload a large quantity of stock, which they sold to Hattons for a fraction of what it was worth; in a 2001 interview Norman said that some items took 20 years to sell out because of the quantity he had purchased.
Norman placed extensive adverts in the model railway press and the company grew throughout the 1960s, 1970s and the 1980s with the help of mail order customers from all over the world sending Norman regular business.
In 1999 Hattons launched their first website, a directory of the items that they had for sale to make it easier for people wanting to place orders via post or over the telephone to view the entire stock in one place. The same year number 182 Smithdown Road was purchased and the store expanded into both buildings.
Norman semi retired in 1998 leaving the running of the business to his t |
https://en.wikipedia.org/wiki/Sanyo | Sanyo Electric Co., Ltd. was a Japanese electronics manufacturer founded in 1947 by Toshio Iue, the brother-in-law of Kōnosuke Matsushita, the founder of Panasonic. Iue left Matsushita Electric Industrial (now Panasonic) to start his own business, acquiring some of its equipment to produce bicycle generator lamps. In 1950, the company was formally established. Sanyo began to diversify in the 1950s, launching Japan's first spray-type washing machine in 1953. In the 2000s, it was known as one of the "3S" along with Sony and Sharp. Sanyo also focused on solar cell and lithium battery businesses. In 1992, it developed the world's first hybrid solar cell, and in 2002, it had a 41% share of the global lithium-ion battery market. In its heyday in 2003, Sanyo had sales of about ¥2.5 trillion. However, it fell into a financial crisis as a result of its huge investment in the semiconductor business. In 2009, Sanyo was acquired by Panasonic, and in 2011, it was fully consolidated into Panasonic and its brand disappeared. The company still exists as a legal entity for the purpose of winding up its affairs.
History
Beginnings
Sanyo was founded when Toshio Iue, the brother-in-law of Konosuke Matsushita and also a former Matsushita employee, was lent an unused Matsushita plant in 1947 and used it to make bicycle generator lamps. Sanyo was incorporated in 1949; in 1952 it made Japan's first plastic radio and in 1954 Japan's first pulsator-type washing machine. The company's name means "three oceans" in Japanese, referring to the founder's ambition to sell their products worldwide, across the Atlantic, Pacific, and Indian oceans.
Sanyo in America
In 1969 Howard Ladd became the Executive Vice President and COO of Sanyo Corporation. Ladd introduced the Sanyo brand to the United States in 1970. The ambition to sell Sanyo products worldwide was realized in the mid-1970s after Sanyo introduced home audio equipment, car stereos and other consumer electronics to the North American market. The company embarked on a heavy tel |
https://en.wikipedia.org/wiki/Decomposition%20matrix | In mathematics, and in particular modular representation theory, a decomposition matrix is a matrix that results from writing the irreducible ordinary characters in terms of the irreducible modular characters, where the entries of the two sets of characters are taken to be over all conjugacy classes of elements of order coprime to the characteristic of the field. All such entries in the matrix are non-negative integers. The decomposition matrix, multiplied by its transpose, forms the Cartan matrix, listing the composition factors of the projective modules. |
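The relation between the decomposition matrix D and the Cartan matrix C = DᵀD can be illustrated with the commonly cited example of the symmetric group S_3 in characteristic 2 (the matrix entries below are standard textbook values, stated here as an assumed example):

```python
# Decomposition matrix D of the symmetric group S_3 in characteristic 2.
# Rows: the ordinary irreducible characters (trivial, sign, 2-dimensional);
# columns: the two irreducible 2-modular characters.
D = [
    [1, 0],  # trivial character reduces to the trivial modular character
    [1, 0],  # sign character is congruent to trivial mod 2
    [0, 1],  # the 2-dimensional character lies in a block of defect 0
]

def cartan_matrix(D):
    """C = D^T D: entry C[i][j] counts composition factors of the
    projective indecomposable modules."""
    cols = len(D[0])
    return [[sum(row[i] * row[j] for row in D) for j in range(cols)]
            for i in range(cols)]

C = cartan_matrix(D)
print(C)  # [[2, 0], [0, 1]]
```

All entries of D are non-negative integers, and C comes out symmetric, as the general theory requires.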
https://en.wikipedia.org/wiki/Girolami%20method | The Girolami method, named after Gregory Girolami, is a predictive method for estimating densities of pure liquid components at room temperature. The objective of this method is the simple prediction of the density and not high precision.
Procedure
The method uses purely additive volume contributions for single atoms and additional correction factors for components with special functional groups which cause a volume contraction and therefore a higher density. The Girolami method can be described as a mixture of an atom and group contribution method.
Atom contributions
The method uses the following contributions for the different atoms:
A scaled molecular volume V_s is calculated by summing the atomic volume contributions,
V_s = Σ_i v_i,
and the density is derived by
ρ = M / (5 · V_s)
with the molecular weight M. The scaling factor 5 is used to obtain the density in g·cm−3.
Group contribution
For some components Girolami found smaller volumes and higher densities than calculated solely by the atom contributions. For components with
a hydroxylic function (Alcohols)
a carboxylic function (Carboxylic acids)
a primary or secondary amine function
an amide group (incl. amides substituted at the nitrogen)
a sulfoxide group
a sulfone group
a ring (non-condensed),
it is sufficient to add 10% to the density obtained by the main equation. For sulfone groups it is necessary to use this factor twice (20%).
Another special case is condensed ring systems like naphthalene. The density has to be increased by 7.5% for every ring; for naphthalene the resulting factor would be 15%.
If multiple corrections are needed, their factors have to be added, but the total multiplier must not exceed 130%.
Example calculation
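The worked example that followed this heading is not preserved in this excerpt. A sketch for water, using commonly quoted Girolami volume contributions (the specific values in the table below are assumptions to be checked against the original paper):

```python
# Assumed Girolami relative atomic volumes (to be verified against the
# original paper): H = 1; second-period elements (C, N, O, F) = 2;
# third-period elements (e.g. S, Cl) = 4.
VOLUME = {"H": 1.0, "C": 2.0, "N": 2.0, "O": 2.0, "F": 2.0, "S": 4.0, "Cl": 4.0}

def girolami_density(atoms, molar_mass, correction=0.0):
    """rho = M / (5 * V_s) in g/cm3, then the additive percentage
    correction for special groups, capped at 30% in total."""
    v_s = sum(VOLUME[el] * n for el, n in atoms.items())
    rho = molar_mass / (5.0 * v_s)
    return rho * (1.0 + min(correction, 0.30))

# Water: V_s = 2*1 + 2 = 4, base density 18.02 / 20 = 0.901 g/cm3,
# plus 10% for the hydroxyl group -> about 0.99 (experimental: 0.997).
rho_water = girolami_density({"H": 2, "O": 1}, 18.02, correction=0.10)
print(round(rho_water, 3))  # 0.991
```

The roughly 1% deviation from the experimental value is well within the method's stated RMS error.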
Quality
The author has given a root mean square (RMS) error of 0.049 g·cm−3 for 166 checked components. Only for two components (acetonitrile and dibromochloromethane) has an error greater than 0.1 g·cm−3 been found.
https://en.wikipedia.org/wiki/Haken-Kelso-Bunz%20model | The Haken-Kelso-Bunz (HKB) model is a theoretical model of motor coordination originally formulated by Hermann Haken, J. A. Scott Kelso and H. Bunz. The model attempts to provide a framework for understanding coordinated behavior in living things. It accounts for experimental observations on human bimanual coordination that revealed fundamental features of self-organization: multistability and phase transitions (switching). HKB is one of the most extensively tested quantitative models in the field of human movement behavior.
Phase Transitions ('Switches')
The HKB model differs from other motor coordination models in its treatment of phase transitions ('switches'). Kelso initially observed this phenomenon in an experiment on subjects' finger movements. Subjects oscillated their fingers rhythmically in the transverse plane (i.e., abduction-adduction) in one of two patterns, parallel or anti-parallel. In the parallel pattern, the finger muscles contract in an alternating fashion; in the anti-parallel pattern, the homologous finger muscles contract simultaneously. Kelso observed that when a subject begins in the parallel mode and increases the speed of movement, a spontaneous switch to symmetrical, anti-parallel movement occurs. This transition happens swiftly at a certain critical frequency. Surprisingly, after the switch has occurred and the movement rate decreases, the subjects remained in the symmetrical mode (they did not switch back). Kelso's study indicates that while humans can produce both patterns at low frequencies, only one of them, the symmetrical anti-parallel mode, remains stable as frequency is scaled beyond a critical value.
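The switching behavior can be sketched numerically. The HKB relative-phase equation is not reproduced in this excerpt; the standard form dφ/dt = −a·sin φ − 2b·sin 2φ is assumed here, where φ = 0 is the symmetric mode, φ = π the alternating mode, and the ratio b/a falls as movement frequency rises:

```python
import math

def simulate_phase(phi0, a, b, dt=0.01, steps=20000):
    """Euler-integrate the HKB relative-phase equation
    dphi/dt = -a*sin(phi) - 2*b*sin(2*phi)."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * math.sin(phi) - 2.0 * b * math.sin(2.0 * phi))
    return phi

# Slow movement (large b/a): the alternating mode (phi = pi) is stable.
slow = simulate_phase(phi0=math.pi - 0.3, a=1.0, b=0.5)

# Fast movement (b/a < 1/4): the alternating mode loses stability and the
# system switches to the symmetric mode (phi = 0) and stays there.
fast = simulate_phase(phi0=math.pi - 0.3, a=1.0, b=0.1)

print(round(slow, 2), round(fast, 2))  # 3.14 0.0
```

Linearizing at φ = π gives a growth rate of a − 4b, so the alternating mode is stable only while b/a > 1/4; once the frequency pushes b/a below that threshold, the switch occurs, and because φ = 0 remains stable for all b/a, the system does not switch back when the frequency is lowered.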
Prediction
The HKB model states that dynamic instability causes switching to occur. HKB measures stability in the following ways:
1. Critical slowing down. If a perturbation is applied to a system that takes it away from its stationary state, the time for a system to return to the stationary state |
https://en.wikipedia.org/wiki/ProQuest%20Dialog | Dialog is an online information service owned by ProQuest, who acquired it from Thomson Reuters in mid-2008.
Dialog was one of the predecessors of the World Wide Web as a provider of information, though not in form. The earliest form of the Dialog system was completed in 1966 at Lockheed under the direction of Roger K. Summit. According to its literature, it was "the world's first online information retrieval system to be used globally with materially significant databases". In the 1980s, a low-priced dial-up version of a subset of Dialog was marketed to individual users as Knowledge Index. This subset included INSPEC, MathSciNet, and over 200 other bibliographic and reference databases, as well as third-party retrieval vendors who would go to physical libraries to copy materials for a fee and send them to the service subscriber.
While owned by the Thomson Corporation, Dialog consisted of the Dialog, DataStar, Profound, and NewsEdge businesses. Dialog and DataStar were consolidated into Dialog. The news content from Profound and NewsEdge was consolidated, and the market research business from Profound was sold to MarketResearch.com. The NewsEdge business was eventually sold to Acquire Media, now Naviga. Prior to being owned by Thomson, MAID purchased Knight-Ridder Information, which included the Dialog and DataStar businesses; MAID then renamed itself the Dialog Corporation.
See also
Colorado Alliance of Research Libraries |
https://en.wikipedia.org/wiki/Alopecia%20areata | Alopecia areata, also known as spot baldness, is a condition in which hair is lost from some or all areas of the body. It often results in a few bald spots on the scalp, each about the size of a coin. Psychological stress and illness are possible factors in bringing on alopecia areata in individuals at risk, but in most cases there is no obvious trigger. People are generally otherwise healthy. In a few cases, all the hair on the scalp is lost (alopecia totalis), or all body hair is lost (alopecia universalis). Hair loss can be permanent, or temporary. It is distinct from pattern hair loss, which is common among males.
Alopecia areata is believed to be an autoimmune disease resulting from a breach in the immune privilege of the hair follicles. Risk factors include a family history of the condition. Among identical twins, if one is affected, the other has about a 50% chance of also being affected. The underlying mechanism involves failure by the body to recognize its own cells, with subsequent immune-mediated destruction of the hair follicle.
No cure for the condition is known. Some treatments, particularly triamcinolone injections and 5% minoxidil topical creams, are effective in speeding hair regrowth. Sunscreen, head coverings to protect from cold and sun, and glasses, if the eyelashes are missing, are also recommended. In more than 50% of cases of sudden-onset localized "patchy" disease, hair regrows within a year. In patients with only one or two patches, this one-year recovery will occur in up to 80%. However, most patients will have more than one episode over the course of a lifetime. In many patients, hair loss and regrowth occurs simultaneously over the course of several years. Among those in whom all body hair is lost, fewer than 10% recover.
About 0.15% of people are affected at any one time, and 2% of people are affected at some point in time. Onset is usually in childhood. Females are affected at higher rates than males.
Signs and symptoms
Typical f |
https://en.wikipedia.org/wiki/Gassmann%27s%20equation | The Gassmann equation, first described by Fritz Gassmann, is used in geophysics, and its relations are receiving more attention as seismic data are increasingly used for reservoir monitoring. The Gassmann equation is the most common way of performing fluid substitution modeling from one known fluid state to another.
Procedure
These formulations are from Avseth et al. (2006).
Given an initial set of velocities and densities, V_P1, V_S1, and ρ_1, corresponding to a rock with an initial set of fluids, you can compute the velocities and densities of the rock with another set of fluids. Often these velocities are measured from well logs, but they might also come from a theoretical model.
Step 1: Extract the dynamic bulk and shear moduli from V_P1, V_S1, and ρ_1:
K_sat1 = ρ_1 (V_P1² − (4/3) V_S1²),   μ_1 = ρ_1 V_S1²
Step 2: Apply Gassmann's relation, of the following form, to transform the saturated bulk modulus:
K_sat1 / (K_mineral − K_sat1) − K_fl1 / (φ (K_mineral − K_fl1)) = K_sat2 / (K_mineral − K_sat2) − K_fl2 / (φ (K_mineral − K_fl2))
where K_sat1 and K_sat2 are the rock bulk moduli saturated with fluid 1 and fluid 2, K_fl1 and K_fl2 are the bulk moduli of the fluids themselves, K_mineral is the bulk modulus of the mineral making up the rock, and φ is the rock's porosity.
Step 3: Leave the shear modulus unchanged (rigidity is independent of fluid type):
μ_2 = μ_1
Step 4: Correct the bulk density for the change in fluid:
ρ_2 = ρ_1 + φ (ρ_fl2 − ρ_fl1)
Step 5: Recompute the fluid-substituted velocities:
V_P2 = √((K_sat2 + (4/3) μ_2) / ρ_2),   V_S2 = √(μ_2 / ρ_2)
Rearranging for K_sat2
Given the relation in Step 2, let
A = K_sat1 / (K_mineral − K_sat1) − K_fl1 / (φ (K_mineral − K_fl1)) + K_fl2 / (φ (K_mineral − K_fl2))
then
K_sat2 = A K_mineral / (1 + A)
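The five steps above, together with the rearrangement for K_sat2, can be sketched in Python. This is a minimal illustration under assumptions: function and variable names are invented here, inputs are in SI units, and none of the validity checks discussed in the rock-physics literature are applied.

```python
import math

def gassmann_substitution(vp1, vs1, rho1, k_mineral, k_fl1, k_fl2,
                          rho_fl1, rho_fl2, phi):
    """Steps 1-5: replace fluid 1 by fluid 2 (SI units throughout)."""
    # Step 1: dynamic moduli from velocities and bulk density
    mu = rho1 * vs1 ** 2
    k_sat1 = rho1 * vp1 ** 2 - (4.0 / 3.0) * mu

    # Step 2: Gassmann's relation, rearranged for the new bulk modulus
    a = (k_sat1 / (k_mineral - k_sat1)
         - k_fl1 / (phi * (k_mineral - k_fl1))
         + k_fl2 / (phi * (k_mineral - k_fl2)))
    k_sat2 = a * k_mineral / (1.0 + a)

    # Step 3: shear modulus is unchanged by the fluid swap
    # Step 4: correct the bulk density for the new fluid
    rho2 = rho1 + phi * (rho_fl2 - rho_fl1)

    # Step 5: recompute the fluid-substituted velocities
    vp2 = math.sqrt((k_sat2 + (4.0 / 3.0) * mu) / rho2)
    vs2 = math.sqrt(mu / rho2)
    return vp2, vs2, rho2
```

A useful self-check of the rearranged relation is that substituting fluid 1 for fluid 2 and then back recovers the input velocities and density.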
Assumptions
Load induced pore pressure is homogeneous and identical in all pores
This assumption implies that the shear modulus of the saturated rock is the same as the shear modulus of the dry rock, μ_sat = μ_dry.
Porosity does not change with different saturating fluids
Gassmann fluid substitution requires that the porosity remain constant. The assumption is that, all other things being equal, different saturating fluids should not affect the porosity of the rock. This does not take into account diagenetic processes, such as cementation or dissolution, that vary with changing geochemical conditions in the pores. For example, quartz cement is more likely to precipitate in water-filled pores than it is in hydrocarbon-filled ones (Worden and Morad, 2000). So the same |
https://en.wikipedia.org/wiki/Hilbert%20manifold | In mathematics, a Hilbert manifold is a manifold modeled on Hilbert spaces. Thus it is a separable Hausdorff space in which each point has a neighbourhood homeomorphic to an infinite dimensional Hilbert space. The concept of a Hilbert manifold provides a possibility of extending the theory of manifolds to infinite-dimensional setting. Analogously to the finite-dimensional situation, one can define a differentiable Hilbert manifold by considering a maximal atlas in which the transition maps are differentiable.
Properties
Many basic constructions of the manifold theory, such as the tangent space of a manifold and a tubular neighbourhood of a submanifold (of finite codimension) carry over from the finite dimensional situation to the Hilbert setting with little change. However, in statements involving maps between manifolds, one often has to restrict consideration to Fredholm maps, that is, maps whose differential at every point is Fredholm. The reason for this is that Sard's lemma holds for Fredholm maps, but not in general. Notwithstanding this difference, Hilbert manifolds have several very nice properties.
Kuiper's theorem: If X is a compact topological space or has the homotopy type of a CW complex, then every (real or complex) Hilbert space bundle over X is trivial. In particular, every Hilbert manifold is parallelizable.
Every smooth Hilbert manifold can be smoothly embedded onto an open subset of the model Hilbert space.
Every homotopy equivalence between two Hilbert manifolds is homotopic to a diffeomorphism. In particular every two homotopy equivalent Hilbert manifolds are already diffeomorphic. This stands in contrast to lens spaces and exotic spheres, which demonstrate that in the finite-dimensional situation, homotopy equivalence, homeomorphism, and diffeomorphism of manifolds are distinct properties.
Although Sard's theorem does not hold in general, every continuous map from a Hilbert manifold can be arbitrarily closely approximated by a smooth map |
https://en.wikipedia.org/wiki/Clinical%20handover | Clinical handover, patient handover or handover is the transfer of professional responsibility and accountability for some or all aspects of care for a patient, or group of patients, to another person or professional group on a temporary or permanent basis. Failure in handover is a major source of preventable patient harm. Clinical handover is an international concern, and Australia and the United Kingdom have reviewed it and developed risk-reduction recommendations. Some strategies to improve handover include bedside handover, using SBAR, and using computerised handover sheets.
See also
Change-of-shift report (nursing) |
https://en.wikipedia.org/wiki/Supersymmetric%20WKB%20approximation | In physics, the supersymmetric WKB (SWKB) approximation is an extension of the WKB approximation that uses principles from supersymmetric quantum mechanics to provide estimations on energy eigenvalues in quantum-mechanical systems. Using the supersymmetric method, there are potentials that can be expressed in terms of a superpotential, W(x), such that
V_−(x) = W(x)² − (ħ/√(2m)) W′(x)
The SWKB approximation then writes the Born–Sommerfeld quantization condition from the WKB approximation in terms of the superpotential W.
The SWKB approximation for unbroken supersymmetry, to first order in ħ, is given by
∫_{x_L}^{x_R} √(2m(E_n − W(x)²)) dx = n π ħ,   n = 0, 1, 2, …
where E_n is the estimate of the energy of the n-th excited state, and x_L and x_R are the classical turning points, given by W(x_L)² = W(x_R)² = E_n.
The addition of the supersymmetric method provides several appealing qualities to this method. First, it is known that, by construction, the ground state energy will be exactly estimated. This is an improvement over the standard WKB approximation, which often has weaknesses at lower energies. Another property is that a class of potentials known as shape invariant potentials have their energy spectra estimated exactly by this first-order condition.
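In units with ħ = 2m = 1 (a conventional choice assumed here), the quantization condition reads ∫ √(E_n − W(x)²) dx = nπ between the turning points. The sketch below solves it numerically for the superpotential W(x) = x, the supersymmetric harmonic oscillator, for which the condition reproduces the exact spectrum E_n = 2n of H_− = p² + x² − 1; the function names are illustrative.

```python
import math

def swkb_integral(energy, w, x_left, x_right, steps=4000):
    # Midpoint-rule evaluation of the left-hand side of the SWKB
    # condition: integral of sqrt(E - W(x)^2) dx (units hbar = 2m = 1).
    h = (x_right - x_left) / steps
    total = 0.0
    for i in range(steps):
        val = energy - w(x_left + (i + 0.5) * h) ** 2
        if val > 0.0:
            total += math.sqrt(val) * h
    return total

def swkb_energy(n, w, turning_points, lo=1e-9, hi=100.0):
    # Bisect for the energy E_n at which the integral equals n*pi;
    # the integral is monotonically increasing in the energy.
    target = n * math.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        x_left, x_right = turning_points(mid)
        if swkb_integral(mid, w, x_left, x_right) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# SUSY harmonic oscillator: W(x) = x, turning points where W(x)^2 = E.
w = lambda x: x
turning_points = lambda e: (-math.sqrt(e), math.sqrt(e))
```

Since ∫ √(E − x²) dx between ±√E equals πE/2, the condition gives E_n = 2n exactly, and the numerical root-finder recovers this.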
See also
Quantum mechanics
Supersymmetric quantum mechanics
Supersymmetry
WKB approximation |
https://en.wikipedia.org/wiki/Mathematical%20Optimization%20Society | The Mathematical Optimization Society (MOS), known as the Mathematical Programming Society until 2010, is an international association of researchers active in optimization. The MOS encourages the research, development, and use of optimization—including mathematical theory, software implementation, and practical applications (operations research).
Founded in 1973, the MOS has several activities: publishing journals and a newsletter, organizing and cosponsoring conferences, and awarding prizes.
History
In the 1960s, mathematical programming methods were gaining increasing importance both in mathematical theory and in industrial application. As the need for a discussion forum for researchers in the field arose, the journal Mathematical Programming was founded in 1970.
Based on activities by George Dantzig, Albert Tucker, Philip Wolfe and others, the MOS was founded in 1973, with George Dantzig as its first president.
Activities
Conferences
Several conferences are organized or co-organized by the Mathematical Optimization Society, for instance:
The International Symposium on Mathematical Programming (ISMP), organized every three years, is open to all fields of mathematical programming.
The Integer Programming and Combinatorial Optimization (IPCO) conference, in Integer programming, is held in those years when there is no ISMP.
The International Conference on Continuous Optimization (ICCOPT), the continuous analog of the IPCO conference, was first held in 2004.
The International Conference on Stochastic Programming (ICSP) takes place every three years and is devoted to optimization using uncertain input data.
The Nordic MOS conference is a biannual meeting of researchers from Scandinavia working in all fields of optimization.
At the Université de Montréal, annual seminars on changing topics are organized by the MOS.
Journals and other publications
There are several publications by the Mathematical Optimization Society:
The journal Mathematical Programming (serie |
https://en.wikipedia.org/wiki/PhysicsOverflow | PhysicsOverflow is a physics website that serves as a post-publication open peer review platform for research papers in physics, as well as a collaborative blog and online community of physicists. It allows users to ask, answer and comment on graduate-level physics questions, post and review manuscripts from ArXiv (which lists PhysicsOverflow discussion pages among its trackbacks) and other sources, and vote on both forms of content.
In addition to the two primary forms of content, the PhysicsOverflow community also welcomes discussions on unsolved problems, and hosts a chat section for discussions on topics generally of interest to physicists and students of physics, such as those related to recent events in physics, physics academia, and the publishing process.
History
PhysicsOverflow was started in April 2014 as a physics-equivalent of MathOverflow by Rahel Knöpfel, a physics PhD at the University of Rostock, high-school student Abhimanyu Pallavi Sudhir, and Roger Cattin, a retired professor of computer science at the University of Applied Sciences, Switzerland. The site was initially a mere question-and-answer forum, as it was started by users dissatisfied by the policies of the Physics Stack Exchange, but it was eventually expanded to include a Reviews section in October 2014.
Moderation practices
PhysicsOverflow is well-known for its liberal moderation policy and hesitation to block contributors except for spam, as reflected in the website's bill of "user rights". The content is largely community-moderated, much like MathOverflow, although exceptions have been recorded.
Although the site's moderation policy is publicly available as part of the moderator manual, the site has been criticised for the excessive dispersion of policy-related material, such as the FAQ, the Bill of Rights, the moderator list and the Community Moderation threads, leading to reduced transparency. In response, the site's administrators posted a bulletin of all moderation-related cont |
https://en.wikipedia.org/wiki/Arthur%E2%80%93Selberg%20trace%20formula | In mathematics, the Arthur–Selberg trace formula is a generalization of the Selberg trace formula from the group SL2 to arbitrary reductive groups over global fields, developed by James Arthur in a long series of papers from 1974 to 2003. It describes the character of the representation of G(A) on the discrete part of L²(G(F)\G(A)) in terms of geometric data, where G is a reductive algebraic group defined over a global field F and A is the ring of adeles of F.
There are several different versions of the trace formula. The first version was the unrefined trace formula, whose terms depend on truncation operators and have the disadvantage that they are not invariant. Arthur later found the invariant trace formula and the stable trace formula which are more suitable for applications. The simple trace formula is less general but easier to prove. The local trace formula is an analogue over local fields.
Jacquet's relative trace formula is a generalization where one integrates the kernel function over non-diagonal subgroups.
Notation
F is a global field, such as the field of rational numbers.
A is the ring of adeles of F.
G is a reductive algebraic group defined over F.
The compact case
In the case when G(F)\G(A) is compact, the representation splits as a direct sum of irreducible representations, and the trace formula is similar to the Frobenius formula for the character of the representation induced from the trivial representation of a subgroup of finite index.
In the compact case, which is essentially due to Selberg, the groups G(F) and G(A) can be replaced by any discrete subgroup Γ of a locally compact group G with Γ\G compact. The group G acts on the space of functions on Γ\G by the right regular representation R, and this extends to an action of the group ring of G, considered as the ring of functions on G. The character of this representation is given by a generalization of the Frobenius formula as follows.
The action of a function f on a function φ on Γ\G is given by
(R(f) φ)(x) = ∫_G f(y) φ(xy) dy
In other words, R(f) is an integral o |
https://en.wikipedia.org/wiki/Grain%20per%20gallon | The grain per gallon (gpg) is a unit of water hardness defined as 1 grain (64.8 milligrams) of calcium carbonate dissolved in 1 US gallon of water (3.785412 L). It translates into 1 part in about 58,000 parts of water, or 17.1 parts per million (ppm). The analogous unit defined in terms of the imperial gallon is the Clark degree.
Usage
Calcium and magnesium ions present as sulfates, chlorides, carbonates and bicarbonates cause water to be hard. Water chemists measure water impurities in parts per million (ppm). For understandability, hardness ordinarily is expressed in grains of hardness per gallon of water (gpg). The two systems can be converted mathematically.
Measurement of water hardness
According to the Water Quality Association:
soft: 0-3.5 grains per gallon (gpg)
moderate: 3.5-7.0 gpg
hard: 7.0-10.5 gpg
very hard: over 10.5 gpg
Conversions
1 gpg = 0.017118061 kg/m3
1 gpg = 1 pound per 7000 gallons
1 gpg = 17.12 mg/L (ppm)
1 Clark degree = 0.8327 gpg
1 dGH = 1.042645169 gpg
1 mg/L (ppm) = 0.058417831 grains per US gallon
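The conversions and the Water Quality Association bands above can be expressed directly. A small sketch; how to assign the shared boundary values (3.5, 7.0, 10.5 gpg) is an assumption here, since the listed ranges overlap at their endpoints.

```python
def gpg_to_mg_per_l(gpg):
    # 64.79891 mg per grain over 3.785412 L per US gallon ~= 17.118 mg/L
    return gpg * 64.79891 / 3.785412

def classify_hardness(gpg):
    # WQA bands from the list above; boundaries assigned to the lower
    # band by assumption.
    if gpg <= 3.5:
        return "soft"
    if gpg <= 7.0:
        return "moderate"
    if gpg <= 10.5:
        return "hard"
    return "very hard"
```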
See also
Degrees of General Hardness |
https://en.wikipedia.org/wiki/Dirac%20matter | The term Dirac matter refers to a class of condensed matter systems which can be effectively described by the Dirac equation. Even though the Dirac equation itself was formulated for fermions, the quasi-particles present within Dirac matter can be of any statistics. As a consequence, Dirac matter can be distinguished into fermionic, bosonic or anyonic Dirac matter. Prominent examples of Dirac matter are graphene, topological insulators, Dirac semimetals, Weyl semimetals, various high-temperature superconductors with d-wave pairing and liquid helium-3. The effective theory of such systems is classified by a specific choice of the Dirac mass, the Dirac velocity, the Dirac matrices and the space-time curvature. The universal treatment of the class of Dirac matter in terms of an effective theory leads to common features with respect to the density of states, the heat capacity and impurity scattering.
Definition
Members of the class of Dirac matter differ significantly in nature. However, all examples of Dirac matter are unified by similarities within the algebraic structure of an effective theory describing them.
General
The general definition of Dirac matter is a condensed matter system where the quasi-particle excitations can be described in curved spacetime by the generalised Dirac equation:
In the above definition denotes a covariant vector depending on the -dimensional momentum ( space time dimension), is the vierbein describing the curvature of the space, the quasi-particle mass and the Dirac velocity. Note that since in Dirac matter the Dirac equation gives the effective theory of the quasiparticles, the energy from the mass term is , not the rest mass of a massive particle. refers to a set of Dirac matrices, where the defining for the construction is given by the anticommutation relation,
is the Minkowski metric with signature (+ - - -) and is the -dimensional unit matrix.
In all equations, implicit summation over and is used (Einstein conventi |
https://en.wikipedia.org/wiki/Bacterial%20genetics | Bacterial genetics is the subfield of genetics devoted to the study of bacterial genes. Bacterial genetics differs subtly from eukaryotic genetics; however, bacteria still serve as a good model for animal genetic studies. One of the major distinctions between bacterial and eukaryotic genetics stems from bacteria's lack of membrane-bound organelles (this is true of all prokaryotes; while prokaryotic organelles do exist, they are bounded not by lipid membranes but by shells of proteins), necessitating that protein synthesis occur in the cytoplasm.
Like other organisms, bacteria also breed true and maintain their characteristics from generation to generation, yet at the same time exhibit variations in particular properties in a small proportion of their progeny. Though heritability and variation in bacteria had been noticed from the early days of bacteriology, it was not realised then that bacteria too obey the laws of genetics. Even the existence of a bacterial nucleus was a subject of controversy. The differences in morphology and other properties were attributed by Nägeli in 1877 to bacterial pleomorphism, which postulated the existence of only a single, or a few, species of bacteria, each possessing a protean capacity for variation. With the development and application of precise methods of pure culture, it became apparent that different types of bacteria retained constant form and function through successive generations. This led to the concept of monomorphism.
Transformation
Transformation in bacteria was first observed in 1928 by Frederick Griffith and later (in 1944) examined at the molecular level by Oswald Avery and his colleagues who used the process to demonstrate that DNA was the genetic material of bacteria. In transformation, a cell takes up extraneous DNA found in the environment and incorporates it into its genome (genetic material) through recombination. Not all bacteria are competent to be transformed, and not all extracellu |
https://en.wikipedia.org/wiki/Quantum%20chromodynamics%20binding%20energy | Quantum chromodynamics binding energy (QCD binding energy), gluon binding energy or chromodynamic binding energy is the energy binding quarks together into hadrons. It is the energy of the field of the strong force, which is mediated by gluons. Motion-energy and interaction-energy contribute most of the hadron's mass.
Source of mass
Most of the mass of hadrons is actually QCD binding energy, through mass–energy equivalence. This phenomenon is related to chiral symmetry breaking. In the case of nucleons – protons and neutrons – QCD binding energy forms about 99% of the nucleon's mass. That is, assuming that the kinetic energy of the hadron's constituents, moving at near the speed of light, which contributes greatly to the hadron mass, is counted as part of the QCD binding energy. For protons, the sum of the rest masses of the three valence quarks (two up quarks and one down quark) is approximately 9.4 MeV/c², while the proton's total mass is about 938.3 MeV/c². For neutrons, the sum of the rest masses of the three valence quarks (two down quarks and one up quark) is approximately 11.9 MeV/c², while the neutron's total mass is about 939.6 MeV/c². Considering that nearly all of the atom's mass is concentrated in the nucleons, this means that about 99% of the mass of everyday matter (baryonic matter) is, in fact, chromodynamic binding energy.
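Taking commonly quoted particle-data values for the quark and proton rest masses (outside figures, not from this article, and the light-quark masses carry sizeable uncertainties), the roughly 99% figure is a one-line calculation:

```python
# Approximate rest masses in MeV/c^2 (assumed, commonly quoted values).
up_quark, down_quark = 2.2, 4.7
proton = 938.3

valence_sum = 2 * up_quark + down_quark      # uud: about 9.1 MeV/c^2
binding_fraction = (proton - valence_sum) / proton
print(round(binding_fraction * 100))         # roughly 99 percent
```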
Gluon energy
While gluons are massless, they still possess energy – chromodynamic binding energy. In this way, they are similar to photons, which are also massless particles carrying energy – photon energy. The amount of energy per single gluon, or "gluon energy", cannot be calculated. Unlike photon energy, which is quantifiable, described by the Planck–Einstein relation and depends on a single variable (the photon's frequency), no formula exists for the quantity of energy carried by each gluon. While the effects of a single photon can be observed, single gluons have not been observed outside of a hadron. Due to the mathematical complexity of quantum chromodynamics and the somewhat chao |
https://en.wikipedia.org/wiki/Boyce%E2%80%93Codd%20normal%20form | Boyce–Codd normal form (or BCNF or 3.5NF) is a normal form used in database normalization. It is a slightly stronger version of the third normal form (3NF). BCNF was developed in 1974 by Raymond F. Boyce and Edgar F. Codd to address certain types of anomalies not dealt with by 3NF as originally defined.
If a relational schema is in BCNF then all redundancy based on functional dependency has been removed, although other types of redundancy may still exist. A relational schema R is in Boyce–Codd normal form if and only if for every one of its dependencies X → Y, at least one of the following conditions hold:
X → Y is a trivial functional dependency (Y ⊆ X),
X is a superkey for schema R.
Note that if a relational schema is in BCNF, then it is in 3NF.
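The definition can be checked mechanically: compute attribute closures under the functional dependencies and verify that the left-hand side of every non-trivial FD is a superkey. A minimal sketch; the FD encoding of the booking example is illustrative, not taken verbatim from the article.

```python
def closure(attrs, fds):
    """Closure of an attribute set under FDs, each FD given as a pair
    (frozenset(lhs), frozenset(rhs))."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def is_bcnf(schema, fds):
    """BCNF: every non-trivial FD X -> Y has a superkey as its X."""
    schema = frozenset(schema)
    for lhs, rhs in fds:
        if rhs <= lhs:                       # trivial: Y subset of X
            continue
        if closure(lhs, fds) != schema:      # X is not a superkey
            return False
    return True

# Simplified booking-style example: Rate type determines the Court,
# but Rate type alone is not a superkey, so BCNF is violated.
fds = [(frozenset({"RateType"}), frozenset({"Court"})),
       (frozenset({"Court", "Start"}), frozenset({"End", "RateType"}))]
```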
3NF table always meeting BCNF (Boyce–Codd normal form)
Only in rare cases does a 3NF table not meet the requirements of BCNF. A 3NF table that does not have multiple overlapping candidate keys is guaranteed to be in BCNF. Depending on what its functional dependencies are, a 3NF table with two or more overlapping candidate keys may or may not be in BCNF.
An example of a 3NF table that does not meet BCNF is:
Each row in the table represents a court booking at a tennis club. That club has one hard court (Court 1) and one grass court (Court 2).
A booking is defined by its Court and the period for which the Court is reserved
Additionally, each booking has a Rate Type associated with it. There are four distinct rate types:
SAVER, for Court 1 bookings made by members
STANDARD, for Court 1 bookings made by non-members
PREMIUM-A, for Court 2 bookings made by members
PREMIUM-B, for Court 2 bookings made by non-members
The table's superkeys are:
S1 = {Court, Start time}
S2 = {Court, End time}
S3 = {Rate type, Start time}
S4 = {Rate type, End time}
S5 = {Court, Start time, End time}
S6 = {Rate type, Start time, End time}
S7 = {Court, Rate type, Start time}
S8 = {Court, Rate type, End time}
ST = {Court, Rate type |
https://en.wikipedia.org/wiki/Homogeneous%20space | In mathematics, a homogeneous space is, very informally, a space that looks the same everywhere, as you move through it, with movement given by the action of a group. Homogeneous spaces occur in the theories of Lie groups, algebraic groups and topological groups. More precisely, a homogeneous space for a group G is a non-empty manifold or topological space X on which G acts transitively. The elements of G are called the symmetries of X. A special case of this is when the group G in question is the automorphism group of the space X – here "automorphism group" can mean isometry group, diffeomorphism group, or homeomorphism group. In this case, X is homogeneous if intuitively X looks locally the same at each point, either in the sense of isometry (rigid geometry), diffeomorphism (differential geometry), or homeomorphism (topology). Some authors insist that the action of G be faithful (non-identity elements act non-trivially), although the present article does not. Thus there is a group action of G on X which can be thought of as preserving some "geometric structure" on X, and making X into a single G-orbit.
Formal definition
Let X be a non-empty set and G a group. Then X is called a G-space if it is equipped with an action of G on X. Note that automatically G acts by automorphisms (bijections) on the set. If X in addition belongs to some category, then the elements of G are assumed to act as automorphisms in the same category. That is, the maps on X coming from elements of G preserve the structure associated with the category (for example, if X is an object in Diff then the action is required to be by diffeomorphisms). A homogeneous space is a G-space on which G acts transitively.
Succinctly, if X is an object of the category C, then the structure of a G-space is a homomorphism ρ : G → Aut(X) into the group of automorphisms of the object X in the category C. The pair (X, ρ) defines a homogeneous space provided ρ(G) is a transitive group of symmetries of the underlying set of X.
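For a finite group acting on a finite set, homogeneity is just transitivity of the action, which can be checked by computing a single orbit. A small illustrative sketch; the choice of S_3 acting on three points is an assumption, not from the article.

```python
from itertools import permutations

def orbit(point, group, act):
    """All points reachable from `point` under the group action."""
    seen = {point}
    frontier = [point]
    while frontier:
        x = frontier.pop()
        for g in group:
            y = act(g, x)
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

def is_homogeneous(space, group, act):
    """A G-space is homogeneous iff the action is transitive,
    i.e. the whole space is a single orbit."""
    space = set(space)
    return orbit(next(iter(space)), group, act) == space

# The symmetric group S_3 acting on {0, 1, 2} by permutation:
# any point can be moved to any other, so the space is one orbit.
s3 = list(permutations(range(3)))
act = lambda g, x: g[x]
```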
|
https://en.wikipedia.org/wiki/Ozogamicin | The term ozogamicin in the names of monoclonal antibodies or antibody-drug conjugates indicates that they are linked to a cytotoxic agent from the class of calicheamicins.
See also
Gemtuzumab ozogamicin
Inotuzumab ozogamicin |
https://en.wikipedia.org/wiki/Amanita%20fulva | Amanita fulva, commonly called the tawny grisette or the orange-brown ringless amanita, is a basidiomycete mushroom of the genus Amanita. It is found frequently in deciduous and coniferous forests of Europe, and possibly North America.
Taxonomy and naming
Amanita fulva was first described by Jacob Christian Schäffer in 1774. Historically, both the tawny grisette and the grisette (A. vaginata) were placed in the genus Amanitopsis due to their lack of a ring, unlike other Amanita species. However this distinction is now seen as insufficient to warrant a separate genus. Nowadays, A. fulva and similar ringless species of Amanita are placed in the section Vaginatae ss according to the classification of Bas.
Description
The cap is orange-brown, paler towards the margin, and darker (even very dark brown) in the center, up to 10 cm in diameter. It develops an umbo when expanded, and has a strongly striated margin. Its surface is smooth, slightly sticky and slippery when moist and glistens; later it may dry. The gills are free, close, and broad. The flesh is white to cream. The stem or stipe is white and smooth or powdery, sometimes tinged with orange-brown and with very fine hairs. It is slender, ringless, hollow and quite fragile, tapering towards the top; up to 15 cm tall and 1–1.5 cm in thickness. The universal veil which initially encapsulates the fruiting body is torn and develops into a white, sack-like volva with characteristic rusty-brown blemishes. The cap is usually free of volval remnants. Infrequently, roughly polygonal pieces of the veil may remain on the surface. The spores are white, 9 × 12 μm or (9.0-) 10.0 - 12.5 (-19.3) x (8.2-) 9.3 - 12.0 (-15.5) μm in size, globose; nonamyloid.
Distribution and habitat
Amanita fulva, distributed throughout Europe, occurs in a variety of forests. It is generally found with oak (Quercus), birch (Betula), spruce (Picea), pine (Pinus), chestnut (Castanea) and alder (Alnus), with which it forms mycorrhizae. It is often fo |
https://en.wikipedia.org/wiki/D-Wave%20Two | D-Wave Two (project code name Vesuvius) is the second commercially available quantum computer, and the successor to the first commercially available quantum computer, D-Wave One. Both computers were developed by Canadian company D-Wave Systems. The computers are not general purpose, but rather are designed for quantum annealing. Specifically, the computers are designed to use quantum annealing to solve a single type of problem known as quadratic unconstrained binary optimization. As of 2015, it was still debated whether large-scale entanglement takes place in D-Wave Two, and whether current or future generations of D-Wave computers will have any advantage over classical computers.
Processor
D-Wave Two has a QPU (quantum processing unit) of 512 qubits—an improvement over the D-Wave One series' QPUs of about 128 qubits. The number of qubits can vary from chip to chip, due to variations in manufacturing. The increase in qubit count for the D-Wave Two was accomplished by tiling the qubit pattern of the D-Wave One. This pattern, named chimera by D-Wave Systems, has limited connectivity such that a given qubit can only interact with at most six other qubits. As with the D-Wave One, this restricted connectivity greatly limits the optimization problems that can be approached with the hardware.
Quantum computing
In March 2013, several groups of researchers at the Adiabatic Quantum Computing workshop at the Institute of Physics in London produced evidence of quantum entanglement in D-Wave CPUs. In March 2014, researchers from University College London and the University of Southern California corroborated their findings; in their tests, the D-Wave Two exhibited the quantum physics outcome that it should while not showing three different classical physics outcomes.
In May 2013, Catherine McGeoch verified that D-Wave Two finds solutions to a synthetic benchmark set of Ising spin optimization problems. Boixo et al. (2014) presented evidence that the D-Wave Two performs quantum annealing, |
https://en.wikipedia.org/wiki/Approximate%20Competitive%20Equilibrium%20from%20Equal%20Incomes | Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is a procedure for fair item assignment. It was developed by Eric Budish.
Background
CEEI (Competitive Equilibrium from Equal Incomes) is a fundamental rule for fair division of divisible resources. It divides the resources according to the outcome of the following hypothetical process:
Each agent receives a single unit of fiat money. This is the Equal Incomes part of CEEI.
The agents trade freely until the market attains a Competitive Equilibrium. This is a price-vector and an allocation, such that (a) each allocated bundle is optimal to its agent given his/her income - the agent cannot purchase a better bundle with the same income, and (b) the market clears - the sum of all allocations exactly equals the initial endowment.
The equilibrium allocation is provably envy free and Pareto efficient. Moreover, when the agents have linear utility functions, the CEEI allocation can be computed efficiently.
Unfortunately, when there are indivisibilities, a CEEI does not always exist, so it cannot be used directly for fair item assignment. However, it can be approximated, and the approximation has good fairness, efficiency and strategic properties.
Assumptions
A-CEEI only assumes that the agents know how to rank bundles of items. The ranking need not be weakly additive nor even monotone.
Procedure
A-CEEI with parameters divides the resources according to the outcome of the following hypothetical process:
Approximate-EI: each agent receives an income between 1 and . The exact income of each agent can be determined randomly, or by seniority (seniors can get a slightly higher income).
Approximate-CE: a price-vector and an allocation are calculated, such that (a) each allocated bundle is optimal to its agent given its budget, and (b) the market "almost" clears: the Euclidean distance between the sum of all allocations and the initial endowment is at most .
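Condition (b) of the Approximate-CE can be illustrated for a given price vector: each agent buys the best-ranked bundle they can afford, and the clearing error is the Euclidean distance between aggregate demand and the endowment. A toy sketch; finding prices and incomes that guarantee a small error is the hard part of A-CEEI and is not attempted here, and all names are illustrative.

```python
import math

def best_affordable(ranking, prices, budget):
    """Highest-ranked affordable bundle; `ranking` lists bundles
    (frozensets of item names) from best to worst."""
    for bundle in ranking:
        if sum(prices[item] for item in bundle) <= budget:
            return bundle
    return frozenset()

def clearing_error(prices, budgets, rankings, supply):
    """Euclidean distance between aggregate demand and the endowment,
    i.e. how far the given prices are from exact market clearing."""
    demand = {item: 0 for item in supply}
    for budget, ranking in zip(budgets, rankings):
        for item in best_affordable(ranking, prices, budget):
            demand[item] += 1
    return math.sqrt(sum((demand[i] - supply[i]) ** 2 for i in supply))
```

With two agents whose incomes differ slightly (the "approximate equal incomes"), some price vectors clear the market exactly while others leave excess demand.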
Budish proves that, for any , there exists -CE |
https://en.wikipedia.org/wiki/Carotenoid | Carotenoids () are yellow, orange, and red organic pigments that are produced by plants and algae, as well as several bacteria, archaea, and fungi. Carotenoids give the characteristic color to pumpkins, carrots, parsnips, corn, tomatoes, canaries, flamingos, salmon, lobster, shrimp, and daffodils. Over 1,100 identified carotenoids can be further categorized into two classes: xanthophylls (which contain oxygen) and carotenes (which are purely hydrocarbons and contain no oxygen).
All are derivatives of tetraterpenes, meaning that they are produced from 8 isoprene units and contain 40 carbon atoms. In general, carotenoids absorb wavelengths ranging from 400 to 550 nanometers (violet to green light). This causes the compounds to be deeply colored yellow, orange, or red. Carotenoids are the dominant pigment in autumn leaf coloration of about 15-30% of tree species, but many plant colors, especially reds and purples, are due to polyphenols.
Carotenoids serve two key roles in plants and algae: they absorb light energy for use in photosynthesis, and they provide photoprotection via non-photochemical quenching. Carotenoids that contain unsubstituted beta-ionone rings (including β-carotene, α-carotene, β-cryptoxanthin, and γ-carotene) have vitamin A activity (meaning that they can be converted to retinol). In the eye, lutein, meso-zeaxanthin, and zeaxanthin are present as macular pigments whose importance in visual function, as of 2016, remains under clinical research.
Structure and function
Carotenoids are produced by all photosynthetic organisms and are primarily used as accessory pigments to chlorophyll in the light-harvesting part of photosynthesis.
They are highly unsaturated with conjugated double bonds, which enables carotenoids to absorb light of various wavelengths. At the same time, the terminal groups regulate the polarity and properties within lipid membranes.
Most carotenoids are tetraterpenoids, regular C40 isoprenoids. Several modifications to these |
https://en.wikipedia.org/wiki/Concanavalin%20A | Concanavalin A (ConA) is a lectin (carbohydrate-binding protein) originally extracted from the jack-bean (Canavalia ensiformis). It is a member of the legume lectin family. It binds specifically to certain structures found in various sugars, glycoproteins, and glycolipids, mainly internal and nonreducing terminal α-D-mannosyl and α-D-glucosyl groups. Its physiological function in plants, however, is still unknown. ConA is a plant mitogen, and is known for its ability to stimulate mouse T-cell subsets giving rise to four functionally distinct T cell populations, including precursors to regulatory T cells; a subset of human suppressor T-cells is also sensitive to ConA. ConA was the first lectin to be available on a commercial basis, and is widely used in biology and biochemistry to characterize glycoproteins and other sugar-containing entities on the surface of various cells. It is also used to purify glycosylated macromolecules in lectin affinity chromatography, as well as to study immune regulation by various immune cells.
Structure and properties
Like most lectins, ConA is a homotetramer: each subunit (26.5 kDa, 235 amino acids, heavily glycated) binds two metal ions, usually a Mn2+ and a Ca2+. It has D2 symmetry. Its tertiary structure has been elucidated, as has the molecular basis of its interactions with metals and of its affinity for the sugars mannose and glucose.
ConA binds specifically to α-D-mannosyl and α-D-glucosyl residues (two hexoses differing only in the alcohol on carbon 2) in the terminal position of branched structures of N-glycans (rich in α-mannose, or hybrid and bi-antennary glycan complexes). It has four binding sites, corresponding to the four subunits. The molecular weight is 104–112 kDa and the isoelectric point (pI) is in the range 4.5–5.5.
ConA can also initiate cell division (mitogenesis), primarily acting on T-lymphocytes, by stimulating their energy metabolism within seconds of exposure.
Maturation process
ConA a |
https://en.wikipedia.org/wiki/Syrup%20of%20Maidenhair | Syrup of Maidenhair, or Capillaire, is a beverage. It is a syrup made from adiantum (maidenhair fern) leaves. The concentrate is sweetened with sugar or honey and is mixed with a liquid, most commonly water or milk, before drinking.
Uses
In Portugal a drink called Capilé is made of syrup of maidenhair with grated lemon zest and cold water. More modern versions use orange flower water, water and sugar.
In 17th-century Bavaria, it was added to a hot drink made from eggs, milk, and tea. In 18th-century Europe, it was used in popular mixed milk drinks.
It is an ingredient in a popular 19th-century mixed drink called Gin Punch.
See also
List of syrups |
https://en.wikipedia.org/wiki/Best%20worst%20method | Best Worst Method (BWM) is a multi-criteria decision-making (MCDM) method that was proposed by Dr. Jafar Rezaei in 2015. The method is used to evaluate a set of alternatives with respect to a set of decision criteria. The BWM is based on pairwise comparisons of the decision criteria. That is, after identifying the decision criteria by the decision-maker (DM), two criteria are selected by the DM: the best criterion and the worst criterion. The best criterion is the one that has the most important role in making the decision, while the worst criterion has the opposite role. The DM then gives his/her preferences of the best criterion over all the other criteria and also his/her preferences of all the criteria over the worst criterion using a number from a predefined scale (e.g. 1 to 9). These two sets of pairwise comparisons are used as input for an optimization problem, the optimal results of which are the weights of the criteria. The salient feature of the BWM is that it uses a structured way to generate pairwise comparisons which leads to reliable results. |
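The optimization step can be illustrated concretely. In the fully consistent case (where the best-to-criterion and criterion-to-worst values multiply to the best-to-worst value for every criterion), the optimal deviation is zero and the weights have a closed form; the general inconsistent case requires a min–max or linear-programming solver, which is omitted here. The sketch below uses illustrative names, not code from the method's authors:

```python
def bwm_weights_consistent(a_best):
    """Closed-form BWM weights for a fully consistent best-to-others
    vector a_best: w_best / w_j = a_best[j] and sum(w) = 1, so the
    optimal deviation xi* is zero."""
    inv = [1.0 / a for a in a_best]
    total = sum(inv)
    return [v / total for v in inv]

def bwm_xi(w, a_best, a_worst, best, worst):
    """Maximum deviation of a candidate weight vector from the stated
    preferences -- the objective the original (non-linear) BWM
    minimises over the weight simplex."""
    n = len(w)
    devs = [abs(w[best] / w[j] - a_best[j]) for j in range(n) if j != best]
    devs += [abs(w[j] / w[worst] - a_worst[j]) for j in range(n) if j != worst]
    return max(devs)

# three criteria; criterion 0 is best, criterion 2 is worst
a_best, a_worst = [1, 2, 4], [4, 2, 1]   # consistent: a_Bj * a_jW == 4
w = bwm_weights_consistent(a_best)
print([round(x, 4) for x in w])          # [0.5714, 0.2857, 0.1429]
print(bwm_xi(w, a_best, a_worst, 0, 2))  # 0.0 (fully consistent)
```

A nonzero `bwm_xi` at the optimum signals inconsistency in the decision-maker's comparisons, which the method uses as a consistency indicator.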
https://en.wikipedia.org/wiki/CUBRIC | The Cardiff University Brain Research Imaging Centre (CUBRIC) is a brain imaging centre, part of Cardiff University's Science and Innovation Campus in Cardiff, Wales, United Kingdom. When it expanded in 2016, it was considered the most advanced brain imaging centre in Europe.
Building
Construction
CUBRIC was established in the Cathays Park campus of Cardiff University in 2006, and moved to a new building in the Maindy Park campus in June 2016. The new building was constructed on old railway land, with the railway aiding in the delivery of the larger scanners. It cost £44,000,000, partially funded by Cardiff University, and partially by the Welsh Government. It was officially opened by Queen Elizabeth II on 7 June 2016.
Awards
The new building was awarded the title of Life Science Research Building 2017 by the UK Science Park Association. It also received the "Project of the Year" and "Design Through Innovation" awards from the Royal Institution of Chartered Surveyors, who praised it for its "precise and beautifully detailed multi-sensory design". It was also a contender for the National Eisteddfod of Wales Gold Medal for Architecture in 2017. It has been designed to create a relaxing environment for volunteers, with large windows and timber structures.
Research
Cardiff University's School of Psychology created CUBRIC to facilitate interdisciplinary brain research, using multiple neuroimaging machines and laboratory techniques. The centre houses:
4 machines for magnetic resonance imaging (MRI)
magnetoencephalography (MEG)
electroencephalography (EEG) research
5 electrical brain stimulation (EBS) laboratories
10 cognitive method laboratories
the first connectome scanner outside the United States, a Siemens 3 Tesla Connectom system.
A range of cognitive neuroscience studies are being carried out at CUBRIC, covering areas such as sleep research and curiosity research. The centre aims to investigate neurological aspects of conditions such as epilepsy, Alzheim |
https://en.wikipedia.org/wiki/Screen%20reading | Screen reading is the act of reading a text on a computer screen, smartphone, e-book reader,
Discovery
Louis Émile Javal, a French ophthalmologist and founder of an ophthalmology laboratory in Paris, is credited with the introduction of the term saccades into eye movement research. Javal discovered that while reading, one's eyes tend to jump across the text in saccades, and stop intermittently along each line in fixations.
Because of the limited technology of the time, eye movement was initially observed with the naked eye; from the late 19th century to the mid-20th century, eye-tracking experiments were conducted in an attempt to discover a pattern in eye fixations while reading.
Research
F-Pattern
In a 1997 study conducted by Jakob Nielsen, a web usability expert who co-founded the usability consulting company Nielsen Norman Group with Donald Norman, it was discovered that people generally read 25% slower on a computer screen than on a printed page. The researchers state that this holds only when reading on an older type of computer screen with a low scan rate.
In an additional study done in 2006, Nielsen also discovered that people read Web pages in an F-shaped pattern that consists of two horizontal stripes followed by a vertical stripe. He had 232 participants fitted with eye-tracking cameras to trace their eye movements as they read online texts and webpages. The findings showed that people do not read the text on webpages word-by-word, but instead generally read horizontally across the top of the webpage, then in a second horizontal movement slightly lower on the page, and lastly scan vertically down the left side of the screen.
The Software Usability Research Laboratory at Wichita State University conducted a follow-up study in 2007, testing eye-gaze patterns while searching versus browsing a website, and the results confirmed that users appeared to follow Nielsen's 'F' pattern while browsing and searching through text-based pages.
A grou |
https://en.wikipedia.org/wiki/Koo%20%28social%20network%29 | Koo is an Indian microblogging and social networking service, owned by Bangalore-based Bombinate Technologies. It was co-founded by entrepreneurs Aprameya Radhakrishna and Mayank Bidawatka. The app was launched in early 2020; it won the government's Atmanirbhar App Innovation Challenge which selected the best apps from some 7,000 entries across the country.
As of November 2022, the company is valued at over $275 million. Investors in Bombinate Technologies include Tiger Global, Blume Ventures, Kalaari Capital and Accel Partners India, and former Infosys CFO TV Mohandas Pai's 3one4 Capital.
History
Initial growth
According to statistics provided by analytics provider Sensor Tower, Koo saw 2.6 million installs from Indian app stores in 2020, compared to 2.8 crore (28 million) installs observed for Twitter. From February 6 to February 11, the installations of Koo increased rapidly. The app increased in popularity after a weeklong standoff between Twitter and the Government of India over Twitter's refusal to block accounts during the 2020–2021 Indian farmers' protest. The government demanded that Twitter block the accounts of hundreds of activists, journalists, and politicians, accusing them of spreading misinformation. Twitter complied with a majority of the orders, but refused some, citing freedom of expression. Following this standoff, many Cabinet Ministers such as Piyush Goyal and various government officials moved to Koo and urged supporters to follow. This led to a surge in Koo's user base. In April 2021, Ravi Shankar Prasad became the first minister with 2.5 million followers on Koo.
Koo in Nigeria
Koo was the go-to alternative to Twitter in Nigeria after the country indefinitely banned Twitter for deleting a tweet by Nigerian President Muhammadu Buhari. The tweet had threatened a crackdown on regional separatists "in the language they understand". Twitter claimed the post was in violation of Twitter rules, but gave no further details. Twitter was official |
https://en.wikipedia.org/wiki/Window%20detector | A window detector circuit, also called window comparator circuit or dual edge limit detector circuits is used to determine whether an unknown input is between two precise reference threshold voltages. It employs two comparators to detect over-voltage or under-voltage.
Each comparator compares the common input voltage against one of two reference voltages, normally the upper and lower limits. Feeding both comparator outputs into a logic gate such as AND indicates whether the input lies within the so-called "window" between the upper and lower references.
Window detectors are used in industrial alarms, level sensors and controls, digital computers and production-line testing.
Function
If Uin is greater than Urefbot and lower than Ureftop, then both comparators' outputs swing to logical high and turn on the AND gate output.
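The two-comparator-plus-AND behaviour can be modelled in a few lines. This is a behavioural sketch, not a circuit simulation; the threshold names mirror the text:

```python
def comparator(u_plus, u_minus):
    # ideal comparator: logic high when the + input exceeds the - input
    return u_plus > u_minus

def window_detector(u_in, u_ref_bot, u_ref_top):
    # lower comparator checks Uin > Urefbot, upper checks Uin < Ureftop;
    # an AND gate combines the two outputs into the in-window flag
    return comparator(u_in, u_ref_bot) and comparator(u_ref_top, u_in)

print(window_detector(2.5, 1.0, 4.0))  # True: inside the window
print(window_detector(0.5, 1.0, 4.0))  # False: under-voltage
print(window_detector(4.5, 1.0, 4.0))  # False: over-voltage
```

Real comparators add hysteresis and finite response time, which this idealised model ignores.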
See also
Comparator
Operational amplifier
555 timer IC |
https://en.wikipedia.org/wiki/Savage%20Pond | Savage Pond is an action pond simulation game which was written by Peter Judd for the Acorn Electron and BBC Micro, and by Gwyll Jones for the 16k versions of the Atari 8-bit family of home computers in 1983 and the Commodore 64 in 1984. It was originally released under the Starcade label and was reissued in 1985 when Argus acquired the Bug-Byte budget label.
Overview
The game is set in a pond with the player taking the role of a tadpole. The aim of the game is to build up a colony of frogs while avoiding the many hazards. The setting and characters are all quite true to life, which was unusual at the time. Most contemporary arcade games, even if not set in space, such as Frogger (with frogs that cannot swim) and Centipede (essentially a space shoot 'em up with characters that look like insects), were far from realistic. The instructions include descriptions of all the 'cast', including their Latin names and information not relevant to the game itself. Although it may appear to be an educational game, it is actually a fast-paced arcade game.
Gameplay
The game begins with Colony 1, which is a simple, peaceful pond. The tadpole character can swim around the pond eating amoeba. The only hazards in the pond are the hydra clinging to the bottom, which will sting and kill the tadpole on contact. There is also a dragonfly that occasionally flies over the pond and drops an egg. The egg can be eaten but if left to hatch, the larva will escape (but again can be eaten) and return as a nymph which will chase the tadpole until it catches and eats it, or becomes exhausted and chrysalises to become another dragonfly. In order to build the frog colony, many 'evolutions' must take place, most of which increase the number of hazards. Blood worms regularly fall into the water and must be collected. After five worms are eaten, a beetle larva appears. If this is eaten quickly, the pond 'evolves'. In Colony 1 this means the introduction of jellyfish (similar to the |
https://en.wikipedia.org/wiki/Veterinary%20virology | Veterinary virology is the study of viruses in non-human animals. It is an important branch of veterinary medicine.
Rhabdoviruses
Rhabdoviruses are a diverse family of single-stranded, negative-sense RNA viruses that infect a wide range of hosts, from plants and insects to fish and mammals. The Rhabdoviridae family consists of six genera, two of which, cytorhabdoviruses and nucleorhabdoviruses, only infect plants. Novirhabdoviruses infect fish, and vesiculoviruses, lyssaviruses and ephemeroviruses infect mammals, fish and invertebrates. The family includes pathogens such as rabies virus, vesicular stomatitis virus and potato yellow dwarf virus that are of public health, veterinary, and agricultural significance.
Foot-and-mouth disease virus
Foot-and-mouth disease virus (FMDV) is a member of the Aphthovirus genus in the Picornaviridae family and is the cause of foot-and-mouth disease in pigs, cattle, sheep and goats. It is a non-enveloped, positive-strand RNA virus. FMDV is a highly contagious virus. It enters the body through inhalation.
Pestiviruses
Pestiviruses have single-stranded, positive-sense RNA genomes. They cause classical swine fever (CSF) and bovine viral diarrhea (BVD). Mucosal disease is a distinct, chronic persistent infection, whereas BVD is an acute infection.
Arteriviruses
Arteriviruses are small, enveloped, animal viruses with an icosahedral core containing a positive-sense RNA genome. The family includes equine arteritis virus (EAV), porcine reproductive and respiratory syndrome virus (PRRSV), lactate dehydrogenase elevating virus (LDV) of mice and simian haemorrhagic fever virus (SHFV).
Coronaviruses
Coronaviruses are enveloped viruses with a positive-sense RNA genome and with a nucleocapsid of helical symmetry. They infect the upper respiratory and gastrointestinal tract of mammals and birds. They are the cause of a wide range of diseases in cats, dogs, pigs, rodents, cattle and humans. Transmission is by the faecal-oral route.
Toroviruses
|
https://en.wikipedia.org/wiki/Cactus%20graph | In graph theory, a cactus (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for nontrivial cacti) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle.
Properties
Cacti are outerplanar graphs. Every pseudotree is a cactus. A nontrivial graph is a cactus if and only if every block is either a simple cycle or a single edge.
The family of graphs in which each component is a cactus is downwardly closed under graph minor operations. This graph family may be characterized by a single forbidden minor, the four-vertex diamond graph formed by removing an edge from the complete graph K4.
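The defining condition, that no two simple cycles share an edge, can be tested directly: build any spanning tree and check that the fundamental cycles of the non-tree edges are pairwise edge-disjoint. The sketch below assumes a connected simple graph given as a dict of neighbour sets (an illustrative interface, not from the article):

```python
def is_cactus(adj):
    """True iff the connected simple graph is a cactus (no two simple
    cycles share an edge).  adj: {vertex: set of neighbours}.
    Method: the fundamental cycles of a spanning tree must be
    pairwise edge-disjoint; we mark the tree edges each one uses."""
    start = next(iter(adj))
    parent, depth = {start: None}, {start: 0}
    stack = [start]
    while stack:                       # build a spanning tree
        u = stack.pop()
        for w in adj[u]:
            if w not in depth:
                parent[w], depth[w] = u, depth[u] + 1
                stack.append(w)
    used, seen = set(), set()          # cycle-marked tree edges / visited edges
    for u in adj:
        for w in adj[u]:
            e = frozenset((u, w))
            if e in seen:
                continue
            seen.add(e)
            if parent.get(w) == u or parent.get(u) == w:
                continue               # tree edge, not a cycle closer
            a, b = u, w                # walk the fundamental cycle of (u, w)
            cycle = []
            while depth[a] > depth[b]:
                cycle.append(frozenset((a, parent[a]))); a = parent[a]
            while depth[b] > depth[a]:
                cycle.append(frozenset((b, parent[b]))); b = parent[b]
            while a != b:              # climb to the lowest common ancestor
                cycle.append(frozenset((a, parent[a]))); a = parent[a]
                cycle.append(frozenset((b, parent[b]))); b = parent[b]
            for te in cycle:
                if te in used:         # a tree edge shared by two cycles
                    return False
                used.add(te)
    return True

# bowtie: two triangles sharing one vertex (a friendship graph) -- a cactus
bowtie = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
# diamond: K4 minus an edge, the forbidden minor -- two triangles share an edge
diamond = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(is_cactus(bowtie))   # True
print(is_cactus(diamond))  # False
```

The diamond example ties back to the forbidden-minor characterization: it is precisely the smallest graph in which two simple cycles share an edge.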
Triangular cactus
A triangular cactus is a special type of cactus graph such that each cycle has length three and each edge belongs to a cycle. For instance, the friendship graphs, graphs formed from a collection of triangles joined together at a single shared vertex, are triangular cacti. As well as being cactus graphs the triangular cacti are also block graphs and locally linear graphs.
Triangular cactuses have the property that they remain connected if any matching is removed from them; for a given number of vertices, they have the fewest possible edges with this property. Every tree with an odd number of vertices may be augmented to a triangular cactus by adding edges to it, giving a minimal augmentation with the property of remaining connected after the removal of a matching.
The largest triangular cactus in any graph may be found in polynomial time using an algorithm for the matroid parity problem. Since triangular cactus graphs are planar graphs, the largest triangular cactus can be used as an approximation to the largest planar subgraph, an important subproblem in planarization. As an approximation algorithm, this method has approximation ratio 4/9, the best known for the maximum p |
https://en.wikipedia.org/wiki/Iterator%20pattern | In object-oriented programming, the iterator pattern is a design pattern in which an iterator is used to traverse a container and access the container's elements. The iterator pattern decouples algorithms from containers; in some cases, algorithms are necessarily container-specific and thus cannot be decoupled.
For example, the hypothetical algorithm SearchForElement can be implemented generally using a specified type of iterator rather than implementing it as a container-specific algorithm. This allows SearchForElement to be used on any container that supports the required type of iterator.
Overview
The Iterator design pattern is one of the twenty-three well-known GoF design patterns that describe how to solve recurring design problems in order to design flexible and reusable object-oriented software, that is, objects that are easier to implement, change, test, and reuse.
What problems can the Iterator design pattern solve?
The elements of an aggregate object should be accessed and traversed without exposing its representation (data structures).
New traversal operations should be defined for an aggregate object without changing its interface.
Defining access and traversal operations in the aggregate interface is inflexible because it commits the aggregate to particular access and traversal operations and makes it impossible to add new operations later without changing the aggregate interface.
What solution does the Iterator design pattern describe?
Define a separate (iterator) object that encapsulates accessing and traversing an aggregate object.
Clients use an iterator to access and traverse an aggregate without knowing its representation (data structures).
Different iterators can be used to access and traverse an aggregate in different ways.
New access and traversal operations can be defined independently by defining new iterators.
See also the UML class and sequence diagram below.
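As an illustrative sketch (the class and item names here are invented, not from the GoF text), an aggregate can expose traversal only through iterator objects, and a generic algorithm such as the SearchForElement mentioned earlier needs nothing but iteration:

```python
class NameCollection:
    """Aggregate whose representation (a list) stays hidden;
    clients traverse it only through iterators."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def __iter__(self):
        # default forward iterator over the hidden storage
        return iter(self._items)

    def reverse_iter(self):
        # an alternative traversal, added without changing clients
        return reversed(self._items)

def search_for_element(iterable, target):
    # container-agnostic: relies on iteration alone, not the
    # aggregate's internal representation
    return any(item == target for item in iterable)

names = NameCollection()
for n in ("Ada", "Grace", "Edsger"):
    names.add(n)
print(search_for_element(names, "Grace"))   # True
print(list(names.reverse_iter()))           # ['Edsger', 'Grace', 'Ada']
```

Swapping the internal list for another structure would not affect `search_for_element`, which is exactly the decoupling the pattern aims for.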
Definition
The essence of the Iterator Pattern is to "Provide a w |
https://en.wikipedia.org/wiki/Oceans%20Act%20of%202000 | The Oceans Act of 2000 established the United States Commission on Ocean Policy, a working group tasked with the development of what would be known as the National Oceans Report.
The objective of the report is to promote the following:
Protection of life and property;
Stewardship of ocean and coastal resources;
Protection of marine environment and prevention of marine pollution;
Enhancement of maritime commerce;
Expansion of human knowledge of the marine environment;
Investments in technologies to promote energy and food security;
Close cooperation among government agencies; and
U.S. leadership in ocean and coastal activities.
Responses from the executive branch to the commission's report are listed in a National Ocean Policy, sent to the legislative branch.
The act was passed by the United States Congress on July 25, 2000 and signed by the President a fortnight later.
The Commission
Has 16 members
U.S. House and U.S. Senate Majority nominate 8 people each and the U.S. President appoints 4 from each list
U.S. House and U.S. Senate Minority nominates 4 people each and the U.S. President appoints 2 from each
4 people self-determined by U.S. President
Chair: supervises commission staff and regulates funding.
Members must be "balanced by area of expertise and balanced geographically".
To be eligible, members must be "Representatives, knowledgeable in ocean and coastal activities, from state and local governments, ocean-related industries, academic and technical institutions, and public interest organizations involved with scientific, regulatory, economic, and environmental ocean and coastal activities." (https://web.archive.org/web/20060207190735/http://www.oceancommission.gov/documents/oceanact.html)
The Commission's report is required to include the following, as relevant to U.S. ocean and coastal activities:
an assessment of facilities (people, vessels, computers, satellites)
a review of federal activities
a review of the cumulative effect of |
https://en.wikipedia.org/wiki/Beta%20function%20%28physics%29 | In theoretical physics, specifically quantum field theory, a beta function, β(g), encodes the dependence of a coupling parameter, g, on the energy scale, μ, of a given physical process described by quantum field theory.
It is defined as β(g) = μ ∂g/∂μ = ∂g/∂ ln μ,
and, because of the underlying renormalization group, it has no explicit dependence on μ, so it only depends on μ implicitly through g.
This dependence on the energy scale thus specified is known as the running of the coupling parameter, a fundamental
feature of scale-dependence in quantum field theory, and its explicit computation is achievable through a variety of mathematical techniques.
Scale invariance
If the beta functions of a quantum field theory vanish, usually at particular values of the coupling parameters, then the theory is said to be scale-invariant. Almost all scale-invariant QFTs are also conformally invariant. The study of such theories is conformal field theory.
The coupling parameters of a quantum field theory can run even if the corresponding classical field theory is scale-invariant. In this case, the non-zero beta function tells us that the classical scale invariance is anomalous.
Examples
Beta functions are usually computed in some kind of approximation scheme. An example is perturbation theory, where one assumes that the coupling parameters are small. One can then make an expansion in powers of the coupling parameters and truncate the higher-order terms (also known as higher loop contributions, due to the number of loops in the corresponding Feynman graphs).
Here are some examples of beta functions computed in perturbation theory:
Quantum electrodynamics
The one-loop beta function in quantum electrodynamics (QED) is β(e) = e³/(12π²), or, equivalently, β(α) = 2α²/(3π),
written in terms of the fine structure constant α = e²/(4π) in natural units.
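Assuming the textbook one-loop QED value β(α) = 2α²/(3π) (quoted here as an assumption for illustration), the running of the coupling follows by direct integration of the renormalization-group equation:

```latex
% One-loop running of the QED coupling
\frac{d\alpha}{d\ln\mu} = \frac{2\alpha^{2}}{3\pi}
\quad\Longrightarrow\quad
\frac{d}{d\ln\mu}\!\left(\frac{1}{\alpha}\right) = -\frac{2}{3\pi}
\quad\Longrightarrow\quad
\frac{1}{\alpha(\mu)} = \frac{1}{\alpha(\mu_{0})} - \frac{2}{3\pi}\,\ln\frac{\mu}{\mu_{0}}
```

The solution makes the qualitative behaviour explicit: 1/α decreases linearly in ln μ, so α grows with energy and formally diverges at μ = μ₀ exp(3π / (2α(μ₀))), the Landau pole.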
This beta function tells us that the coupling increases with increasing energy scale, and QED becomes strongly coupled at high energy. In fact, the coupling apparently becomes infinite at some fin |
https://en.wikipedia.org/wiki/Thermoanaerobacterales%20Family%20IV | Thermoanaerobacterales Family IV is a family of rod-shaped, motile bacteria in the order Thermoanaerobacterales of the phylum Bacillota. These microorganisms are spore-forming, anaerobic and moderately thermophilic (growing at optimal temperatures about 50°C) and stain Gram-positive. Because of uncertainty in the taxonomic assignment of its members, this family is referred to as "Incertae sedis". |
https://en.wikipedia.org/wiki/555%20%28number%29 | 555 (five hundred [and] fifty-five) is the natural number following 554 and preceding 556.
In mathematics
555 is a sphenic number. In base 10, it is a repdigit, and because it is divisible by the sum of its digits, it is a Harshad number. It is also a Harshad number in binary, base 11, base 13 and hexadecimal.
It is the sum of the first triplet of three-digit permutable primes in decimal:
113 + 131 + 311 = 555.
It is the twenty-sixth number such that its Euler totient, φ(555) = 288, is equal to the totient of its sum of divisors: φ(σ(555)) = φ(912) = 288.
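The totient and Harshad claims above are easy to verify by brute force (an illustrative sketch):

```python
def totient(n):
    # Euler's totient via trial-division factorisation
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def sigma(n):
    # sum of all positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

print(totient(555))                 # 288
print(sigma(555))                   # 912
print(totient(sigma(555)))          # 288, matching totient(555)
digit_sum = sum(int(d) for d in str(555))
print(555 % digit_sum == 0)         # True: 555 is a Harshad number
```

The same two helpers suffice to scan for the other numbers with the φ(n) = φ(σ(n)) property, though the naive `sigma` is quadratic and only suited to small n.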
Telephone numbers
The NANP reserves telephone numbers in many dialing areas in the 555 local block for fictional purposes, such as 1-308-555-3485. |
https://en.wikipedia.org/wiki/Quantum%20mechanical%20scattering%20of%20photon%20and%20nucleus | In pair production, a photon creates an electron positron pair. In the process of photons scattering in air (e.g. in lightning discharges), the most important interaction is the scattering of photons at the nuclei of atoms or molecules. The full quantum mechanical process of pair production can be described by the quadruply differential cross section given here:
with
This expression can be derived by using a quantum mechanical symmetry between pair production and Bremsstrahlung. Here Z is the atomic number, α the fine-structure constant, ℏ the reduced Planck constant and c the speed of light. The kinetic energies of the positron and electron relate to their total energies and momenta via
Conservation of energy yields
The momentum of the virtual photon between incident photon and nucleus is:
where the directions are given via:
where is the momentum of the incident photon.
In order to analyse the relation between the photon energy and the emission angle between photon and positron, Köhn and Ebert integrated the quadruply differential cross section over and . The double differential cross section is:
with
and
This cross section can be applied in Monte Carlo simulations. An analysis of this expression shows that positrons are mainly emitted in the direction of the incident photon. |
https://en.wikipedia.org/wiki/Hydrozoning | Hydrozoning is the practice of clustering together plants with similar water requirements in an effort to conserve water. Grouping plants into hydrozones is an approach to irrigation and garden design where plants with similar water needs are grouped together. Through the practice of hydrozoning, it is possible to customize irrigation schedules for each area’s needs, improving efficiency and avoiding overwatering and underwatering certain plants and grasses.
Plantings farther from the water source generally require less water. For example, drought-tolerant plants such as sage or cactus would not be planted in a bluegrass lawn but would be separated, since bluegrass has a higher water requirement.
The principal hydrozone is found in local parks and gathering places, such as urban plazas and spaces around well-used public buildings. Mixing plants with different water needs can result in over-watering of water-thrifty plants or under-watering of plants requiring regular moisture.
See also
Xeriscape |
https://en.wikipedia.org/wiki/Yevgeny%20Krinov | Yevgeny Leonidovich Krinov () (3 March 1906 – 2 January 1984), D.G.S., was a Soviet Russian astronomer and geologist, born in Otyassy () village in the Morshansky District of the Tambov Governorate of the Russian Empire. Krinov was a renowned meteorite researcher; the mineral Krinovite, discovered in 1966, was named after him.
Scientific work
From 1926 through 1930 Yevgeny Krinov worked in the meteor division of the Mineralogy Museum of the Soviet Academy of Sciences. During this period he conducted research into the Tunguska event under the supervision of Leonid Kulik. Krinov took part in the longest expedition to the Tunguska site in the years 1929–1930 as an astronomer. The data that was gathered during this expedition became the basis for his 1949 monograph (in Russian) called The Tunguska Meteorite.
In 1975, Yevgeny Krinov ordered the burning of 1500 negatives from a 1938 expedition by Leonid Kulik to the Tunguska event as part of an effort to dispose of hazardous nitrate film. Positive imprints were preserved for further studies in the Russian city of Tomsk.
Science awards
1961 - Doctor honoris causa awarded by Soviet Academy of Sciences
1971 - Leonard Medal
Legacy
A minor planet, 2887 Krinov, discovered in 1977 by Soviet astronomer Nikolai Stepanovich Chernykh, is named after him.
Selected bibliography
1947 Spectral Reflective Capacity of Natural Formations
1949 The Tunguska Meteorite (Russian)
1952 Fundamentals of Meteoritics
1959 Sikhote-Alin Iron Meteorite Shower, Vol. I (Russian)
1963 Sikhote-Alin Iron Meteorite Shower, Vol. II (Russian)
1966 Giant Meteorites |
https://en.wikipedia.org/wiki/Strong%20positional%20game | A strong positional game (also called Maker-Maker game) is a kind of positional game. Like most positional games, it is described by its set of positions () and its family of winning-sets (- a family of subsets of ). It is played by two players, called First and Second, who alternately take previously-untaken positions.
In a strong positional game, the winner is the first player who holds all the elements of a winning-set. If all positions are taken and no player wins, then it is a draw. Classic Tic-tac-toe is an example of a strong positional game.
First player advantage
In a strong positional game, Second cannot have a winning strategy. This can be proved by a strategy-stealing argument: if Second had a winning strategy, then First could have stolen it and win too, but this is impossible since there is only one winner. Therefore, for every strong-positional game there are only two options: either First has a winning strategy, or Second has a drawing strategy.
An interesting corollary is that, if a certain game does not have draw positions, then First always has a winning strategy.
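For small games these statements can be checked exhaustively. The sketch below (plain minimax with memoisation; an illustrative program, not from the article) evaluates classic Tic-tac-toe and finds game value 0, i.e. First cannot force a win and Second has a drawing strategy:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board):
    """Game value under optimal play: +1 X (First) wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, s in enumerate(board) if s == '.']
    if not moves:
        return 0                    # board full, no winner: draw
    # X moves when both players have placed equally many marks
    player = 'X' if board.count('X') == board.count('O') else 'O'
    children = [value(board[:i] + player + board[i + 1:]) for i in moves]
    return max(children) if player == 'X' else min(children)

print(value('.' * 9))  # 0: with perfect play, Tic-tac-toe is a draw
```

The memoised search visits only a few thousand distinct positions, so the full game tree is evaluated in well under a second.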
Comparison to Maker-Breaker game
Every strong positional game has a variant that is a Maker-Breaker game. In that variant, only the first player ("Maker") can win by holding a winning-set. The second player ("Breaker") can win only by preventing Maker from holding a winning-set.
For fixed X and F, the strong-positional variant is strictly harder for the first player, since in it, he needs to both "attack" (try to get a winning-set) and "defend" (prevent the second player from getting one), while in the maker-breaker variant, the first player can focus only on "attack". Hence, every winning-strategy of First in a strong-positional game is also a winning-strategy of Maker in the corresponding maker-breaker game. The opposite is not true. For example, in the maker-breaker variant of Tic-Tac-Toe, Maker has a winning strategy, but in its strong-positional (classic) variant, Second ha |
https://en.wikipedia.org/wiki/HP%20Neoview | HP Neoview was a data warehouse and business intelligence computer server line based on the Hewlett Packard NonStop line. It acted as a database server, providing NonStop OS and NonStop SQL, but lacked the transaction processing functionality of the original NonStop systems.
The line was retired, and no longer marketed, as of January 24, 2011. |
https://en.wikipedia.org/wiki/S180 | S180 is a murine Sarcoma cancer cell line. It has been commonly used in cancer research due to its rapid growth and proliferation in mice. The cell line was initially harvested from a soft tissue tumor in a Swiss mouse. |
https://en.wikipedia.org/wiki/Universal%20Plug%20and%20Play | Universal Plug and Play (UPnP) is a set of networking protocols on the Internet Protocol (IP) that permits networked devices, such as personal computers, printers, Internet gateways, Wi-Fi access points and mobile devices, to seamlessly discover each other's presence on the network and establish functional network services. UPnP is intended primarily for residential networks without enterprise-class devices.
UPnP assumes the network runs IP and then leverages HTTP, on top of IP, in order to provide device/service description, actions, data transfer and event notification. Device search requests and advertisements are supported by running HTTP on top of UDP (port 1900) using multicast (known as HTTPMU). Responses to search requests are also sent over UDP, but are instead sent using unicast (known as HTTPU).
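The discovery step described above can be sketched with a plain UDP socket. This is an illustrative sketch only: the multicast address, port, and header names come from the SSDP portion of the UPnP specification, while timeout handling and network-interface selection are simplified:

```python
import socket

# SSDP M-SEARCH request (HTTP over multicast UDP, "HTTPMU"); the
# address 239.255.255.250:1900 and header names follow the UPnP spec
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",               # seconds a device may delay its reply
    "ST: ssdp:all",        # search target: every device and service
    "", ""])

def discover(timeout=3.0):
    """Send one discovery request, then collect the unicast ("HTTPU")
    replies that arrive before the timeout expires."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))
    replies = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            replies.append((addr, data.decode(errors="replace")))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return replies
```

Each reply is an HTTP-style response whose `LOCATION` header points at the device's XML description document, the next stage of the UPnP protocol stack.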
Conceptually, UPnP extends plug and play—a technology for dynamically attaching devices directly to a computer—to zero-configuration networking for residential and SOHO wireless networks. UPnP devices are plug and play in that, when connected to a network, they automatically establish working configurations with other devices, removing the need for users to manually configure and add devices through IP addresses.
UPnP is generally regarded as unsuitable for deployment in business settings for reasons of economy, complexity, and consistency: the multicast foundation makes it chatty, consuming too many network resources on networks with a large population of devices; the simplified access controls do not map well to complex environments; and it does not provide a uniform configuration syntax such as the CLI environments of Cisco IOS or JUNOS.
Overview
The UPnP architecture allows device-to-device networking of consumer electronics, mobile devices, personal computers, and networked home appliances. It is a distributed, open architecture protocol based on established standards such as the Internet Protocol Suite (TCP/IP), HTTP, XML, and SOAP. UPnP control points ( |
https://en.wikipedia.org/wiki/Photoconductive%20atomic%20force%20microscopy | Photoconductive atomic force microscopy (PC-AFM) is a variant of atomic force microscopy that measures photoconductivity in addition to surface forces.
Background
Multi-layer photovoltaic cells have gained popularity since the mid-1980s. At the time, research was primarily focused on single-layer photovoltaic (PV) devices between two electrodes, in which PV properties rely heavily on the nature of the electrodes. In addition, single-layer PV devices notoriously have a poor fill factor, a property largely attributed to the resistance characteristic of the organic layer. The fundamentals of pc-AFM are modifications to traditional AFM that focus on the use of pc-AFM in PV characterization. In pc-AFM the major modifications include a second illumination laser, an inverted microscope and a neutral density filter. These components assist in the precise alignment of the illumination laser and the AFM tip within the sample. Such modifications must complement the existing principles and instrumental modules of pc-AFM so as to minimize the effect of mechanical noise and other interferences on the cantilever and sample.
The original exploration of the PV effect can be credited to research published by Henri Becquerel in 1839. Becquerel noticed the generation of a photocurrent after illumination when he submerged platinum electrodes within an aqueous solution of either silver chloride or silver bromide. In the early 20th century, Pochettino and Volmer studied the first organic compound, anthracene, in which photoconductivity was observed. Anthracene was heavily studied due to its known crystal structure and its commercial availability in high-purity single anthracene crystals. The studies of photoconductive properties of organic dyes such as methylene blue were initiated only in the early 1960s owing to the discovery of the PV effect in these dyes. In further studies, it was determined that important biological molecules such as chlorophylls, carotenes, other porphyri
https://en.wikipedia.org/wiki/Jordan%27s%20totient%20function | In number theory, Jordan's totient function, denoted as , where is a positive integer, is a function of a positive integer, , that equals the number of -tuples of positive integers that are less than or equal to and that together with form a coprime set of integers
Jordan's totient function is a generalization of Euler's totient function, which is the same as J_1. The function is named after Camille Jordan.
Definition
For each positive integer k, Jordan's totient function J_k is multiplicative and may be evaluated as
J_k(n) = n^k ∏_{p|n} (1 − 1/p^k), where p ranges through the prime divisors of n.
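This product formula translates directly into code. The sketch below uses exact integer arithmetic, rewriting the product as J_k(n) = n^k · ∏ (p^k − 1)/p^k over the distinct prime factors of n, found here by simple trial division:

```python
def jordan_totient(n: int, k: int) -> int:
    """J_k(n) = n^k * prod over prime divisors p of n of (1 - 1/p^k)."""
    if n < 1 or k < 1:
        raise ValueError("n and k must be positive integers")
    # Collect the distinct prime divisors of n by trial division.
    primes = set()
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            primes.add(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        primes.add(m)
    # n^k * prod (1 - 1/p^k), kept exact: divide by p^k before multiplying.
    result = n ** k
    for p in primes:
        result = result // p**k * (p**k - 1)
    return result
```

For k = 1 this reduces to Euler's totient: for example, jordan_totient(10, 1) gives 4, while jordan_totient(10, 2) gives 100 · (3/4) · (24/25) = 72.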
Properties
∑_{d|n} J_k(d) = n^k,
which may be written in the language of Dirichlet convolutions as
J_k ∗ 1 = Id_k
and via Möbius inversion as
J_k = μ ∗ Id_k.
Since the Dirichlet generating function of μ is 1/ζ(s) and the Dirichlet generating function of Id_k (that is, n^k) is ζ(s − k), the Dirichlet series for J_k becomes
∑_{n≥1} J_k(n)/n^s = ζ(s − k)/ζ(s).
An average order of J_k(n) is
n^k / ζ(k + 1).
The Dedekind psi function is
ψ(n) = J_2(n) / J_1(n),
and by inspection of the definition (recognizing that each factor in the product over the primes is a cyclotomic polynomial of p^{−k}), the arithmetic functions defined by J_k(n)/J_1(n) or J_{2k}(n)/J_k(n) can also be shown to be integer-valued multiplicative functions. Moreover,
∑_{δ|n} δ^s J_r(δ) J_s(n/δ) = J_{r+s}(n).
Order of matrix groups
The general linear group of matrices of order m over Z_n has order
|GL(m, Z_n)| = n^{m(m−1)/2} ∏_{k=1}^{m} J_k(n).
The special linear group of matrices of order m over Z_n has order
|SL(m, Z_n)| = n^{m(m−1)/2} ∏_{k=2}^{m} J_k(n).
The symplectic group of matrices of order 2m over Z_n has order
|Sp(2m, Z_n)| = n^{m^2} ∏_{k=1}^{m} J_{2k}(n).
The first two formulas were discovered by Jordan.
Examples
Explicit lists in the OEIS are J2 in , J3 in , J4 in , J5 in , J6 up to J10 in up to .
Multiplicative functions defined by ratios are J2(n)/J1(n) in , J3(n)/J1(n) in , J4(n)/J1(n) in , J5(n)/J1(n) in , J6(n)/J1(n) in , J7(n)/J1(n) in , J8(n)/J1(n) in , J9(n)/J1(n) in , J10(n)/J1(n) in , J11(n)/J1(n) in .
Examples of the ratios J2k(n)/Jk(n) are J4(n)/J2(n) in , J6(n)/J3(n) in , and J8(n)/J4(n) in .
Notes |
https://en.wikipedia.org/wiki/Derby%27s%20dose | Derby's dose was a form of torture used in Jamaica to punish slaves who attempted to escape or committed other offenses like stealing food on plantations that were owned or run by Thomas Thistlewood. According to Malcolm Gladwell in his 2008 book Outliers, (Thomas Thistlewood wrote about his outlandish behaviour and disturbing treatment of Jamaican slaves extensively in his 14,000 page diary) "The runaway would be beaten, and salt pickle, lime juice, and bird pepper would be rubbed into his or her open wounds. Another slave would defecate into the mouth of the miscreant [sic], who would then be gagged, with their mouth full, for four to five hours." The punishment was invented by Thomas Thistlewood, a slave overseer, and named after the slave, Derby, who was made to undergo this punishment when he was caught eating young sugar cane stalks in the field on 25 May 1756. However, historian Douglas Hall points out that "Derby's dose" was so-called because it was often administered by one of his slaves called Derby.
Thistlewood recorded this punishment as well as a further punishment of Derby in August of that same year in his diary.
On 18 November 2013 British television host Martin Bashir discredited a comparison made by U.S. politician Sarah Palin between the United States' debt to China and slavery by referring to Derby's dose. In pointing out how cruel and barbaric slavery was, Bashir used Derby's dose as an example; at the end of the segment, he finished by saying that "if anyone truly qualified for a dose of discipline from Thomas Thistlewood, [Palin] would be the outstanding candidate". He was criticized for this comment, and ultimately resigned. |
https://en.wikipedia.org/wiki/Petersen%20graph | In the mathematical field of graph theory, the Petersen graph is an undirected graph with 10 vertices and 15 edges. It is a small graph that serves as a useful example and counterexample for many problems in graph theory. The Petersen graph is named after Julius Petersen, who in 1898 constructed it to be the smallest bridgeless cubic graph with no three-edge-coloring.
Although the graph is generally credited to Petersen, it had in fact first appeared 12 years earlier, in a paper by . Kempe observed that its vertices can represent the ten lines of the Desargues configuration, and its edges represent pairs of lines that do not meet at one of the ten points of the configuration.
Donald Knuth states that the Petersen graph is "a remarkable configuration that serves as a counterexample to many optimistic predictions about what might be true for graphs in general."
The Petersen graph also makes an appearance in tropical geometry. The cone over the Petersen graph is naturally identified with the moduli space of five-pointed rational tropical curves.
Constructions
The Petersen graph is the complement of the line graph of K_5. It is also the Kneser graph KG(5,2); this means that it has one vertex for each 2-element subset of a 5-element set, and two vertices are connected by an edge if and only if the corresponding 2-element subsets are disjoint from each other. As a Kneser graph of the form KG(2k+1, k) it is an example of an odd graph.
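The Kneser-graph description above translates into a short construction. The sketch below builds KG(n, k) generically and checks that KG(5, 2) has the Petersen graph's parameters (10 vertices, 15 edges, 3-regular):

```python
from itertools import combinations

def kneser_graph(n: int, k: int):
    """Vertices: k-element subsets of {0,...,n-1}; edges join disjoint subsets."""
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]
    return vertices, edges

# The Petersen graph is KG(5,2): C(5,2) = 10 vertices, and each 2-subset is
# disjoint from exactly C(3,2) = 3 others, giving 10*3/2 = 15 edges.
V, E = kneser_graph(5, 2)
degree = {v: sum(v in e for e in E) for v in V}
```

Here `len(V)` is 10, `len(E)` is 15, and every vertex has degree 3, confirming the graph is cubic.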
Geometrically, the Petersen graph is the graph formed by the vertices and edges of the hemi-dodecahedron, that is, a dodecahedron with opposite points, lines and faces identified together.
Embeddings
The Petersen graph is nonplanar. Any nonplanar graph has as minors either the complete graph K_5 or the complete bipartite graph K_{3,3}, but the Petersen graph has both as minors. The K_5 minor can be formed by contracting the edges of a perfect matching, for instance the five short edges in the first picture. The K_{3,3} minor can be formed by deleting one vertex (for instan
https://en.wikipedia.org/wiki/Contraposition | In logic and mathematics, contraposition refers to the inference of going from a conditional statement into its logically equivalent contrapositive, and an associated proof method known as proof by contraposition. The contrapositive of a statement has its antecedent and consequent inverted and flipped.
Conditional statement: P → Q. In formulas: the contrapositive of P → Q is ¬Q → ¬P.
If P, Then Q. — If not Q, Then not P. "If it is raining, then I wear my coat" — "If I don't wear my coat, then it isn't raining."
The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true.
The contrapositive (¬Q → ¬P) can be compared with three other statements:
Inversion (the inverse), "If it is not raining, then I don't wear my coat." Unlike the contrapositive, the inverse's truth value is not at all dependent on whether or not the original proposition was true, as evidenced here.
Conversion (the converse), "If I wear my coat, then it is raining." The converse is actually the contrapositive of the inverse, and so always has the same truth value as the inverse (which as stated earlier does not always share the same truth value as that of the original proposition).
Negation (the logical complement), "It is not the case that if it is raining then I wear my coat.", or equivalently, "Sometimes, when it is raining, I don't wear my coat. " If the negation is true, then the original proposition (and by extension the contrapositive) is false.
Note that if P → Q is true and one is given that Q is false (i.e., ¬Q), then it can logically be concluded that P must also be false (i.e., ¬P). This is often called the law of contrapositive, or the modus tollens rule of inference.
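Because the statements involved are simple Boolean formulas, the law of contraposition and modus tollens can be checked exhaustively over all truth assignments. The helper name `implies` below is an illustrative choice:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

for P, Q in product([False, True], repeat=2):
    conditional = implies(P, Q)
    contrapositive = implies(not Q, not P)
    # Law of contraposition: a conditional and its contrapositive always agree.
    assert conditional == contrapositive
    # Modus tollens: from (P -> Q) and not Q, conclude not P.
    if conditional and not Q:
        assert not P
```

The converse `implies(Q, P)` and inverse `implies(not P, not Q)` would fail the analogous equivalence check, e.g. at P = False, Q = True.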
Intuitive explanation
In the Euler diagram shown, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as: A → B.
It is also clear that anything that is not within B (the blue region) cannot be within A, either. This statement, which can be expressed as ¬B → ¬A,
is the contrapos |
https://en.wikipedia.org/wiki/Archaeosortase | An archaeosortase is a protein that occurs in the cell membranes of some archaea. Archaeosortases recognize and remove carboxyl-terminal protein sorting signals about 25 amino acids long from secreted proteins. A genome that encodes one archaeosortase may encode over fifty target proteins. The best characterized archaeosortase target is the Haloferax volcanii S-layer glycoprotein, an extensively modified protein with O-linked glycosylations, N-linked glycosylations, and a large prenyl-derived lipid modification toward the C-terminus. Knockout of the archaeosortase A (artA) gene, or permutation of the motif Pro-Gly-Phe (PGF) to Pro-Phe-Gly in the S-layer glycoprotein, blocks attachment of the lipid moiety as well as blocking removal of the PGF-CTERM protein-sorting domain. Thus archaeosortase appears to be a transpeptidase, like sortase, rather than a simple protease.
Archaeosortases are related to exosortases, their uncharacterized counterparts in Gram-negative bacteria. The names of both families of proteins reflect roles analogous to sortases in Gram-positive bacteria, with which they share no sequence homology. The sequences of archaeosortases and exosortases consist mostly of hydrophobic transmembrane helices, which sortases lack. Archaeosortases fall into a number of distinct subtypes, each responsible for recognizing sorting signals with a different signature motif. Archaeosortase A (ArtA) recognizes the PGF-CTERM signal, ArtB recognizes VPXXXP-CTERM, ArtC recognizes PEF-CTERM, and so on; one archaeal genome may encode two different archaeosortase systems.
Invariant residues shared by all archaeosortases and exosortases include a Cys and an Arg. Replacement of either destroys catalytic activity, suggesting convergent evolution of the active site with the sortases.
In the archaeal model species Haloferax volcanii, archaeosortase A belongs to a fairly large collection of identified membrane-associated proteases, but apparently also to the smaller set of i |
https://en.wikipedia.org/wiki/Alfred%20Barnard%20Basset | Alfred Barnard Basset FRS (25 July 1854 – 5 December 1930) was a British mathematician working on algebraic geometry, electrodynamics and hydrodynamics. In fluid dynamics, the Basset force—also known as the Boussinesq–Basset force—describes history effects on the force experienced by a body in unsteady motion (relative to a viscous fluid). He also worked on Bessel functions: the term Basset function was at one time used for modified Bessel functions of the second kind but is now obsolete.
Biography
Basset graduated B.A. from Trinity College, Cambridge, in 1877 as 13th wrangler and took his M.A. in 1881. He started his career in law, but soon abandoned it to continue his mathematical research. He was elected a fellow of the Royal Society in 1889.
Books |
https://en.wikipedia.org/wiki/Sagittaria%20natans | Sagittaria natans is a species of flowering plant in the water plantain family. It is native to northern Europe and Asia and often cultivated elsewhere as an aquatic ornamental in aquaria and artificial ponds. It is widespread across much of the Russian Federation and reported also from Finland, Sweden, Mongolia, Japan, Korea, Kazakhstan and China (Heilongjiang, Jilin, Liaoning, Nei Mongol, Xinjiang).
Sagittaria natans is an aquatic plant, growing in slow-moving and stagnant water bodies such as ponds and small streams. It has floating leaves that are linear, heart-shaped or arrow-shaped. |
https://en.wikipedia.org/wiki/Large%20Hadron%20Collider | The Large Hadron Collider (LHC) is the world's largest and highest-energy particle collider. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories across more than 100 countries. It lies in a tunnel in circumference and as deep as beneath the France–Switzerland border near Geneva.
The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.5 TeV per beam (13.0 TeV total collision energy). At the end of 2018, it was shut down for maintenance and further upgrades, reopening over three years later in April 2022.
The collider has four crossing points where the accelerated particles collide. Nine detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions, such as in lead–lead collisions and proton–lead collisions.
The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and studying other unresolved questions in particle physics.
Background
The term hadron refers to subatomic composite particles composed of quarks held together by the strong force (analogous to the way that atoms and molecules are held together by the electromagnetic force). The best-known hadrons are the baryons such as protons and neutrons; hadrons also include mesons such as the pion and kaon, which were discovered during cosmic ray experiments in the late 1940s and early 1950s.
A collider is a type of a particle acce |
https://en.wikipedia.org/wiki/Alexey%20Stakhov | Alexey Petrovich Stakhov ( ; May 7, 1939 – January 25, 2021) is a Ukrainian mathematician, inventor and engineer, who has made contributions to the theory of Fibonacci numbers and the "Golden Section" and their applications in computer science and measurement theory and technology. Doctor of computer science (1972), professor (1974). Author of over 500 publications, 14 books and 65 international patents.
Biography
Born May 7, 1939, in Partizany, Kherson region, Ukraine, USSR. In 1956 graduated with honours from Rivne village high school. Same year became a student of the Mining Faculty of the Kyiv Polytechnic Institute (now the National Technical University of Ukraine “Kyiv Polytechnic Institute”). In 1959, transferred to the Radio Engineering Faculty of Kharkiv Aviation Institute (now the National Aerospace University of Ukraine). After graduation, worked for two years as an engineer in the Kharkiv Electrical Instrument Design Bureau (now the space technology company “Khartron”). "Khartron" was one of the top secret space companies of the Soviet Union. It was engaged in the research, development and manufacture of automatic control systems for missiles and space craft on board systems. Through working there Stakhov obtained thorough practical engineering experience and published his first scientific papers. Later he worked at the universities of Russia and Ukraine (Kharkiv Institute of Radio Electronics, Taganrog Radio Engineering Institute, Vinnytsia Technical University, Vinnytsia Agricultural University, Vinnytsia Pedagogic University). He was a visiting professor at many universities abroad (Austria, Germany, Libya, Mozambique). Since 2004, he lived and worked in Canada.
Teaching, research, and work
Dean of the Faculty of Computer Engineering of the Kharkiv Institute of Radio Electronics (now, Kharkiv National University of Radio Electronics), 1968–1970
Head of the Department of Informational and Measuring Engineering, Taganrog Radio Engineering Institute |
https://en.wikipedia.org/wiki/Kinetoplast | A kinetoplast is a network of circular DNA (called kDNA) inside a mitochondrion that contains many copies of the mitochondrial genome. The most common kinetoplast structure is a disk, but they have been observed in other arrangements. Kinetoplasts are only found in Excavata of the class Kinetoplastida. The variation in the structures of kinetoplasts may reflect phylogenic relationships between kinetoplastids. A kinetoplast is usually adjacent to the organism's flagellar basal body, suggesting that it is bound to some components of the cytoskeleton. In Trypanosoma brucei this cytoskeletal connection is called the tripartite attachment complex and includes the protein p166.
Trypanosoma
In trypanosomes, a group of flagellated protozoans, the kinetoplast exists as a dense granule of DNA within the mitochondrion. Trypanosoma brucei, the parasite which causes African trypanosomiasis (African sleeping sickness), is an example of a trypanosome with a kinetoplast. Its kinetoplast is easily visible in samples stained with DAPI, a fluorescent DNA stain, or by the use of fluorescent in situ hybridization (FISH) with BrdU, a thymidine analogue.
Structure
The kinetoplast contains circular DNA in two forms, maxicircles and minicircles. Maxicircles are between 20 and 40 kb in size and there are a few dozen per kinetoplast. There are several thousand minicircles per kinetoplast, and they are between 0.5 and 1 kb in size. Maxicircles encode the typical protein products needed for the mitochondrion, but in an encrypted form. Herein lies the only known function of the minicircles: producing guide RNA (gRNA) to decode this encrypted maxicircle information, typically through the insertion or deletion of uridine residues. The network of maxicircles and minicircles is catenated to form a planar network that resembles chain mail. Reproduction of this network then requires that these rings be disconnected from the parental kinetoplast and subsequently reconnected in the daughter kinetoplast. This u
https://en.wikipedia.org/wiki/Voodoo%205 | The Voodoo 5 was the last and most powerful graphics card line that 3dfx Interactive released. All members of the family were based upon the VSA-100 graphics processor. Only the single-chip Voodoo 4 4500 and dual-chip Voodoo 5 5500 made it to market.
Architecture and performance
The VSA-100 graphics chip is a direct descendant of "Avenger", more commonly known as Voodoo3. It was built on a 250 nm semiconductor manufacturing process, as with Voodoo3. However, the process was tweaked with a sixth metal layer to allow for better density and speed, and the transistors have a slightly shorter gate length and thinner gate oxide. VSA-100 has a transistor count of roughly 14 million, compared to Voodoo3's ~8 million. The chip has a larger texture cache than its predecessors and the data paths are 32 bits wide rather than 16-bit. Rendering calculations are 40 bits wide in VSA-100 but the operands and results are stored as 32-bit.
One of the design goals for the VSA-100 was scalability. The name of the chip is an abbreviation for "Voodoo Scalable Architecture." By using one or more VSA-100 chips on a board, the various market segments for graphics cards are satisfied with just a single graphics chip design. Theoretically, anywhere from 1 to 32 VSA-100 GPUs could be run in parallel on a single graphics card, and the fillrate of the card would increase proportionally. On cards with more than one VSA-100, the chips are linked using 3dfx's Scan-Line Interleave (SLI) technology. A major drawback to this method of performance scaling is that various parts of hardware are needlessly duplicated on the cards and board complexity increases with each additional processor.
3dfx changed the rendering pipeline from one pixel pipeline with twin texture mapping units (Voodoo2/3) to a dual pixel pipeline design with one texture mapping unit on each. This design, commonly referred to as a 2×1 configuration, has an advantage over the prior 1×2 design with the ability to always output 2 pix |
https://en.wikipedia.org/wiki/Gravitational%20field | In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field excerted on another massive body. It has dimension of acceleration (L/T2) and it is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s2).
In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. It results from the spatial gradient of the gravitational potential field.
In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force.
Gravity is distinguished from other forces by its obedience to the equivalence principle.
Classical mechanics
In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, , at each point in space associated with t |
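The force-per-unit-mass definition can be illustrated numerically. The sketch below treats Earth as a point mass (an approximation) and uses standard values for G and Earth's mass and radius; the field magnitude at the surface comes out near the familiar 9.8 N/kg:

```python
# Newtonian gravitational field of a point mass: magnitude g = G*M / r^2,
# the force per unit mass on a test object at distance r.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EARTH = 6.371e6    # Earth's mean radius, m

def field_magnitude(mass: float, r: float) -> float:
    """Gravitational field strength (N/kg) at distance r from a point mass."""
    return G * mass / r**2

g_surface = field_magnitude(M_EARTH, R_EARTH)   # roughly 9.8 N/kg
```

Because the field falls off as 1/r², doubling the distance from the center reduces the field to a quarter of its surface value.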
https://en.wikipedia.org/wiki/Deep%20circumflex%20iliac%20artery | The deep circumflex iliac artery (or deep iliac circumflex artery) is an artery in the pelvis that travels along the iliac crest of the pelvic bone.
Course
The deep circumflex iliac artery arises from the lateral aspect of the external iliac artery nearly opposite the origin of the inferior epigastric artery.
It ascends obliquely and laterally, posterior to the inguinal ligament, contained in a fibrous sheath formed by the junction of the transversalis fascia and iliac fascia. It travels to the anterior superior iliac spine, where it anastomoses with the ascending branch of the lateral femoral circumflex artery.
It then pierces the transversalis fascia and passes medially along the inner lip of the crest of the ilium to a point where it perforates the transversus abdominis muscle. From there, it travels posteriorly between the transversus abdominis muscle and the internal oblique muscle to anastomose with the iliolumbar artery and the superior gluteal artery.
Opposite the anterior superior iliac spine of the ilium, it gives off a large ascending branch. This branch ascends between the internal oblique muscle and the transversus abdominis muscle, supplying them, and anastomosing with the lumbar arteries and inferior epigastric artery.
The deep circumflex artery serves as the primary blood supply to the anterior iliac crest bone flap.
Additional images |
https://en.wikipedia.org/wiki/HTR3C | 5-hydroxytryptamine receptor 3C is a protein that in humans is encoded by the HTR3C gene. The protein encoded by this gene is a subunit of the 5-HT3 receptor. |
https://en.wikipedia.org/wiki/SWAT%20model | SWAT (Soil & Water Assessment Tool) is a river basin scale model developed to quantify the impact of land management practices in large, complex watersheds. SWAT is a public domain software enabled model actively supported by the USDA Agricultural Research Service at the Blackland Research & Extension Center in Temple, Texas, USA. It is a hydrology model with the following components: weather, surface runoff, return flow, percolation, evapotranspiration, transmission losses, pond and reservoir storage, crop growth and irrigation, groundwater flow, reach routing, nutrient and pesticide loading, and water transfer. SWAT can be considered a watershed hydrological transport model. This model is used worldwide and is continuously under development. As of July 2012, more than 1000 peer-reviewed articles have been published that document its various applications.
Model operation
SWAT is a continuous time model that operates on a daily time step at basin scale. The objective of such a model is to predict the long-term impacts in large basins of management and also timing of agricultural practices within a year (i.e., crop rotations, planting and harvest dates, irrigation, fertilizer, and pesticide application rates and timing).
It can be used to simulate at the basin scale water and nutrient cycles in landscapes whose dominant land use is agriculture. It can also help in assessing the environmental efficiency of best management practices and alternative management policies. SWAT uses a two-level disaggregation scheme; a preliminary subbasin identification is carried out based on topographic criteria, followed by further discretization using land use and soil type considerations. Areas with the same soil type and land use form a Hydrologic Response Unit (HRU), a basic computational unit assumed to be homogeneous in hydrologic response to land cover change.
Interfaces
ArcSWAT Interface for ArcMap.
QSWAT Interface for QGIS.
See also
Storm Water Management Model
Stochasti |
https://en.wikipedia.org/wiki/Complex%20regional%20pain%20syndrome | Complex regional pain syndrome (CRPS Type 1 and Type 2) is a form of amplified musculoskeletal pain syndrome (AMPS) in which pain from a physical trauma outlasts the expected recovery time. The symptoms and causes of Type 1 and 2 are the same except Type 2 is caused by a nerve injury and is typically much more painful. This type of AMPS must include a specific cause and is often accompanied by various visible changes, such as skin changes. The lack of an observed cause for the condition, or the lack of visible symptoms, creates the diagnosis of diffuse amplified pain. CRPS is not a short-term pain that will heal in time. The most excruciating part is that the pain is long-term, and likely to be for life. In fact, CRPS is known as the world’s most painful incurable condition. Having it treated by Interventional Anesthesiologists who are trained in CRPS very early usually helps a great deal. It is also referred to as the Suicide Disease because of its intense incurable pain, among the highest recorded pain on the McGill Pain Scale.
Usually starting in a limb, CRPS manifests as pain, swelling, limited range of motion, and/or changes to the skin and bones. It may initially affect one limb and then spread throughout the body. 35% of affected people report symptoms throughout their whole bodies. Two types exist: CRPS Type 1, previously referred to as Reflex Sympathetic Dystrophy, and CRPS Type 2, previously referred to as causalgia. It is possible to have both types.
Classification
The classification system in use by the International Association for the Study of Pain (IASP) divides CRPS into two types. It is recognised that people may exhibit both types of CRPS.
Signs and symptoms
Clinical features of CRPS have been found to be inflammation resulting from the release of certain pro-inflammatory chemical signals from surrounding nerve cells; hypersensitization of pain receptors; dysfunction of local vasoconstriction and vasodilation; and maladaptive neuroplastici |
https://en.wikipedia.org/wiki/San%20Patricio%20State%20Forest | San Patricio State Forest (Spanish: Bosque Estatal de San Patricio), also known as the San Patricio Urban Forest (Spanish: Bosque Urbano de San Patricio) is one of the 20 forests that make up the public forest system of Puerto Rico. This is a secondary or second-growth forest is located in the Gobernador Piñero district of San Juan, between the neighborhoods of Villa Borinquen, Caparra Heights and Borinquen Towers complex. The urban forest has entrances on Roosevelt Avenue and Ensenada Street. The forest extends to almost 70 acres and it is the smallest protected area in the Puerto Rico state forest system. One of the most distinctive features of the forest is the mogote on its northern edge which can be observed from many parts of San Juan and Guaynabo. The forest is part of the Northern Karst.
History
The site of the forest was first developed for agriculture and cattle grazing and used to be a farm called Finca San Patricio. There used to be three mogotes on the site but two were destroyed to make way for the construction of the Villa Borinquen neighborhood and the Borinquen Towers complex. The third mogote still exists and it hosts a communications tower. The rest of the land was originally acquired by the US Army with the intention of developing residences for nearby Fort Buchanan. In 1962 the city of San Juan was experiencing a population boom and urban sprawl destroyed much of the original forest areas of the region. The area where the forest is located was not spared, as it was cleared to make way for more neighborhoods; the only part of the forest to remain relatively untouched was the area around the only remaining mogote. In 1960, the army left and abandoned the residences there and from 1974 to 1999 most of those residences were demolished. At the time the land was owned by the governmental Corporation for Housing and Urban Development (Spanish: Corporación de Renovación Urbana y Vivienda) or CRUV (today known as the Puerto Rico Department of Housin
https://en.wikipedia.org/wiki/Stresemann%27s%20bird-of-paradise | Stresemann's bird-of-paradise is a bird in the family Paradisaeidae that is an intergeneric hybrid between a Queen Carola's parotia and greater lophorina.
History
Only one female specimen is known of this hybrid, held in the Berlin Natural History Museum, coming from Mount Hunstein in the Sepik district of north-eastern New Guinea. It was first identified as a female Carola's Parotia in 1923 and later, in 1934, described as a subspecies of the Superb Bird of Paradise; it is named for its original identifier and later describer, German ornithologist Erwin Stresemann.
Notes |
https://en.wikipedia.org/wiki/Frobenius%20reciprocity | In mathematics, and in particular representation theory, Frobenius reciprocity is a theorem expressing a duality between the process of restricting and inducting. It can be used to leverage knowledge about representations of a subgroup to find and classify representations of "large" groups that contain them. It is named for Ferdinand Georg Frobenius, the inventor of the representation theory of finite groups.
Statement
Character theory
The theorem was originally stated in terms of character theory. Let $G$ be a finite group with a subgroup $H$, let $\operatorname{Res}^G_H \psi$ denote the restriction of a character, or more generally a class function, $\psi$ of $G$ to $H$, and let $\operatorname{Ind}^G_H \varphi$ denote the induced class function of a given class function $\varphi$ on $H$. For any finite group $A$, there is an inner product $\langle \cdot, \cdot \rangle_A$ on the vector space of class functions $A \to \mathbb{C}$ (described in detail in the article Schur orthogonality relations). Now, for any class functions $\varphi$ on $H$ and $\psi$ on $G$, the following equality holds:

$$\langle \operatorname{Ind}^G_H \varphi, \psi \rangle_G = \langle \varphi, \operatorname{Res}^G_H \psi \rangle_H.$$
In other words, $\operatorname{Ind}^G_H$ and $\operatorname{Res}^G_H$ are Hermitian adjoint.
Let $\varphi \colon H \to \mathbb{C}$ and $\psi \colon G \to \mathbb{C}$ be class functions.

Proof. Every class function can be written as a linear combination of irreducible characters. As $\langle \cdot, \cdot \rangle$ is a bilinear form, we can, without loss of generality, assume $\varphi$ and $\psi$ to be characters of irreducible representations of $H$ and of $G$, respectively.

We define $\varphi(s) = 0$ for all $s \in G \setminus H$. Then we have

$$\begin{aligned}
\langle \operatorname{Ind}^G_H \varphi, \psi \rangle_G
&= \frac{1}{|G|} \sum_{t \in G} \left( \operatorname{Ind}^G_H \varphi \right)(t)\, \overline{\psi(t)} \\
&= \frac{1}{|G|} \sum_{t \in G} \frac{1}{|H|} \sum_{s \in G} \varphi\left(s^{-1} t s\right) \overline{\psi(t)} \\
&= \frac{1}{|G|\,|H|} \sum_{t \in G} \sum_{s \in G} \varphi\left(s^{-1} t s\right) \overline{\psi\left(s^{-1} t s\right)} \\
&= \frac{1}{|H|} \sum_{t \in H} \varphi(t)\, \overline{\psi(t)} \\
&= \langle \varphi, \operatorname{Res}^G_H \psi \rangle_H.
\end{aligned}$$

In the course of this sequence of equations we used only the definition of induction on class functions and the properties of characters.
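As a sanity check (a sketch of my own, not part of the article), the identity can be verified numerically for the standard example G = S3 and H = A3, taking φ to be a nontrivial character of A3 and ψ the character of the two-dimensional irreducible representation of S3; the induction formula is the one used in the computation above:

```python
from itertools import permutations

# Illustration only: verify <Ind phi, psi>_G = <phi, Res psi>_H
# numerically for G = S3, H = A3.
G = list(permutations(range(3)))            # S3 as tuples p with p[i] = image of i

def compose(p, q):
    return tuple(p[q[i]] for i in range(3))  # (p o q)(i) = p(q(i))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def is_even(p):                              # parity via inversion count
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2 == 0

H = [p for p in G if is_even(p)]             # A3 = {e, (012), (021)}, order 3

w = complex(-0.5, 3 ** 0.5 / 2)              # primitive cube root of unity
phi = {(0, 1, 2): 1, (1, 2, 0): w, (2, 0, 1): w * w}  # nontrivial character of A3

def psi(p):                                  # character of the 2-dim irrep of S3
    if p == (0, 1, 2):
        return 2
    return -1 if is_even(p) else 0           # -1 on 3-cycles, 0 on transpositions

def ind_phi(t):
    # (Ind phi)(t) = (1/|H|) * sum of phi(s^-1 t s) over s with s^-1 t s in H
    total = 0
    for s in G:
        c = compose(compose(inverse(s), t), s)
        if c in phi:                         # conjugate lies in H
            total += phi[c]
    return total / len(H)

lhs = sum(ind_phi(t) * psi(t) for t in G) / len(G)  # <Ind phi, psi>_G (psi is real)
rhs = sum(phi[h] * psi(h) for h in H) / len(H)      # <phi, Res psi>_H
print(abs(lhs - rhs))                               # ~0: the two sides agree
```

Both sides come out to 1, reflecting the fact that inducing a nontrivial character of A3 yields exactly the two-dimensional irreducible character of S3.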
Alternative proof. In terms of the group algebra, i.e. by the alternative description of the induced representation, Frobenius reciprocity is a special case of a general equation for a change of rings:

$$\operatorname{Hom}_{\mathbb{C}[H]}\left(W, \operatorname{Res}^G_H V\right) \cong \operatorname{Hom}_{\mathbb{C}[G]}\left(\mathbb{C}[G] \otimes_{\mathbb{C}[H]} W, V\right).$$

This equation is by definition equivalent to

$$\langle W, \operatorname{Res}^G_H V \rangle_H = \langle \operatorname{Ind}^G_H W, V \rangle_G.$$

As this bilinear form tallies the bilinear form on the corresponding characters, the theorem follows without calculation.
Module theory
As explained in the section Representation theory of finite groups#Representations, modules and the convolution algebra, the theory of the representations of a group |
https://en.wikipedia.org/wiki/Coding%20best%20practices | Coding best practices or programming best practices are a set of informal rules (best practices) that many software developers in computer programming follow to improve software quality.
Many computer programs remain in use for long periods of time, so any rules need to facilitate both initial development and subsequent maintenance and enhancement of source code by people other than the original authors.
In the ninety-ninety rule, Tom Cargill is credited with an explanation as to why programming projects often run late: "The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time." Any guidance which can redress this lack of foresight is worth considering.
The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.
Software quality
As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both effort required and efficiency. Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes.
Sommerville has identified four generalized attributes which are not concerned with what a program does, but how well the program does it:
Maintainability
Dependability
Efficiency
Usability
Weinberg has identified four targets which a good program should meet:
Does a program meet its specification ("correct output for each possible input")?
Is the program produced on schedule (and within budget)?
How adaptable is the program to cope with changing requirements?
Is the program efficient enough for the environment in which i |
https://en.wikipedia.org/wiki/Murid%20gammaherpesvirus%207 | Murid gammaherpesvirus 7 is a species of virus in the genus Rhadinovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. |
https://en.wikipedia.org/wiki/Biological%20data%20visualization | Biological data visualization is a branch of bioinformatics concerned with the application of computer graphics, scientific visualization, and information visualization to different areas of the life sciences. This includes visualization of sequences, genomes, alignments, phylogenies, macromolecular structures, systems biology, microscopy, and magnetic resonance imaging data. Software tools used for visualizing biological data range from simple, standalone programs to complex, integrated systems.
State-of-the-art and perspectives
Today we are experiencing a rapid growth in volume and diversity of biological data, presenting an increasing challenge for biologists. A key step in understanding and learning from these data is visualization. Thus, there has been a corresponding increase in the number and diversity of systems for visualizing biological data.
An emerging trend is the blurring of boundaries between the visualization of 3D structures at atomic resolution, visualization of larger complexes by cryo-electron microscopy, and visualization of the location of proteins and complexes within whole cells and tissues.
A second emerging trend is an increase in the availability and importance of time-resolved data from systems biology, electron microscopy and cell and tissue imaging. In contrast, visualization of trajectories has long been a prominent part of molecular dynamics.
Finally, as datasets are increasing in size, complexity, and interconnectedness, biological visualization systems are improving in usability, data integration and standardization.
List of visualization software
Many software systems are available for visualizing biological data. The list below includes some popularly used software and systems, grouped by application area.
Medusa - A simple tool for interaction graph analysis. It is a Java-based application and is available as an applet.
Cytoscape - An open source software for integrating bio-molecular interaction networks with high-throughput |
https://en.wikipedia.org/wiki/Pin%20trading | Pin trading is the practice of buying, selling, and exchanging collectible pins – most often lapel pins associated with a particular common theme, as well as related items – such as lanyards, bags, and hats to store and display the pins – as a hobby. Collectible pins used in pin trading are often found in amusement parks and resorts; the Walt Disney World and Disneyland resorts, for example, are venues where Disney pin trading has become a popular activity, and similar pin trading activities are popular at comparable venues such as SeaWorld, Universal Resorts, and at Six Flags theme parks. They are also found at events that are recurring and/or share a common theme, such as the Olympic Games and other sporting events. The pins collected and traded are often of a limited edition and thus more highly valued in pin trading, and are sometimes marked or distributed by various companies such as The Coca-Cola Company who sponsor the events and venues associated with the traded pins. Pin trading at particular venues and events is often governed by rules of etiquette particular to the venue or occasion.
Pin trading is also an annual tradition of the Pasadena Tournament of Roses Game and Parade. Participating teams, marching bands, floats, sponsors, and the parade's Grand Marshal each have their own custom pin.
Many clubs, sports teams, events, and churches trade and collect custom pins that were made specifically for their organization. Quick manufacturing processes allow these pins to be produced at a low cost and in small quantities. Collections of these pins are often worn by the collector on an article of clothing such as a hat, vest, or scarf.
Pin trading and collecting may have originated with the sport of curling, as some of the oldest pins that could be described as trading pins are from curling clubs dating back to the mid-nineteenth century.
Pin trading also has a long-standing history in baseball. It is common for little leagues to trade team pins during the L
https://en.wikipedia.org/wiki/%CE%97%20set | In mathematics, an η set (eta set) is a type of totally ordered set introduced by Felix Hausdorff that generalizes the order type η of the rational numbers.
Definition
If α is an ordinal, then an ηα set is a totally ordered set in which, for any two subsets A and B of cardinality less than ℵα, if every element of A is less than every element of B, then there is some element greater than all elements of A and less than all elements of B.
Examples
The only non-empty countable η0 set (up to isomorphism) is the ordered set of rational numbers.
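A minimal illustration (the function name and the use of Python's Fraction type are my own, not from the article): for finite subsets A and B of the rationals with every element of A below every element of B, a rational strictly in between always exists, which is the η0 condition restricted to finite sets.

```python
from fractions import Fraction

def between(A, B):
    """Return a rational greater than all of A and less than all of B,
    where A and B are finite sets of Fractions with max(A) < min(B).
    An empty side means there is no constraint in that direction."""
    if not A and not B:
        return Fraction(0)
    if not A:
        return min(B) - 1              # anything below B works
    if not B:
        return max(A) + 1              # anything above A works
    assert max(A) < min(B)
    return (max(A) + min(B)) / 2       # density of Q: the midpoint works

A = {Fraction(1, 3), Fraction(2, 5)}
B = {Fraction(1, 2), Fraction(3, 4)}
m = between(A, B)
print(m)                               # a rational strictly between A and B
assert all(a < m for a in A) and all(m < b for b in B)
```

For the full η0 property the subsets may be countably infinite but of cardinality below ℵ0, i.e. finite, so this finite-set check captures the whole condition for α = 0.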
Suppose that κ = ℵα is a regular cardinal and let X be the set of all functions f from κ to {−1, 0, 1} such that if f(γ) = 0 then f(β) = 0 for all β > γ, ordered lexicographically. Then X is an ηα set. The union of all these sets is the class of surreal numbers.
A dense totally ordered set without endpoints is an ηα set if and only if it is ℵα-saturated.
Properties
Any ηα set X is universal for totally ordered sets of cardinality at most ℵα, meaning that any such set can be embedded into X.
For any given ordinal α, any two ηα sets of cardinality ℵα are isomorphic (as ordered sets). An ηα set of cardinality ℵα exists if ℵα is regular and Σβ<α 2^ℵβ ≤ ℵα.
https://en.wikipedia.org/wiki/Thue%20equation | In mathematics, a Thue equation is a Diophantine equation of the form
ƒ(x,y) = r,
where ƒ is an irreducible bivariate form of degree at least 3 over the rational numbers, and r is a nonzero rational number. It is named after Axel Thue, who in 1909 proved that a Thue equation can have only finitely many solutions in integers x and y, a result known as Thue's theorem.
The Thue equation is solvable effectively: there is an explicit bound on the solutions x, y of the form (C1r)^C2, where the constants C1 and C2 depend only on the form ƒ. A stronger result holds: if K is the field generated by the roots of ƒ, then the equation has only finitely many solutions with x and y integers of K, and again these may be effectively determined.
Finiteness of solutions and diophantine approximation
Thue's original proof that the equation named in his honour has finitely many solutions is through the proof of what is now known as Thue's theorem: it asserts that for any algebraic number α of degree d ≥ 3 and for any ε > 0, there exist only finitely many coprime integers p, q with q > 0 such that |α − p/q| < q^−(d/2 + 1 + ε). Applying this theorem allows one to almost immediately deduce the finiteness of solutions. However, Thue's proof, as well as subsequent improvements by Siegel, Dyson, and Roth, were all ineffective.
Solution algorithm
Finding all solutions to a Thue equation can be achieved by a practical algorithm, which has been implemented in the following computer algebra systems:
in PARI/GP as functions thueinit() and thue().
in Magma computer algebra system as functions ThueObject() and ThueSolve().
in Mathematica through Reduce
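The systems above use algebraic number theory to find all solutions with a proof of completeness. As a purely illustrative sketch (the sample equation x³ − 2y³ = 1 and the helper name are my own choices), a naive search over a finite box recovers the small solutions, though unlike thue() in PARI/GP it cannot certify that no larger solutions exist:

```python
# Illustration only: exhaustive search for integer solutions of the
# Thue equation x^3 - 2*y^3 = 1 inside a finite box. Real solvers use
# effective bounds (e.g. from Baker's method) to prove such a list is
# complete; a box search by itself cannot.
def thue_box_search(f, r, bound):
    """All (x, y) with |x|, |y| <= bound and f(x, y) == r."""
    return [(x, y)
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)
            if f(x, y) == r]

f = lambda x, y: x ** 3 - 2 * y ** 3
sols = thue_box_search(f, 1, 50)
print(sols)   # [(-1, -1), (1, 0)]
```

For this particular cubic form it is a classical fact that (1, 0) and (−1, −1) are the only integer solutions, so here the box search happens to find them all.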
Bounding the number of solutions
While there are several effective methods to solve Thue equations (including using Baker's method and Skolem's p-adic method), these are not able to give the best theoretical bounds on the number of solutions. One may qualify an effective bound of the Thue equation by the parameters it depends on, and how "good" the dependence is.
The best result known today, |