id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
24,343,133 | https://en.wikipedia.org/wiki/C16H25NO | {{DISPLAYTITLE:C16H25NO}}
The molecular formula C16H25NO (molar mass: 247.38 g/mol) may refer to:
Butidrine, also called hydrobutamine
5-OH-DPAT
7-OH-DPAT
8-OH-DPAT
Picenadol, an opioid analgesic drug
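As a quick check on the quoted molar mass, here is a minimal Python sketch that recomputes it from standard (rounded) atomic weights; the small weight table is illustrative, not authoritative.

```python
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol

def molar_mass(counts):
    """counts maps an element symbol to its atom count in the formula."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

# C16H25NO: 16 C, 25 H, 1 N, 1 O
print(round(molar_mass({"C": 16, "H": 25, "N": 1, "O": 1}), 2))  # 247.38
```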
Molecular formulas | C16H25NO | [
"Physics",
"Chemistry"
] | 86 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,343,146 | https://en.wikipedia.org/wiki/C24H31NO | {{DISPLAYTITLE:C24H31NO}}
The molecular formula C24H31NO (molar mass: 349.51 g/mol, exact mass: 349.2406 u) may refer to:
AB-001 (1-pentyl-3-(1-adamantoyl)indole)
Abiraterone
3-Keto-5α-abiraterone
Dipipanone
Molecular formulas | C24H31NO | [
"Physics",
"Chemistry"
] | 95 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,343,585 | https://en.wikipedia.org/wiki/C25H32N2O | {{DISPLAYTITLE:C25H32N2O}}
The molecular formula C25H32N2O (molar mass: 376.53 g/mol, exact mass: 376.2515 u) may refer to:
3-Allylfentanyl
Cyclopentylfentanyl
Cysmethynil
Molecular formulas | C25H32N2O | [
"Physics",
"Chemistry"
] | 77 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,343,615 | https://en.wikipedia.org/wiki/C21H28N2OS | {{DISPLAYTITLE:C21H28N2OS}}
The molecular formula C21H28N2OS (molar mass: 356.52 g/mol, exact mass: 356.1922 u) may refer to:
α-Methylthiofentanyl
3-Methylthiofentanyl
Molecular formulas | C21H28N2OS | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,344,062 | https://en.wikipedia.org/wiki/C25H35NO4 | {{DISPLAYTITLE:C25H35NO4}}
The molecular formula C25H35NO4 (molar mass: 413.54 g/mol, exact mass: 413.2566 u) may refer to:
Dihydroetorphine, an analgesic drug
Norbuprenorphine
Molecular formulas | C25H35NO4 | [
"Physics",
"Chemistry"
] | 72 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,344,301 | https://en.wikipedia.org/wiki/C19H24N2O | {{DISPLAYTITLE:C19H24N2O}}
The molecular formula C19H24N2O (molar mass: 296.41 g/mol) may refer to:
Eburnamine
Heyneanine
Noribogaine
Imipraminoxide
Palonosetron
RTI-171
Molecular formulas | C19H24N2O | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,344,684 | https://en.wikipedia.org/wiki/C33H38N2O | {{DISPLAYTITLE:C33H38N2O}}
The molecular formula C33H38N2O (molar mass: 478.67 g/mol, exact mass: 478.2984 u) may refer to:
RWJ-394674
Molecular formulas | C33H38N2O | [
"Physics",
"Chemistry"
] | 65 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
30,387,760 | https://en.wikipedia.org/wiki/HumHot | HUMHOT is a database of human meiotic recombination hot spot DNA sequences.
See also
meiotic recombination
References
External links
http://www.jncasr.ac.in/humhot.
Biological databases
Human genetics
DNA repair | HumHot | [
"Biology"
] | 54 | [
"DNA repair",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Biological databases"
] |
30,388,594 | https://en.wikipedia.org/wiki/William%20Moffitt | William E. Moffitt (9 November 1925 – 19 December 1958) was a British quantum chemist. He died after a heart attack following a squash match. He had been thought to be one of Britain's most gifted academics.
Early life
Moffitt was born in Berlin, Germany to British parents; his father was working in Berlin on behalf of the British government. He was educated by private tuition up to the age of 11. He attended Harrow School from 1936 to 1943. His chemistry master later said of him that "he was undoubtedly the most able of a decade of gifted boys ... [and] had a profound effect on all who met him. He did more than anyone to create in the school the intellectual climate so necessary for the stimulation of young minds".
Academic career
He then studied chemistry at New College, Oxford, on an open scholarship, and graduated with first-class honours. His D.Phil. supervisor, Charles Coulson, later wrote:
[his] exuberant delight in life remained with him to the end. "Moffitt's method of Atoms in Molecules" will remain for many years to remind us of his remarkable ability to initiate new ways of thinking in his professional subject.
After receiving his D.Phil. for research in quantum chemistry, he joined the research staff of the British Rubber Producers Research Association.
He was made an Assistant Professor at Harvard in January 1953, and was given an A.M. honoris causa in 1955. His colleague Edgar Bright Wilson said:
Few men had as great an impact at so early an age. The reasons are clear. Few have been endowed with such a sparkling, quick and keen intelligence, with such a capacity for spending long hours in the thorough study of fundamental subjects ... His intellectual powers were not only applied to the solution of problems but perhaps even more to their wise selection. He avoided areas where only formal solutions were attainable, with no contact with experience.
Doctoral students who were advised by Moffitt include R. Stephen Berry and S. M. Blinder.
Personal life and interests
He married Dorothy Silberman in 1956 and had a daughter, Alison, in June 1958. He was a keen rugby player and enjoyed music and the arts, particularly English literature. While sharing a cabin with a monk on a voyage to the UK from the US, he discussed the philosophy of religion with him in their only common language, Latin.
References
1925 births
1958 deaths
People educated at Harrow School
Alumni of New College, Oxford
Harvard University faculty
English chemists
Theoretical chemists | William Moffitt | [
"Chemistry"
] | 517 | [
"Quantum chemistry",
"Theoretical chemistry",
"Theoretical chemists",
"Physical chemists"
] |
32,975,011 | https://en.wikipedia.org/wiki/The%20Principles%20of%20Quantum%20Mechanics | The Principles of Quantum Mechanics is an influential monograph on quantum mechanics written by Paul Dirac and first published by Oxford University Press in 1930.
Dirac gives an account of quantum mechanics by "demonstrating how to construct a completely new theoretical framework from scratch"; "problems were tackled top-down, by working on the great principles, with the details left to look after themselves". It leaves classical physics behind after the first chapter, presenting the subject with a logical structure. Its 82 sections contain 785 equations with no diagrams.
Dirac is credited with developing the subject "particularly in the University of Cambridge and University of Göttingen between 1925–1927", according to Graham Farmelo. It is considered one of the most influential texts on quantum mechanics, with theoretical physicist Laurie M. Brown stating that it "set the stage, the tone, and much of the language of the quantum-mechanical revolution".
History
The first and second editions of the book were published in 1930 and 1935.
In 1947 the third edition of the book was published, in which the chapter on quantum electrodynamics was rewritten, particularly with the inclusion of electron–positron creation.
In the fourth edition, 1958, the same chapter was revised, adding new sections on interpretation and applications. Later a revised fourth edition appeared in 1967.
Beginning with the third edition (1947), the mathematical descriptions of quantum states and operators were changed to use the Bra–ket notation, introduced in 1939 and largely developed by Dirac himself.
Laurie Brown wrote an article describing the book's evolution through its different editions, and Helge Kragh surveyed reviews by physicists (including Werner Heisenberg, Wolfgang Pauli, and others) from the time of Dirac's book's publication.
Contents
The principle of superposition
Dynamical variables and observables
Representations
The quantum conditions
The equations of motion
Elementary applications
Perturbation theory
Collision problems
Systems containing several similar particles
Theory of radiation
Relativistic theory of the electron
Quantum electrodynamics
See also
The Evolution of Physics (Einstein and Infeld)
The Feynman Lectures on Physics Vol. III (Feynman)
The Physical Principles of the Quantum Theory (Heisenberg)
Mathematical Foundations of Quantum Mechanics (von Neumann)
References
1930 non-fiction books
1930 in science
Monographs
Oxford University Press books
Paul Dirac
Physics textbooks
Quantum mechanics | The Principles of Quantum Mechanics | [
"Physics"
] | 477 | [
"Quantum mechanics",
"Works about quantum mechanics"
] |
27,621,133 | https://en.wikipedia.org/wiki/Fekete%20problem | In mathematics, the Fekete problem is, given a natural number N and a real s ≥ 0, to find the points x1,...,xN on the 2-sphere for which the s-energy, defined by
for s > 0 and by
for s = 0, is minimal. For s > 0, such points are called s-Fekete points, and for s = 0, logarithmic Fekete points (see ).
More generally, one can consider the same problem on the d-dimensional sphere, or on a Riemannian manifold (in which case ||xi −xj|| is replaced with the Riemannian distance between xi and xj).
The problem originated in a paper by Michael Fekete, who considered the one-dimensional, s = 0 case, answering a question of Issai Schur.
An algorithmic version of the Fekete problem is number 7 on Smale's list of mathematical problems; a numerical illustration is sketched below.
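The minimization is straightforward to approximate numerically, even though finding certified global minimizers is the hard algorithmic question. Below is a minimal sketch (assuming only NumPy) that runs projected gradient descent on the s-energy of N points on the 2-sphere; it finds local minima only, so it illustrates the problem rather than solving it.

```python
import numpy as np

def s_energy(X, s):
    """s-energy of the rows of X (unit vectors); logarithmic energy if s == 0."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    r = d[np.triu_indices(len(X), k=1)]              # pairwise distances, i < j
    return np.sum(np.log(1.0 / r)) if s == 0 else np.sum(1.0 / r**s)

def fekete_descent(N, s=1.0, steps=5000, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(N, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # random start on the sphere
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]         # xi - xj for every pair
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                  # no self-interaction
        coef = 1.0 / d**2 if s == 0 else s / d**(s + 2)
        grad = -(coef[:, :, None] * diff).sum(axis=1)  # gradient of the s-energy
        X -= lr * grad                               # descent step
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back onto sphere
    return X

X = fekete_descent(N=12, s=1.0)   # s = 1 is the classical Thomson problem
print(s_energy(X, s=1.0))
```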
References
Mathematical analysis
Approximation theory | Fekete problem | [
"Mathematics"
] | 203 | [
"Approximation theory",
"Mathematical analysis",
"Mathematical relations",
"Approximations"
] |
1,396,924 | https://en.wikipedia.org/wiki/Closed%20monoidal%20category | In mathematics, especially in category theory, a closed monoidal category (or a monoidal closed category) is a category that is both a monoidal category and a closed category in such a way that the structures are compatible.
A classic example is the category of sets, Set, where the monoidal product of sets and is the usual cartesian product , and the internal Hom is the set of functions from to . A non-cartesian example is the category of vector spaces, K-Vect, over a field . Here the monoidal product is the usual tensor product of vector spaces, and the internal Hom is the vector space of linear maps from one vector space to another.
The internal language of closed symmetric monoidal categories is linear logic and the type system is the linear type system. Many examples of closed monoidal categories are symmetric. However, this need not always be the case, as non-symmetric monoidal categories can be encountered in category-theoretic formulations of linguistics; roughly speaking, this is because word-order in natural language matters.
Definition
A closed monoidal category is a monoidal category such that for every object the functor given by right tensoring with
has a right adjoint, written
This means that there exists a bijection, called 'currying', between the Hom-sets
that is natural in both A and C. In a different, but common notation, one would say that the functor
has a right adjoint
Equivalently, a closed monoidal category is a category equipped, for every two objects A and B, with
an object ,
a morphism ,
satisfying the following universal property: for every morphism
there exists a unique morphism
such that
It can be shown that this construction defines a functor . This functor is called the internal Hom functor, and the object is called the internal Hom of and . Many other notations are in common use for the internal Hom. When the tensor product on is the cartesian product, the usual notation is and this object is called the exponential object.
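In Set, the currying bijection above is ordinary currying of functions, which can be made concrete in a few lines of Python; the names curry and uncurry below are illustrative, not standard notation from the literature.

```python
def curry(f):
    """Send f : A x B -> C to its transpose A -> (B -> C)."""
    return lambda a: (lambda b: f(a, b))

def uncurry(g):
    """Send g : A -> (B -> C) back to f : A x B -> C."""
    return lambda a, b: g(a)(b)

add = lambda a, b: a + b
add_three = curry(add)(3)                        # an element of the internal Hom
assert add_three(4) == 7
assert uncurry(curry(add))(2, 5) == add(2, 5)    # the two directions are inverse
```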
Biclosed and symmetric categories
Strictly speaking, we have defined a right closed monoidal category, since we required that right tensoring with any object has a right adjoint. In a left closed monoidal category, we instead demand that the functor of left tensoring with any object
have a right adjoint
A biclosed monoidal category is a monoidal category that is both left and right closed.
A symmetric monoidal category is left closed if and only if it is right closed. Thus we may safely speak of a 'symmetric monoidal closed category' without specifying whether it is left or right closed. In fact, the same is true more generally for braided monoidal categories: since the braiding makes naturally isomorphic to , the distinction between tensoring on the left and tensoring on the right becomes immaterial, so every right closed braided monoidal category becomes left closed in a canonical way, and vice versa.
We have described closed monoidal categories as monoidal categories with an extra property. One can equivalently define a closed monoidal category to be a closed category with an extra property. Namely, we can demand the existence of a tensor product that is left adjoint to the internal Hom functor.
In this approach, closed monoidal categories are also called monoidal closed categories.
Examples
Every cartesian closed category is a symmetric, monoidal closed category, when the monoidal structure is the cartesian product structure. The internal Hom functor is given by the exponential object .
In particular, the category of sets, Set, is a symmetric, closed monoidal category. Here the internal Hom is just the set of functions from to .
The category of modules, R-Mod over a commutative ring R is a non-cartesian, symmetric, monoidal closed category. The monoidal product is given by the tensor product of modules and the internal Hom is given by the space of R-linear maps with its natural R-module structure.
In particular, the category of vector spaces over a field is a symmetric, closed monoidal category.
Abelian groups can be regarded as Z-modules, so the category of abelian groups is also a symmetric, closed monoidal category.
A symmetric compact closed category is a symmetric monoidal closed category in which the internal Hom functor is given by . The canonical example is the category of finite-dimensional vector spaces, FdVect.
Counterexamples
The category of rings is a symmetric, monoidal category under the tensor product of rings, with serving as the unit object. This category is not closed. If it were, there would be exactly one homomorphism between any pair of rings: . The same holds for the category of R-algebras over a commutative ring R.
See also
Isbell conjugacy
References
Monoidal categories
Closed categories | Closed monoidal category | [
"Mathematics"
] | 1,009 | [
"Closed categories",
"Mathematical structures",
"Category theory",
"Monoidal categories"
] |
1,396,948 | https://en.wikipedia.org/wiki/Particle%20filter | Particle filters, also known as sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to find approximate solutions for filtering problems for nonlinear state-space systems, such as signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system. The objective is to compute the posterior distributions of the states of a Markov process, given the noisy and partial observations. The term "particle filters" was first coined in 1996 by Pierre Del Moral about mean-field interacting particle methods used in fluid mechanics since the beginning of the 1960s. The term "Sequential Monte Carlo" was coined by Jun S. Liu and Rong Chen in 1998.
Particle filtering uses a set of particles (also called samples) to represent the posterior distribution of a stochastic process given the noisy and/or partial observations. The state-space model can be nonlinear and the initial state and noise distributions can take any form required. Particle filter techniques provide a well-established methodology for generating samples from the required distribution without requiring assumptions about the state-space model or the state distributions. However, these methods do not perform well when applied to very high-dimensional systems.
Particle filters update their prediction in an approximate (statistical) manner. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function. Weight disparity leading to weight collapse is a common issue encountered in these filtering algorithms. However, it can be mitigated by including a resampling step before the weights become too uneven. Several adaptive resampling criteria can be used, including the variance of the weights and the relative entropy with respect to the uniform distribution. In the resampling step, the particles with negligible weights are replaced by new particles in the proximity of the particles with higher weights.
From the statistical and probabilistic point of view, particle filters may be interpreted as mean-field particle interpretations of Feynman-Kac probability measures. These particle integration techniques were developed in molecular chemistry and computational physics by Theodore E. Harris and Herman Kahn in 1951, Marshall N. Rosenbluth and Arianna W. Rosenbluth in 1955, and more recently by Jack H. Hetherington in 1984. In computational physics, these Feynman-Kac type path particle integration methods are also used in Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods. Feynman-Kac interacting particle methods are also strongly related to mutation-selection genetic algorithms currently used in evolutionary computation to solve complex optimization problems.
The particle filter methodology is used to solve Hidden Markov Model (HMM) and nonlinear filtering problems. With the notable exception of linear-Gaussian signal-observation models (Kalman filter) or wider classes of models (Beneš filter), Mireille Chaleyat-Maurel and Dominique Michel proved in 1984 that the sequence of posterior distributions of the random states of a signal, given the observations (a.k.a. the optimal filter), has no finite recursion. Various other numerical methods based on fixed grid approximations, Markov Chain Monte Carlo techniques, conventional linearization, extended Kalman filters, or determining the best linear system (in the expected cost-error sense) are unable to cope with large-scale systems, unstable processes, or insufficiently smooth nonlinearities.
Particle filters and Feynman-Kac particle methodologies find application in signal and image processing, Bayesian inference, machine learning, risk analysis and rare event sampling, engineering and robotics, artificial intelligence, bioinformatics, phylogenetics, computational science, economics and mathematical finance, molecular chemistry, computational physics, pharmacokinetics, quantitative risk and insurance and other fields.
History
Heuristic-like algorithms
From a statistical and probabilistic viewpoint, particle filters belong to the class of branching/genetic type algorithms, and mean-field type interacting particle methodologies. The interpretation of these particle methods depends on the scientific discipline. In Evolutionary Computing, mean-field genetic type particle methodologies are often used as heuristic and natural search algorithms (a.k.a. Metaheuristic). In computational physics and molecular chemistry, they are used to solve Feynman-Kac path integration problems or to compute Boltzmann-Gibbs measures, top eigenvalues, and ground states of Schrödinger operators. In Biology and Genetics, they represent the evolution of a population of individuals or genes in some environment.
The origins of mean-field type evolutionary computational techniques can be traced back to 1950 and 1954 with Alan Turing's work on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey. The first trace of particle filters in statistical methodology dates back to the mid-1950s: the "Poor Man's Monte Carlo" proposed by John Hammersley et al. in 1954 contained hints of the genetic-type particle filtering methods used today. In 1963, Nils Aall Barricelli simulated a genetic type algorithm to mimic the ability of individuals to play a simple game. In the evolutionary computing literature, genetic-type mutation-selection algorithms became popular through the seminal work of John Holland in the early 1970s, particularly his book published in 1975.
In Biology and Genetics, the Australian geneticist Alex Fraser also published in 1957 a series of papers on the genetic type simulation of artificial selection of organisms. The computer simulation of the evolution by biologists became more common in the early 1960s, and the methods were described in books by Fraser and Burnell (1970) and Crosby (1973). Fraser's simulations included all of the essential elements of modern mutation-selection genetic particle algorithms.
From the mathematical viewpoint, the conditional distribution of the random states of a signal given some partial and noisy observations is described by a Feynman-Kac probability on the random trajectories of the signal weighted by a sequence of likelihood potential functions. Quantum Monte Carlo, and more specifically Diffusion Monte Carlo methods can also be interpreted as a mean-field genetic type particle approximation of Feynman-Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. One can also quote the earlier seminal works of Theodore E. Harris and Herman Kahn in particle physics, published in 1951, using mean-field but heuristic-like genetic methods for estimating particle transmission energies. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of genetic particle algorithms in advanced signal processing and Bayesian inference is more recent. In January 1993, Genshiro Kitagawa developed a "Monte Carlo filter"; a slightly modified version of this article appeared in 1996. In April 1993, Neil J. Gordon et al. published in their seminal work an application of a genetic-type algorithm to Bayesian statistical inference. The authors named their algorithm "the bootstrap filter" and demonstrated that, compared to other filtering methods, their bootstrap algorithm does not require any assumption about the state space or the noise of the system. Independent work on particle filters by Himilcon Carvalho, Pierre Del Moral, André Monin, and Gérard Salut was published in the mid-1990s. Particle filters had also been developed in signal processing in 1989–1992 by P. Del Moral, J.C. Noyer, G. Rigal, and G. Salut at the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on RADAR/SONAR and GPS signal processing problems.
Mathematical foundations
From 1950 to 1996, all the publications on particle filters and genetic algorithms, including the pruning and resampling Monte Carlo methods introduced in computational physics and molecular chemistry, presented natural and heuristic-like algorithms applied to different situations, without a single proof of their consistency nor any discussion of the bias of the estimates or of genealogical and ancestral tree-based algorithms.
The mathematical foundations and the first rigorous analysis of these particle algorithms are due to Pierre Del Moral in 1996. The article also contains proof of the unbiased properties of a particle approximation of likelihood functions and unnormalized conditional probability measures. The unbiased particle estimator of the likelihood functions presented in this article is used today in Bayesian statistical inference.
Dan Crisan, Jessica Gaines, and Terry Lyons, as well as Pierre Del Moral and Terry Lyons, created branching-type particle techniques with various population sizes around the end of the 1990s. P. Del Moral, A. Guionnet, and L. Miclo made further advances in this subject in 2000. Pierre Del Moral and Alice Guionnet proved the first central limit theorems in 1999, and Pierre Del Moral and Laurent Miclo proved them in 2000. The first uniform convergence results with respect to the time parameter for particle filters were developed at the end of the 1990s by Pierre Del Moral and Alice Guionnet. The first rigorous analysis of genealogical tree-based particle filter smoothers is due to P. Del Moral and L. Miclo in 2001.
The theory on Feynman-Kac particle methodologies and related particle filter algorithms was developed in 2000 and 2004 in the books. These abstract probabilistic models encapsulate genetic type algorithms, particle, and bootstrap filters, interacting Kalman filters (a.k.a. Rao–Blackwellized particle filter), importance sampling and resampling style particle filter techniques, including genealogical tree-based and particle backward methodologies for solving filtering and smoothing problems. Other classes of particle filtering methodologies include genealogical tree-based models, backward Markov particle models, adaptive mean-field particle models, island-type particle models, particle Markov chain Monte Carlo methodologies, Sequential Monte Carlo samplers and Sequential Monte Carlo Approximate Bayesian Computation methods and Sequential Monte Carlo ABC based Bayesian Bootstrap.
The filtering problem
Objective
A particle filter's goal is to estimate the posterior density of state variables given observation variables. The particle filter is intended for use with a hidden Markov Model, in which the system includes both hidden and observable variables. The observable variables (observation process) are linked to the hidden variables (state-process) via a known functional form. Similarly, the probabilistic description of the dynamical system defining the evolution of the state variables is known.
A generic particle filter estimates the posterior distribution of the hidden states using the observation measurement process. With respect to a state-space such as the one below:
the filtering problem is to estimate sequentially the values of the hidden states , given the values of the observation process at any time step k.
All Bayesian estimates of follow from the posterior density . The particle filter methodology provides an approximation of these conditional probabilities using the empirical measure associated with a genetic type particle algorithm. In contrast, the Markov Chain Monte Carlo or importance sampling approach would model the full posterior .
The Signal-Observation model
Particle methods often assume and the observations can be modeled in this form:
is a Markov process on (for some ) that evolves according to the transition probability density . This model is also often written in a synthetic way as
with an initial probability density .
The observations take values in some state space on (for some ) and are conditionally independent provided that are known. In other words, each only depends on . In addition, we assume the conditional distributions for given are absolutely continuous, and in a synthetic way we have
An example of a system with these properties is:
where both and are mutually independent sequences with known probability density functions and g and h are known functions. These two equations can be viewed as state space equations and look similar to the state space equations for the Kalman filter. If the functions g and h in the above example are linear, and if both and are Gaussian, the Kalman filter finds the exact Bayesian filtering distribution. If not, Kalman filter-based methods are a first-order approximation (EKF) or a second-order approximation (UKF in general, but if the probability distribution is Gaussian a third-order approximation is possible).
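As a concrete stand-in for the omitted display, the sketch below simulates a state-space model of this additive-noise form; the particular g, h and noise scales are a simplified variant of a common benchmark and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):                        # nonlinear state transition
    return 0.5 * x + 25.0 * x / (1.0 + x**2)

def h(x):                        # nonlinear observation map
    return x**2 / 20.0

T, x = 50, 0.0
states, observations = [], []
for k in range(T):
    x = g(x) + rng.normal(scale=np.sqrt(10.0))   # process noise
    y = h(x) + rng.normal(scale=1.0)             # observation noise
    states.append(x)
    observations.append(y)
# If g and h were linear and the noises Gaussian, the Kalman filter would be
# exact; for a model like this a particle filter is the natural choice.
```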
The assumption that the initial distribution and the transitions of the Markov chain are continuous for the Lebesgue measure can be relaxed. To design a particle filter we simply need to assume that we can sample the transitions of the Markov chain and to compute the likelihood function (see for instance the genetic selection mutation description of the particle filter given below). The continuous assumption on the Markov transitions of is only used to derive in an informal (and rather abusive) way different formulae between posterior distributions using the Bayes' rule for conditional densities.
Approximate Bayesian computation models
In certain problems, the conditional distribution of observations, given the random states of the signal, may fail to have a density; the latter may be impossible or too complex to compute. In this situation, an additional level of approximation is needed. One strategy is to replace the signal by the Markov chain and to introduce a virtual observation of the form
for some sequence of independent random variables with known probability density functions. The central idea is to observe that
The particle filter associated with the Markov process given the partial observations is defined in terms of particles evolving in with a likelihood function given with some obvious abusive notation by . These probabilistic techniques are closely related to Approximate Bayesian Computation (ABC). In the context of particle filters, these ABC particle filtering techniques were introduced in 1998 by P. Del Moral, J. Jacod and P. Protter. They were further developed by P. Del Moral, A. Doucet and A. Jasra.
The nonlinear filtering equation
Bayes' rule for conditional probability gives:
where
Particle filters are also an approximation, but with enough particles they can be much more accurate. The nonlinear filtering equation is given by the recursion
with the convention for k = 0. The nonlinear filtering problem consists in computing these conditional distributions sequentially.
Feynman-Kac formulation
We fix a time horizon n and a sequence of observations , and for each k = 0, ..., n we set:
In this notation, for any bounded function F on the set of trajectories of from the origin k = 0 up to time k = n, we have the Feynman-Kac formula
Feynman-Kac path integration models arise in a variety of scientific disciplines, including in computational physics, biology, information theory and computer sciences. Their interpretations are dependent on the application domain. For instance, if we choose the indicator function of some subset of the state space, they represent the conditional distribution of a Markov chain given it stays in a given tube; that is, we have:
and
as soon as the normalizing constant is strictly positive.
Particle filters
A Genetic type particle algorithm
Initially, such an algorithm starts with N independent random variables with common probability density . The genetic algorithm selection-mutation transitions
mimic/approximate the updating-prediction transitions of the optimal filter evolution ():
During the selection-updating transition we sample N (conditionally) independent random variables with common (conditional) distribution
where stands for the Dirac measure at a given state a.
During the mutation-prediction transition, from each selected particle we sample independently a transition
In the above displayed formulae stands for the likelihood function evaluated at , and stands for the conditional density evaluated at .
At each time k, we have the particle approximations
and
In the genetic algorithms and evolutionary computing community, the mutation-selection Markov chain described above is often called the genetic algorithm with proportional selection. Several branching variants, including some with random population sizes, have also been proposed in the articles; a code sketch of one selection-mutation step is given below.
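In the sketch, the likelihood and transition callables are placeholders for the model-specific conditional densities and are assumed to be supplied by the user.

```python
import numpy as np

def selection_mutation_step(particles, y, likelihood, transition, rng):
    """One genetic-type update: proportional selection by the likelihood of
    the current observation y, then mutation through the signal transition."""
    particles = np.asarray(particles)
    w = np.array([likelihood(x, y) for x in particles])
    w = w / w.sum()                                   # proportional selection weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    selected = particles[idx]                         # sample from the Dirac mixture
    return np.array([transition(x, rng) for x in selected])  # mutation-prediction
```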
Monte Carlo principles
Particle methods, like all sampling-based approaches (e.g., Markov Chain Monte Carlo), generate a set of samples that approximate the filtering density
For example, we may have N samples from the approximate posterior distribution of , where the samples are labeled with superscripts as:
Then, expectations with respect to the filtering distribution are approximated by
with
where stands for the Dirac measure at a given state a. The function f, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some approximation error. When the approximation equation () is satisfied for any bounded function f we write
Particle filters can be interpreted as a genetic type particle algorithm evolving with mutation and selection transitions. We can keep track of the ancestral lines
of the particles . The random states , with the lower indices l=0,...,k, stand for the ancestors of the individual at levels l=0,...,k. In this situation, we have the approximation formula
with the empirical measure
Here F stands for any bounded function on the path space of the signal. In a more synthetic form () is equivalent to
Particle filters can be interpreted in many different ways. From the probabilistic point of view they coincide with a mean-field particle interpretation of the nonlinear filtering equation. The updating-prediction transitions of the optimal filter evolution can also be interpreted as the classical genetic type selection-mutation transitions of individuals. The sequential importance resampling technique provides another interpretation of the filtering transitions coupling importance sampling with the bootstrap resampling step. Last, but not least, particle filters can be seen as an acceptance-rejection methodology equipped with a recycling mechanism.
Mean-field particle simulation
The general probabilistic principle
The nonlinear filtering evolution can be interpreted as a dynamical system in the set of probability measures of the form where stands for some mapping from the set of probability distributions into itself. For instance, the evolution of the one-step optimal predictor
satisfies a nonlinear evolution starting with the probability distribution . One of the simplest ways to approximate these probability measures is to start with N independent random variables with common probability distribution . Suppose we have defined a sequence of N random variables such that
At the next step we sample N (conditionally) independent random variables with common law .
A particle interpretation of the filtering equation
We illustrate this mean-field particle principle in the context of the evolution of the one step optimal predictors
For k = 0 we use the convention .
By the law of large numbers, we have
in the sense that
for any bounded function . We further assume that we have constructed a sequence of particles at some rank k such that
in the sense that for any bounded function we have
In this situation, replacing by the empirical measure in the evolution equation of the one-step optimal filter stated in () we find that
Notice that the right hand side in the above formula is a weighted probability mixture
where stands for the density evaluated at , and stands for the density evaluated at for
Then, we sample N independent random variables with common probability density so that
Iterating this procedure, we design a Markov chain such that
Notice that the optimal filter is approximated at each time step k using the Bayes' formulae
The terminology "mean-field approximation" comes from the fact that we replace at each time step the probability measure by the empirical approximation . The mean-field particle approximation of the filtering problem is far from being unique. Several strategies are developed in the books.
Some convergence results
The analysis of the convergence of particle filters was started in 1996 and continued in 2000 in the book and the series of articles. More recent developments can be found in the books. When the filtering equation is stable (in the sense that it corrects any erroneous initial condition), the bias and the variance of the particle estimates
are controlled by the non-asymptotic uniform estimates
for any function f bounded by 1, and for some finite constants. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and some finite constant c. The same results are satisfied if we replace the one step optimal predictor by the optimal filter approximation.
Genealogical trees and Unbiasedness properties
Genealogical tree based particle smoothing
Tracing back in time the ancestral lines
of the individuals and at every time step k, we also have the particle approximations
These empirical approximations are equivalent to the particle integral approximations
for any bounded function F on the random trajectories of the signal. The evolution of the genealogical tree coincides with a mean-field particle interpretation of the evolution equations associated with the posterior densities of the signal trajectories. For more details on these path space models, we refer to the books.
Unbiased particle estimates of likelihood functions
We use the product formula
with
and the conventions and for k = 0. Replacing by the empirical approximation
in the above displayed formula, we design the following unbiased particle approximation of the likelihood function
with
where stands for the density evaluated at . The design of this particle estimate and the proof of its unbiasedness property date back to the 1996 article. Refined variance estimates can be found in later work.
Backward particle smoothers
Using Bayes' rule, we have the formula
Notice that
This implies that
Replacing the one-step optimal predictors by the particle empirical measures
we find that
We conclude that
with the backward particle approximation
The probability measure
is the probability of the random paths of a Markov chain running backward in time from time k=n to time k=0, and evolving at each time step k in the state space associated with the population of particles
Initially (at time k=n) the chain chooses randomly a state with the distribution
From time k to the time (k-1), the chain starting at some state for some at time k moves at time (k-1) to a random state chosen with the discrete weighted probability
In the above displayed formula, stands for the conditional distribution evaluated at . In the same vein, and stand for the conditional densities and evaluated at and These models allow one to reduce integration with respect to the densities to matrix operations with respect to the Markov transitions of the chain described above. For instance, for any function we have the particle estimates
where
This also shows that if
then
Some convergence results
We shall assume that filtering equation is stable, in the sense that it corrects any erroneous initial condition.
In this situation, the particle approximations of the likelihood functions are unbiased and the relative variance is controlled by
for some finite constant c. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and for some finite constant c.
The bias and the variance of the particle estimates based on the ancestral lines of the genealogical trees
are controlled by the non-asymptotic uniform estimates
for any function F bounded by 1, and for some finite constants. In addition, for any :
for some finite constants related to the asymptotic bias and variance of the particle estimate, and for some finite constant c. The same type of bias and variance estimates hold for the backward particle smoothers. For additive functionals of the form
with
with functions bounded by 1, we have
and
for some finite constants. More refined estimates, including exponentially small probabilities of error, are developed in the literature.
Sequential Importance Resampling (SIR)
Monte Carlo filter and bootstrap filter
Sequential Importance Resampling (SIR), Monte Carlo filtering (Kitagawa 1993), the bootstrap filtering algorithm (Gordon et al. 1993) and single-distribution resampling (Bejuri W.M.Y.B et al. 2017) are commonly applied filtering algorithms, which approximate the filtering probability density by a weighted set of N samples
The importance weights are approximations to the relative posterior probabilities (or densities) of the samples such that
Sequential importance sampling (SIS) is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average
For a finite set of samples, the algorithm performance is dependent on the choice of the proposal distribution
.
The "optimal" proposal distribution is given as the target distribution
This particular choice of proposal transition has been proposed by P. Del Moral in 1996 and 1998. When it is difficult to sample transitions according to the distribution one natural strategy is to use the following particle approximation
with the empirical approximation
associated with N (or any other large number of samples) independent random samples with the conditional distribution of the random state given . The consistency of the particle filter resulting from this approximation, and other extensions, are developed in the literature. In the above display, stands for the Dirac measure at a given state a.
However, the transition prior probability distribution is often used as importance function, since it is easier to draw particles (or samples) and perform subsequent importance weight calculations:
Sequential Importance Resampling (SIR) filters with transition prior probability distribution as importance function are commonly known as bootstrap filter and condensation algorithm.
Resampling is used to avoid the problem of the degeneracy of the algorithm, that is, to avoid the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the proper choice of resampling method. The stratified resampling proposed by Kitagawa (1993) is optimal in terms of variance.
A single step of sequential importance resampling is as follows (a code sketch follows the listed steps):
1) For draw samples from the proposal distribution
2) For update the importance weights up to a normalizing constant:
Note that when we use the transition prior probability distribution as the importance function,
this simplifies to the following:
3) For compute the normalized importance weights:
4) Compute an estimate of the effective number of particles as
This criterion reflects the variance of the weights. Other criteria can be found in the article, including their rigorous analysis and central limit theorems.
5) If the effective number of particles is less than a given threshold , then perform resampling:
a) Draw N particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.
b) For set
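A compact implementation of steps 1)–5) might look as follows. Here propose and weight_fn stand in for the model's proposal sampler and (unnormalised) weight update; with the transition prior as proposal, weight_fn reduces to the observation likelihood and the scheme is the bootstrap filter. This is a hedged sketch under those assumptions, not a reference implementation.

```python
import numpy as np

def sir_step(particles, weights, y, propose, weight_fn, ess_threshold, rng):
    """One Sequential Importance Resampling step (steps 1-5 above)."""
    N = len(particles)
    particles = np.array([propose(x, y, rng) for x in particles])        # step 1
    weights = weights * np.array([weight_fn(x, y) for x in particles])   # step 2
    weights = weights / weights.sum()                                    # step 3
    ess = 1.0 / np.sum(weights**2)                 # step 4: effective sample size
    if ess < ess_threshold:                        # step 5: resample if degenerate
        idx = rng.choice(N, size=N, p=weights)     # 5a: multinomial resampling
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)              # 5b: reset to uniform weights
    return particles, weights

# A filtered-mean estimate after each step is the weighted average
# np.dot(weights, particles) (scalar states).
```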
The term "Sampling Importance Resampling" is also sometimes used when referring to SIR filters, but the term Importance Resampling is more accurate because the word "resampling" implies that the initial sampling has already been done.
Sequential importance sampling (SIS)
This is the same as sequential importance resampling, but without the resampling stage.
"Direct version" algorithm
The "direct version" algorithm is rather simple (compared to other particle filtering algorithms) and it uses composition and rejection. To generate a single sample x at k from :
1) Set n = 0 (This will count the number of particles generated so far)
2) Uniformly choose an index i from the range
3) Generate a test from the distribution with
4) Generate the probability of using from where is the measured value
5) Generate another uniform u from where
6) Compare u and
6a) If u is larger, then repeat from step 2
6b) If u is smaller, then save as and increment n
7) If n == N then quit
The goal is to generate P "particles" at k using only the particles from . This requires that a Markov equation can be written (and computed) to generate a based only upon . This algorithm uses the composition of the P particles from to generate a particle at k and repeats (steps 2–6) until P particles are generated at k.
This can be more easily visualized if x is viewed as a two-dimensional array. One dimension is k and the other dimension is the particle number. For example, would be the ith particle at and can also be written (as done above in the algorithm). Step 3 generates a potential based on a randomly chosen particle () at time and rejects or accepts it in step 6. In other words, the values are generated using the previously generated .
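A sketch of steps 1)–7) follows; it assumes the observation likelihood is bounded by a known constant like_max, so that the uniform draw in step 5 yields a valid accept/reject test. All function names here are placeholders, not standard API.

```python
import numpy as np

def direct_version(prev_particles, y, transition, likelihood, N, like_max, rng):
    """Composition/rejection sampler for one time step (steps 1-7 above)."""
    prev_particles = np.asarray(prev_particles)
    out, n = [], 0                                 # step 1: nothing accepted yet
    while n < N:                                   # step 7: stop after N particles
        i = rng.integers(len(prev_particles))      # step 2: uniform random index
        x = transition(prev_particles[i], rng)     # step 3: candidate at time k
        p = likelihood(x, y)                       # step 4: likelihood of candidate
        u = rng.uniform(0.0, like_max)             # step 5: uniform draw
        if u <= p:                                 # step 6: accept or reject
            out.append(x)                          # 6b: save and increment n
            n += 1
    return np.array(out)
```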
Applications
Particle filters and Feynman-Kac particle methodologies find application in several contexts, as an effective means of tackling noisy observations or strong nonlinearities, such as:
Bayesian inference, machine learning, risk analysis and rare event sampling
Bioinformatics
Computational science
Economics, financial mathematics and mathematical finance: particle filters can perform simulations which are needed to compute the high-dimensional and/or complex integrals related to problems such as dynamic stochastic general equilibrium models in macro-economics and option pricing
Engineering
Infectious disease epidemiology where they have been applied to a number of epidemic forecasting problems, for example predicting seasonal influenza epidemics
Fault detection and isolation: in observer-based schemes, a particle filter can forecast the expected sensor output, enabling fault isolation
Molecular chemistry and computational physics
Pharmacokinetics
Phylogenetics
Robotics, artificial intelligence: Monte Carlo localization is a de facto standard in mobile robot localization
Signal and image processing: visual localization, tracking, feature recognition
Other particle filters
Auxiliary particle filter
Cost Reference particle filter
Exponential Natural Particle Filter
Feynman-Kac and mean-field particle methodologies
Gaussian particle filter
Gauss–Hermite particle filter
Hierarchical/Scalable particle filter
Nudged particle filter
Particle Markov-Chain Monte-Carlo, see e.g. pseudo-marginal Metropolis–Hastings algorithm.
Rao–Blackwellized particle filter
Regularized auxiliary particle filter
Rejection-sampling based optimal particle filter
Unscented particle filter
See also
Ensemble Kalman filter
Generalized filtering
Genetic algorithm
Mean-field particle methods
Monte Carlo localization
Moving horizon estimation
Recursive Bayesian estimation
References
Bibliography
Del Moral, Pierre (2004). Feynman-Kac formulae. Genealogical and interacting particle approximations. Springer. p. 575. "Series: Probability and Applications"
Del Moral, Pierre (2013). Mean field simulation for Monte Carlo integration. Chapman & Hall/CRC Press. p. 626. "Monographs on Statistics & Applied Probability"
External links
Feynman–Kac models and interacting particle algorithms (a.k.a. Particle Filtering) Theoretical aspects and a list of application domains of particle filters
Sequential Monte Carlo Methods (Particle Filtering) homepage on University of Cambridge
Dieter Fox's MCL Animations
Rob Hess' free software
SMCTC: A Template Class for Implementing SMC algorithms in C++
Java applet on particle filtering
vSMC : Vectorized Sequential Monte Carlo
Particle filter explained in the context of self driving car
Monte Carlo methods
Computational statistics
Nonlinear filters
Robot control
Statistical mechanics
Sampling techniques
Stochastic simulation | Particle filter | [
"Physics",
"Mathematics",
"Engineering"
] | 6,360 | [
"Robotics engineering",
"Monte Carlo methods",
"Applied mathematics",
"Control theory",
"Computational mathematics",
"Computational physics",
"Robot control",
"Computational statistics",
"Statistical mechanics",
"Dynamical systems"
] |
1,397,202 | https://en.wikipedia.org/wiki/Sodium%20chlorate | Sodium chlorate is an inorganic compound with the chemical formula NaClO3. It is a white crystalline powder that is readily soluble in water. It is hygroscopic. It decomposes above 300 °C to release oxygen and leaves sodium chloride. Several hundred million tons are produced annually, mainly for applications in bleaching pulp to produce high brightness paper.
Synthesis
Industrially, sodium chlorate is produced by the electrolysis of concentrated sodium chloride solutions. All other processes are obsolete. The sodium chlorate process is not to be confused with the chloralkali process, which is an industrial process for the electrolytic production of sodium hydroxide and chlorine gas.
The overall reaction can be simplified to the equation:
NaCl + 3 H2O → NaClO3 + 3 H2
First, chloride is oxidised to form intermediate hypochlorite, ClO−, which undergoes further oxidation to chlorate along two competing reaction paths: (1) Anodic chlorate formation at the boundary layer between the electrolyte and the anode, and (2) Autoxidation of hypochlorite in the bulk electrolyte.
During electrolysis, hydrogen and sodium hydroxide are formed at the cathode and chloride ions are discharged at the anode (a mixed metal oxide electrode is often used). The evolved chlorine does not escape as a gas but undergoes hydrolysis:
Cl2 + H2O ⇌ HClO + H+ + Cl−
The hydrolysis of chlorine is considered to be fast. The formation of H+ ions should make the boundary layer at the anode strongly acidic, and this is observed at low chloride concentrations. However, large concentrations of chloride, as they occur in industrial chlorate cells, shift the hydrolysis equilibrium to the left. At the boundary layer the concentration of H+ is not high enough to permit diffusion into the bulk electrolyte. Therefore, hydrogen is transported away from the anode mostly as hypochlorous acid rather than H+. The hypochlorous acid dissociates in the bulk electrolyte, where the pH is high, and the hypochlorite ion diffuses back to the anode. More than two thirds of the hypochlorite is consumed by buffering before reaching the anode. The remainder is discharged at the anode to form chlorate and oxygen:
6 ClO− + 3 H2O → 2 ClO3− + 4 Cl− + 6 H+ + 3/2 O2 + 6 e−
The autoxidation of hypochlorous acid in the bulk electrolyte proceeds according to the simplified overall equation:
3 HClO → ClO3− + 2 Cl− + 3 H+
It is preceded by the dissociation of a part of the hypochlorous acid involved:
HClO ⇌ ClO− + H+
The reaction requires a certain distance from the anode to occur to a significant degree, where the electrolyte is sufficiently buffered by the hydroxide formed at the cathode. The hypochlorite then reacts with the rest of the acid:
2 HClO + ClO− → ClO3− + 2 Cl− + 2 H+
In addition to anode distance the autoxidation also depends on temperature and pH. A typical cell operates at temperatures between 80 °C and 90 °C and at a pH of 6.1–6.4.
Independent of the reaction route, the discharge of 6 mol of chloride is required to yield 1 mol of chlorate. However, the anodic oxidation route requires 50% additional electric energy. Therefore, industrial cells are optimised to favour autoxidation. Chlorate formation at the anode is treated as a loss reaction and is minimised by design.
Other loss reactions also decrease the current efficiency and must be suppressed in industrial systems. The main loss occurs by the back reduction of hypochlorite at the cathode. The reaction is suppressed by the addition of a small amount of dichromate (1–5 g/L) to the electrolyte. A porous film of chromium hydroxide is formed by cathodic deposition. The film impedes the diffusion of anions to the cathode, whereas the access of cations and their reduction is facilitated. The film stops growing on its own after it reaches a certain thickness.
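The six-electron requirement fixes a theoretical minimum charge per unit mass of chlorate; the back-of-envelope Python sketch below works it out, with the 3 V cell voltage an assumed order-of-magnitude figure rather than a quoted industrial value.

```python
F = 96485.0                      # Faraday constant, C/mol
M_NACLO3 = 106.44                # molar mass of NaClO3, g/mol
moles_per_kg = 1000.0 / M_NACLO3
charge = 6 * F * moles_per_kg    # 6 mol chloride discharged per mol chlorate
cell_voltage = 3.0               # V, illustrative assumption
energy_kwh = charge * cell_voltage / 3.6e6
print(f"{charge:.2e} C, ~{energy_kwh:.1f} kWh per kg at 100% current efficiency")
```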
Uses
The main commercial use for sodium chlorate is for making chlorine dioxide (ClO2). The largest application of ClO2, which accounts for about 95% of the use of chlorate, is in bleaching of pulp. All other, less important chlorates are derived from sodium chlorate, usually by salt metathesis with the corresponding chloride. All perchlorate compounds are produced industrially by the oxidation of solutions of sodium chlorate by electrolysis.
Herbicides
Sodium chlorate is used as a non-selective herbicide. It is considered phytotoxic to all green plant parts. It can also kill through root absorption.
Sodium chlorate may be used to control a variety of plants including morning glory, Canada thistle, Johnson grass, bamboo, ragwort, and St John's wort. The herbicide is mainly used on non-crop land for spot treatment and for total vegetation control on areas including roadsides, fenceways, and ditches. Sodium chlorate is also used as a defoliant and desiccant for:
Corn
Cotton
Dry beans
Flax
Grain sorghum
Peppers
Rice
Safflower
Southern peas
Soybeans
Sunflowers
If used in combination with atrazine, it increases the persistence of the effect. If used in combination with 2,4-D, performance is improved. Sodium chlorate has a soil sterilant effect. Mixing with other herbicides in aqueous solution is possible to some extent, so long as they are not susceptible to oxidation.
The sale of sodium chlorate as a weedkiller was banned in the European Union in 2009 citing health dangers, with existing stocks to be used within the following year.
Chemical oxygen generation
Chemical oxygen generators, such as those in commercial aircraft, provide emergency oxygen to passengers to protect them from drops in cabin pressure. Oxygen is generated by high-temperature decomposition of sodium chlorate:
2 NaClO3 → 2 NaCl + 3 O2
Heat required to initiate this reaction is generated by oxidation of a small amount of iron powder mixed with the sodium chlorate, and the reaction consumes less oxygen than is produced. Barium peroxide (BaO2) is used to absorb the chlorine that is a minor product in the decomposition.
An ignitor charge is activated by pulling on the emergency mask. Similarly, the Solidox welding system used pellets of sodium chlorate mixed with combustible fibers to generate oxygen.
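The decomposition stoichiometry gives a quick estimate of the ideal oxygen yield per kilogram of chlorate, ignoring the iron fuel and the small chlorine side product absorbed by the BaO2; a hedged arithmetic sketch:

```python
M_NACLO3, M_O2 = 106.44, 32.00      # molar masses, g/mol
mol_chlorate = 1000.0 / M_NACLO3    # moles of NaClO3 per kilogram
mol_o2 = 1.5 * mol_chlorate         # 3 mol O2 per 2 mol NaClO3
print(f"{mol_o2 * M_O2:.0f} g of O2 (about {mol_o2 * 22.4:.0f} L at 0 degC, 1 atm) per kg")
```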
Oxygenless combustion
Sodium chlorate can be mixed with sucrose to make a highly energetic fuel, similar to gunpowder, that burns in airtight spaces. This is the reaction:
C12H22O11 + 8 NaClO3 → 12 CO2 + 11 H2O + 8 NaCl
However, for this use sodium chlorate has mostly been replaced by potassium chlorate.
Organic synthesis
Sodium chlorate can be used with hydrochloric acid (or with sulfuric acid and sodium chloride, whose reaction generates HCl) to chlorinate aromatic compounds without the use of organic solvents. In this case its function is to oxidize the HCl to give either HOCl or Cl2 (depending on the pH) in situ; these are the active chlorinating agents.
When combined with a vanadium pentoxide catalyst, it serves as an oxidant for a variety of organic compounds. Examples include the oxidation of hydroquinone to quinone, and of furfural to a mixture of maleic and fumaric acid.
Toxicity in humans
Sodium chlorate is toxic: "doses of a few grams of chlorate are lethal" (oral LD50 in rats: 1,200 mg/kg). The oxidative effect on hemoglobin leads to methemoglobin formation, which is followed by denaturation of the globin protein and a cross-linking of erythrocyte membrane proteins, with resultant damage to the membrane enzymes. This leads to increased permeability of the membrane and severe hemolysis. The denaturation of hemoglobin overwhelms the capacity of the G6PD metabolic pathway. In addition, this enzyme is directly denatured by chlorate.
Acute severe hemolysis results, with multi-organ failure, including DIC and kidney failure. In addition, there is direct toxicity to the proximal renal tubule. Treatment consists of exchange transfusion, peritoneal dialysis, or hemodialysis.
Formulations
Sodium chlorate comes in dust, spray and granule formulations. Mixtures of chlorates and organic compounds pose a severe risk of explosion.
Marketed formulations contain a fire retardant. Most commercially available chlorate weedkillers contain approximately 53% sodium chlorate with the balance being a fire depressant such as sodium metaborate or ammonium phosphates.
Trade names
Sodium chlorate is the active ingredient in a variety of commercial herbicides. Some trade names for products containing sodium chlorate include Atlacide, Defol, De-Fol-Ate, Drop-Leaf, Fall, Harvest-Aid, Kusatol, Leafex, and Tumbleaf. The compound may be used in combination with other herbicides such as atrazine, 2,4-D, bromacil, diuron, and sodium metaborate.
Sodium chlorate was an extensively used weed killer within the EU, until 2009 when it was withdrawn after a decision made under terms of EU Regulations. Its use as a herbicide outside the EU remains unaffected, as does its use in other non-herbicidal applications, such as in the production of chlorine dioxide biocides and for pulp and paper bleaching.
Cultural references
Historian James Watson of Massey University in New Zealand wrote a widely reported article, "The Significance of Mr. Richard Buckley's Exploding Trousers" about accidents with sodium chlorate when used as a herbicide to control ragwort in the 1930s. This later won him an Ig Nobel Prize in 2005, and was the basis for the May 2006 "Exploding Pants" episode of MythBusters.
See also
Sodium chloride
References
Further reading
"Chlorate de potassium. Chlorate de sodium", Fiche toxicol. n° 217, Paris:Institut national de recherche et de sécurité, 2000. 4pp.
External links
International Chemical Safety Card 1117
Chlorates
Sodium compounds
Desiccants
Pyrotechnic oxidizers
Oxidizing agents | Sodium chlorate | [
"Physics",
"Chemistry"
] | 2,123 | [
"Redox",
"Chlorates",
"Oxidizing agents",
"Salts",
"Desiccants",
"Materials",
"Matter"
] |
1,397,700 | https://en.wikipedia.org/wiki/Rectilinear%20propagation | Rectilinear propagation describes the tendency of electromagnetic waves (light) to travel in a straight line. Light does not deviate when travelling through a homogeneous medium, which has the same refractive index throughout; otherwise, light experiences refraction. Even though a wave front may be bent, (e.g. the waves created by a rock hitting a pond) the individual rays are moving in straight lines. Rectilinear propagation was discovered by Pierre de Fermat.
Rectilinear propagation is only an approximation, valid over short distances: light is a wave and tends to spread out as it travels. The distance over which the approximation holds depends on the wavelength and on the setting being considered. For everyday purposes, it remains valid as long as the refractive index of the medium is constant.
The more general theory for how light behaves is described by Maxwell's equations.
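To get a feel for the scale at which straight-line propagation fails, the far-field diffraction angle of light passing through an aperture of width d is roughly θ ≈ λ/d. The Python sketch below is an illustrative estimate only; the wavelength, aperture, and distance are arbitrary assumed values, and exact prefactors depend on the beam and aperture shape:

```python
# Rough diffraction-spreading estimate: theta ~ lambda / d (radians).
# Illustrative only; exact prefactors depend on beam and aperture shape.

def beam_spread_m(wavelength_m: float, aperture_m: float, distance_m: float) -> float:
    """Approximate extra width a beam gains after travelling distance_m."""
    theta = wavelength_m / aperture_m  # far-field spreading angle estimate
    return theta * distance_m

# Green light (500 nm) through a 1 mm aperture, over 10 m:
print(f"{beam_spread_m(500e-9, 1e-3, 10.0) * 1e3:.1f} mm")  # ~5.0 mm
```

Over a few metres the spread is negligible and rays look perfectly straight; over kilometres the same beam widens by metres, so the rectilinear picture breaks down.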
Proof
Take three pieces of cardboard, A, B and C, of the same size. Make a pinhole at the centre of each. Place the pieces upright, in that order, so that the holes in A, B and C lie on the same straight line. Place a luminous source such as a candle near cardboard A and look through the hole in cardboard C: the candle flame is visible, implying that light rays travel along the straight line ABC. When one of the pieces of cardboard is slightly displaced, the candle flame is no longer visible, meaning that the light emitted by the candle cannot bend to reach the observer's eye. This demonstrates that light travels along a straight path, i.e. the rectilinear propagation of light.
See also
Diffraction
Plane wave
References
Waves | Rectilinear propagation | [
"Physics",
"Materials_science"
] | 363 | [
"Physical phenomena",
"Materials science stubs",
"Waves",
"Motion (physics)",
"Electromagnetism stubs"
] |
1,398,187 | https://en.wikipedia.org/wiki/Cytoplasmic%20hybrid | A cytoplasmic hybrid (or cybrid, a portmanteau of the two words) is a eukaryotic cell line produced by the fusion of a whole cell with a cytoplast. Cytoplasts are enucleated cells. This enucleation can be effected by simultaneous application of centrifugal force and treatment of the cell with an agent that disrupts the cytoskeleton. A special case of cybrid formation involves the use of rho-zero cells as the whole cell partner in the fusion. Rho-zero cells are cells which have been depleted of their own mitochondrial DNA by prolonged incubation with ethidium bromide, a chemical which inhibits mitochondrial DNA replication. The rho-zero cells do retain mitochondria and can grow in rich culture medium with certain supplements. They do retain their own nuclear genome. A cybrid is then a hybrid cell which mixes the nuclear genes from one cell with the mitochondrial genes from another cell. Using this powerful tool, it makes it possible to dissociate contribution from the mitochondrial genes vs that of the nuclear genes.
Cybrids are valuable in mitochondrial research and have been used to provide suggestive evidence of mitochondrial involvement in Alzheimer's disease, Parkinson's disease, and other conditions.
Legal status in United Kingdom
Research utilizing cybrid embryos has been hotly contested due to the ethical implications of further cybrid research. In 2008, the House of Lords passed the Human Fertilisation and Embryology Act 2008, which allows the creation of mixed human-animal embryos for medical purposes only. Such cybrids are 99.9% human and 0.1% animal. A cybrid may be kept for a maximum of 14 days, owing to the development of the brain and spinal cord, after which time the cybrid must be destroyed. During the two-week period, stem cells may be harvested from the cybrid, for research or medical purposes. Under no circumstances may a cybrid be implanted into a human uterus.
References
Further reading
Human Fertilisation and Embryology Act at the Wellcome Trust
External links
The Human Fertilisation and Embryology Act 2008, as amended from the National Archives.
The Human Fertilisation and Embryology Act 2008, as originally enacted from the National Archives.
Explanatory notes to the Human Fertilisation and Embryology Act 2008.
Eukaryote biology
Articles containing video clips | Cytoplasmic hybrid | [
"Biology"
] | 519 | [
"Eukaryotes",
"Eukaryote biology"
] |
1,398,487 | https://en.wikipedia.org/wiki/Super-resolution%20imaging | Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced.
In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g. SAMV) are employed to achieve SR beyond the standard periodogram algorithm.
Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy.
Basic concepts
Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles:
Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electromagnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations.
Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No laws of physics are broken when a spatial frequency band beyond the cut-off spatial frequency is swapped for one inside it: this has long been implemented in dark-field microscopy. Nor are information-theoretical rules broken when superimposing several bands, disentangling them in the received image needs assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another.
Information: When the term super-resolution is used in techniques of inferring object details from statistical treatment of the image within standard resolution limits, for example, averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant).
Resolution and localization: True resolution involves the distinction of whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process but it did not catch on, and the high-precision localization procedure is typically referred to as super-resolution.
The technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as super-resolution exploit these principles to the fullest while always staying within the bounds imposed by the laws of physics and information theory.
Techniques
Optical or diffractive super-resolution
Substituting spatial-frequency bands: Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.
Multiplexing spatial-frequency bands
An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that need not even be within the passband, is superimposed on the target. The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. moiré fringes, and carries information about target detail which simple unstructured illumination does not. The “superresolved” components, however, need disentangling to be revealed. For an example, see structured illumination (figure to left).
Multiple parameter use within traditional diffraction limit
If a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit the other beyond it. Both would use normal passband transmission but are then separately decoded to reconstitute target structure with extended resolution.
Probing near-field electromagnetic disturbance
The usual discussion of super-resolution involves conventional imagery of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of its source, which has superior resolution properties; see also evanescent waves and the development of the new superlens.
Geometrical or image-processing super-resolution
Multi-exposure image noise reduction
When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right.
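A minimal sketch of this idea in Python (illustrative, with synthetic data; not from the original article): averaging N independent noisy exposures of the same static scene reduces the noise standard deviation by a factor of about √N.

```python
import numpy as np

# Multi-exposure noise reduction: average N noisy frames of a static scene.
# Noise standard deviation shrinks roughly as 1/sqrt(N).

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))  # "true" static image
noise_std = 0.2

frames = [scene + rng.normal(0.0, noise_std, scene.shape) for _ in range(100)]
average = np.mean(frames, axis=0)

print(f"single-frame noise: {np.std(frames[0] - scene):.3f}")  # ~0.200
print(f"100-frame average:  {np.std(average - scene):.3f}")    # ~0.020
```

Note that this recovers detail only down to the diffraction limit: the averaging trades the assumption of an unchanging target for a better signal-to-noise ratio, as described above.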
Single-frame deblurring
Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
Sub-pixel image localization
The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels (see figure on the left). Provided that there is enough light, this can be achieved with arbitrary precision, very much better than pixel width of the detecting apparatus and the resolution limit for the decision of whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as super-resolution microscopy, e.g. stochastic optical reconstruction microscopy (STORM), where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.
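A minimal sketch of centroid-based localization (illustrative; not the actual STORM pipeline, and the spot parameters are arbitrary assumptions): the centre of gravity of the intensity distribution is computed over the pixel grid, giving a position estimate much finer than the pixel spacing.

```python
import numpy as np

# Sub-pixel localization: centroid (center of gravity) of a light spot.
# Valid only under the presupposition that all light comes from one source.

def centroid(image: np.ndarray) -> tuple:
    """Return the (row, col) center of gravity of the intensity distribution."""
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return (rows * image).sum() / total, (cols * image).sum() / total

# Synthetic Gaussian spot centred at (20.3, 31.7) on a 64x64 pixel grid:
r, c = np.indices((64, 64))
spot = np.exp(-((r - 20.3) ** 2 + (c - 31.7) ** 2) / (2 * 2.0 ** 2))

print(centroid(spot))  # ~(20.3, 31.7): recovered to well below one pixel
```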
Bayesian induction beyond traditional diffraction limit
Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object. The classical example is Toraldo di Francia's proposition of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
The approach can take the form of extrapolating the image in the frequency domain, by assuming that the object is an analytic function and that we can exactly know the function values in some interval. This method is severely limited by the ever-present noise in digital imaging systems, but it can work for radar, astronomy, microscopy or magnetic resonance imaging. More recently, a fast single-image super-resolution algorithm based on a closed-form solution to the underlying optimization problems has been proposed and demonstrated to accelerate most of the existing Bayesian super-resolution methods significantly.
Aliasing
Geometrical SR reconstruction algorithms are possible if and only if the input low resolution images have been under-sampled and therefore contain aliasing. Because of this aliasing, the high-frequency content of the desired reconstruction image is embedded in the low-frequency content of each of the observed images. Given a sufficient number of observation images, and if the set of observations vary in their phase (i.e. if the images of the scene are shifted by a sub-pixel amount), then the phase information can be used to separate the aliased high-frequency content from the true low-frequency content, and the full-resolution image can be accurately reconstructed.
In practice, this frequency-based approach is not used for reconstruction, but even in the case of spatial approaches (e.g. shift-add fusion), the presence of aliasing is still a necessary condition for SR reconstruction.
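The role of aliasing can be illustrated with a toy shift-add fusion in Python (an idealized sketch, not a production SR reconstruction: it assumes exactly known shifts, no blur, and no noise): each under-sampled frame carries a different sub-pixel phase, and interleaving the frames onto a common high-resolution grid recovers the full-resolution image.

```python
import numpy as np

# Toy shift-add fusion. Each low-resolution frame samples the scene with a
# different sub-pixel shift; placing the samples back at their known offsets
# on a high-resolution grid recovers the scene exactly in this ideal setting.

s = 4                                       # upscaling factor
rng = np.random.default_rng(1)
hr = rng.uniform(0.0, 1.0, size=(64, 64))   # ground-truth high-res scene

# Acquire s*s aliased low-resolution frames, one per shift (dy, dx):
frames = {(dy, dx): hr[dy::s, dx::s] for dy in range(s) for dx in range(s)}

# Reconstruction: interleave each frame at its known sub-pixel offset.
sr = np.zeros_like(hr)
for (dy, dx), lr in frames.items():
    sr[dy::s, dx::s] = lr

print(np.allclose(sr, hr))  # True: aliasing made the recovery possible
```

With fewer shifts than s², or with unknown shifts, blur and noise, the reconstruction becomes an estimation problem, which is what practical SR algorithms address.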
Technical implementations
There are many both single-frame and multiple-frame variants of SR. Multiple-frame SR uses the sub-pixel shifts between multiple low resolution images of the same scene. It creates an improved resolution image fusing information from all low resolution images, and the created higher resolution images are better descriptions of the scene. Single-frame SR methods attempt to magnify the image without producing blur. These methods use other parts of the low resolution images, or other unrelated images, to guess what the high-resolution image should look like. Algorithms can also be divided by their domain: frequency or space domain. Originally, super-resolution methods worked well only on grayscale images, but researchers have found methods to adapt them to color camera images. Recently, the use of super-resolution for 3D data has also been shown.
Research
There is promising research on using deep convolutional networks to perform super-resolution. In particular, work has been demonstrated showing the transformation of a 20x microscope image of pollen grains into a 1500x scanning electron microscope image using this approach. While the technique can increase the information content of an image, there is no guarantee that the upscaled features exist in the original image, and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use.
See also
Optical resolution
Oversampling
Video super-resolution
Single-particle trajectory
Superoscillation
References
Image processing
Signal processing
Imaging | Super-resolution imaging | [
"Technology",
"Engineering"
] | 2,080 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
1,398,687 | https://en.wikipedia.org/wiki/Deep-ocean%20Assessment%20and%20Reporting%20of%20Tsunamis | Deep-ocean Assessment and Reporting of Tsunamis (DART) is a component of an enhanced tsunami warning system.
By logging changes in seafloor temperature and pressure, and transmitting the data via a surface buoy to a ground station by satellite, DART enables instant, accurate tsunami forecasts. In Standard Mode, the system logs the data at 15-minute intervals, and in Event Mode, every 15 seconds. A 2-way communication system allows the ground station to switch DART into Event Mode whenever detailed reports are needed.
Stations
Each DART station consists of a surface buoy and a seafloor bottom pressure recording (BPR) package that detects water pressure changes caused by tsunamis. The surface buoy receives transmitted information from the BPR via an acoustic link and then transmits data to a satellite, which retransmits the data to ground stations for immediate dissemination to NOAA's Tsunami Warning Centers, NOAA's National Data Buoy Center, and NOAA's Pacific Marine Environmental Laboratory (PMEL).
The Iridium commercial satellite phone network is used for communication with 31 of the buoys.
When on-board software identifies a possible tsunami, the station leaves standard mode and begins transmitting in event mode. In standard mode, the station reports water temperature and pressure (which are converted to sea-surface height, not unlike a depth gauge or a pressure tide gauge) every 15 minutes. At the start of event mode, the buoy reports measurements every 15 seconds for several minutes, followed by 1-minute averages for four hours.
The first-generation DART I stations had one-way communication ability, and relied solely on the software's ability to detect a tsunami to trigger event mode and rapid data transmission. In order to avoid false positives, the detection threshold was set relatively high, presenting the possibility that a tsunami with a low amplitude could fail to trigger the station.
The second-generation DART II is equipped for two-way communication, allowing tsunami forecasters to place the station in event mode in anticipation of a tsunami's arrival.
Deep-ocean Assessment and Reporting of Tsunamis is officially abbreviated and trademarked as DART.
Background
The National Oceanic and Atmospheric Administration (NOAA) has placed Deep-ocean Assessment and Reporting of Tsunami stations in areas with a history of generating large tsunamis, to ensure that detection is as fast as possible. The first six tsunami detection buoys, placed along the northern Pacific Ocean coast, were completed in 2001. In 2005, United States president George W. Bush announced a two-year, $3.5 million plan to install tsunami-detecting buoys in the Atlantic Ocean and the Caribbean in order to expand the nation's capability to detect tsunamis. With the Pacific Ocean generating 85 percent of the world's tsunamis, the majority of new tsunami-detecting buoy equipment will be installed around the Pacific rim, while only seven buoys will be placed along the Atlantic and Caribbean coasts: although tsunamis are rare in the Atlantic, deadly ones have been recorded there. Roughly $13.8 million of the government's funding was used to procure and install 32 pressure sensors on the ocean bottom to detect tsunamis and collect data such as the height and speed of an approaching tsunami. According to John H. Marburger of the White House's Office of Science and Technology Policy, the proposed system should provide the United States' Tsunami Warning Centers with nearly one hundred percent coverage of any approaching tsunami while reducing false alarms to nearly zero.
During this period of improvements and upgrades, roughly three quarters of tsunami warnings were found to be unnecessary and a waste of money. A few years later, in 2008, roughly 40 tsunami detection buoys had been placed in the Pacific Ocean by NOAA. The upgraded DART buoys were developed above all to improve the timing of tsunami detection: earlier detection means more time to save lives, issue warning guidance, and coordinate internationally.
History
The DART buoy technology was developed at PMEL, with the first prototype deployed off the coast of Oregon in 1995. In 2004, the DART® stations were transitioned from research at PMEL to operational service at the National Data Buoy Center (NDBC), and PMEL and NDBC received the Department of Commerce Gold Medal "for the creation and use of a new moored buoy system to provide accurate and timely warning information on tsunamis".
In the wake of the 2004 Indian Ocean earthquake and its subsequent tsunamis, plans were announced to deploy an additional 32 DART II buoys around the world. These would include stations in the Caribbean and Atlantic Ocean for the first time.
The United States' array was completed in 2008 totaling 39 stations in the Pacific Ocean, Atlantic Ocean, and Caribbean Sea. The international community has also taken an interest in DART buoys and as of 2009 Australia, Chile, Indonesia and Thailand have deployed DART buoys to use as part of each country's tsunami warning system.
Overview
Deep-ocean Assessment and Reporting of Tsunami (DART) buoy systems are made up of three parts. There is a bottom pressure recorder (BPR) anchored to the bottom of the sea floor. A moored surface buoy connects to the bottom pressure recorder via an acoustic transmission link. The link sends data from the anchored pressure recorder to the surface buoy. The surface buoy sends the data by radio to satellites such as the Iridium system. From the satellites, the data travels by radio to the ground, then to the system office by conventional telecommunications.
The surface buoy has a two and a half meter diameter fiberglass disk covered with foam and has a gross displacement of 4000 kg. The mooring line connecting the surface buoy and the pressure recorder is a nineteen millimeter nylon line that has a tensile strength of 7100 kg.
The data sent from the anchored bottom pressure recorder to the surface buoy consists of the temperature and pressure of the surrounding sea water. It retrieves and releases data every 15 seconds to get an average reading of the current weather conditions.
A very stable, long-lived, very-high-resolution pressure sensor is a critical enabling technology for DART's bottom pressure recorder. It is a resonant quartz crystal strain gauge with a bourdon tube force collector. When compensated for temperature, this sensor has a pressure resolution of approximately 1 mm of water when measuring pressure at a depth of several kilometers.
Once the data reaches the surface buoy, the pressure data is converted to an average height of the waves surrounding the buoy. The temperature of the surrounding sea water is important to the calculations because temperature affects the water's density, thus the pressure, and therefore the sea temperature is required to accurately measure the height of the ocean swells. Because the swell sizes of the ocean vary constantly, the system has two modes of reporting data, standard mode and event mode. Standard mode is the more common mode. Every 15 minutes, it sends the estimated sea surface height and the time of the measurement.
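A simplified sketch of the pressure-to-height conversion (illustrative only; the operational conversion uses a full seawater equation of state, and the linear density-vs-temperature model below is a crude assumption): the water-column height above the sensor follows from the hydrostatic relation h = P / (ρ g), with density ρ depending on temperature.

```python
# Hydrostatic conversion of bottom pressure to water-column height:
#   h = P / (rho * g)
# Toy sketch; the linear density model is an assumption, not the
# operational seawater equation of state.

G = 9.81  # m/s^2, gravitational acceleration

def seawater_density(temp_c: float) -> float:
    """Very rough density model: ~1027 kg/m^3 near 10 degC, falling slightly
    as temperature rises."""
    return 1027.0 - 0.15 * (temp_c - 10.0)

def column_height_m(pressure_pa: float, temp_c: float) -> float:
    """Height of the water column implied by the measured bottom pressure."""
    return pressure_pa / (seawater_density(temp_c) * G)

# At ~4000 m depth, a 10 Pa pressure change corresponds to roughly the
# 1 mm height resolution quoted for the quartz sensor:
h0 = column_height_m(4.030e7, 2.0)
h1 = column_height_m(4.030e7 + 10.0, 2.0)
print(f"{(h1 - h0) * 1000:.2f} mm")  # ~1 mm
```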
If the software receives data outside the recent averages, the system automatically switches to event mode. Event mode transmits data every 15 seconds and then reports one-minute averages of sea-surface height, together with the time of measurement. If no further out-of-average data is received, the system switches back to standard mode after four hours. When NOAA released the first six DART buoys, the system had only one-way communication. It was not until 2005 that the first-generation DART buoy was upgraded to the second generation. After 2005 the DART buoys began using Iridium communication satellites, enabling information not only to be retrieved from a DART station but also sent to it. The two-way communication between Tsunami Warning Centers and the pressure recorder made it possible to manually set DART buoys to event mode on suspicion of a possible incoming tsunami. To keep communications reliable, the DART buoys carry two independent, redundant communication systems. With these updated systems, data can be sent wherever it is needed around the world.
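A toy sketch of the event-mode trigger logic (an illustration only: the real DART detection algorithm predicts the tidal signal with a filter and compares against a calibrated threshold, and the window and threshold values below are arbitrary assumptions): the station switches to event mode when a new reading deviates from the running average of recent readings by more than a threshold.

```python
from collections import deque

# Toy event-mode trigger: compare each new sea-surface-height reading with
# the running average of recent readings. Sketch only; the operational DART
# algorithm uses a tide-prediction filter and a calibrated threshold.

class TsunamiDetector:
    def __init__(self, window: int = 40, threshold_m: float = 0.03):
        self.recent = deque(maxlen=window)  # recent height readings (m)
        self.threshold_m = threshold_m
        self.event_mode = False

    def ingest(self, height_m: float) -> bool:
        """Feed one reading; returns True while in event mode."""
        if len(self.recent) == self.recent.maxlen:
            baseline = sum(self.recent) / len(self.recent)
            if abs(height_m - baseline) > self.threshold_m:
                self.event_mode = True
        self.recent.append(height_m)
        return self.event_mode

detector = TsunamiDetector()
for h in [4000.000] * 50 + [4000.100]:   # a sudden 10 cm anomaly
    triggered = detector.ingest(h)
print(triggered)  # True: the anomaly exceeded the 3 cm threshold
```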
See also
Global Sea Level Observing System
Tsunami warning system
Tsunami
NOAA Center for Tsunami Research
References
External links
NOAA NDBC Deep-ocean Assessment and Reporting of Tsunamis (DART)
NOAA Center for Tsunami Research Deep-ocean Assessment and Reporting of Tsunamis (DART)
Realtime DART buoy data from the National Data Buoy Center
High-resolution archived data from the National Centers for Environmental Information
Social & Economic Benefits of the DART system from "NOAA Socioeconomics" website initiative
NOAA Tsunami website
National Tsunami Hazard Mitigation Program
Physical oceanography
Tsunami
Earth observation platforms | Deep-ocean Assessment and Reporting of Tsunamis | [
"Physics"
] | 1,799 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
1,399,190 | https://en.wikipedia.org/wiki/Assisted%20reproductive%20technology | Assisted reproductive technology (ART) includes medical procedures used primarily to address infertility. This subject involves procedures such as in vitro fertilization (IVF), intracytoplasmic sperm injection (ICSI), and cryopreservation of gametes and embryos, and the use of fertility medication. When used to address infertility, ART may also be referred to as fertility treatment. ART mainly belongs to the field of reproductive endocrinology and infertility. Some forms of ART may be used with regard to fertile couples for genetic purpose (see preimplantation genetic diagnosis). ART may also be used in surrogacy arrangements, although not all surrogacy arrangements involve ART.
Sterility does not always require ART as the first option, since in some cases the cause is a mild disorder that can be resolved with more conventional treatments or with behaviors that promote general and reproductive health.
Procedures
General
With ART, the process of sexual intercourse is bypassed and fertilization of the oocytes occurs in the laboratory environment (i.e., in vitro fertilization).
In the US, the Centers for Disease Control and Prevention (CDC) defines ART to include "all fertility treatments in which both eggs and sperm are handled. In general, ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, and returning them to the woman's body or donating them to another woman." According to CDC, "they do not include treatments in which only sperm are handled (i.e., intrauterine—or artificial—insemination) or procedures in which a woman takes medicine only to stimulate egg production without the intention of having eggs retrieved."
In Europe, ART also excludes artificial insemination and includes only procedures where oocytes are handled.
The World Health Organization (WHO), also defines ART this way.
Ovulation induction
Ovulation induction is usually used in the sense of stimulating the development of ovarian follicles by fertility medication to reverse anovulation or oligoovulation. These medications are given by injection for 8 to 14 days. A health care provider closely monitors the development of the eggs using transvaginal ultrasound and blood tests to assess follicle growth and estrogen production by the ovaries. When follicles have reached an adequate size and the eggs are mature enough, an injection of the hormone hCG initiates the ovulation process. Egg retrieval occurs about 36 hours after the hCG injection, shortly before ovulation would take place.
In vitro fertilization
In vitro fertilization is the technique of letting fertilization of the male and female gametes (sperm and egg) occur outside the female body.
Techniques usually used in in vitro fertilization include:
Transvaginal ovum retrieval (OVR) is the process whereby a small needle is inserted through the back of the vagina and guided via ultrasound into the ovarian follicles to collect the fluid that contains the eggs.
Embryo transfer is the step in the process whereby one or several embryos are placed into the uterus of the female with the intent to establish a pregnancy.
Less commonly used techniques in in vitro fertilization are:
Assisted zona hatching (AZH) is performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo.
Intracytoplasmic sperm injection (ICSI) is beneficial in the case of male factor infertility, where sperm counts are very low or failed fertilization occurred with previous IVF attempt(s). The ICSI procedure involves a single sperm carefully injected into the center of an egg using a microneedle. With ICSI, only one sperm per egg is needed; without it, between 50,000 and 100,000 are needed. This method is also sometimes employed when donor sperm is used.
Autologous endometrial coculture is a possible treatment for patients who have failed previous IVF attempts or who have poor embryo quality. The patient's fertilized eggs are placed on top of a layer of cells from the patient's own uterine lining, creating a more natural environment for embryo development.
In zygote intrafallopian transfer (ZIFT), egg cells are removed from the woman's ovaries and fertilized in the laboratory; the resulting zygote is then placed into the fallopian tube.
Cytoplasmic transfer is the technique in which the contents of a fertile egg from a donor are injected into the infertile egg of the patient along with the sperm.
Egg donors are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilized in the laboratory with the sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus.
Sperm donation may provide the source for the sperm used in IVF procedures where the male partner produces no sperm or has an inheritable disease, or where the woman being treated has no male partner.
Preimplantation genetic diagnosis (PGD) involves the use of genetic screening mechanisms such as fluorescent in-situ hybridization (FISH) or comparative genomic hybridization (CGH) to help identify genetically abnormal embryos and improve healthy outcomes.
Embryo splitting can be used for twinning to increase the number of available embryos.
Pre-implantation genetic diagnosis
A pre-implantation genetic diagnosis procedure may be conducted on embryos prior to implantation (as a form of embryo profiling), and sometimes even of oocytes prior to fertilization. PGD is considered in a similar fashion to prenatal diagnosis. PGD is an adjunct to ART procedures, and requires in vitro fertilization to obtain oocytes or embryos for evaluation. Embryos are generally obtained through blastomere or blastocyst biopsy. The latter technique has proved to be less deleterious for the embryo, therefore it is advisable to perform the biopsy around day 5 or 6 of development. Sex selection is the attempt to control the sex of offspring to achieve a desired sex in case of X chromosome linked diseases. It can be accomplished in several ways, both pre- and post-implantation of an embryo, as well as at birth. Pre-implantation techniques include PGD, but also sperm sorting.
Others
Other assisted reproduction techniques include:
Mitochondrial replacement therapy (MRT, sometimes called mitochondrial donation) is the replacement of mitochondria in one or more cells to prevent or ameliorate disease. MRT originated as a special form of IVF in which some or all of the future baby's mitochondrial DNA comes from a third party. This technique is used in cases when mothers carry genes for mitochondrial diseases. The therapy is approved for use in the United Kingdom.
In gamete intrafallopian transfer (GIFT), a mixture of sperm and eggs is placed directly into a woman's fallopian tubes using laparoscopy following a transvaginal ovum retrieval.
Reproductive surgery, treating e.g. fallopian tube obstruction and vas deferens obstruction, or reversing a vasectomy by a reverse vasectomy. In surgical sperm retrieval (SSR), the reproductive urologist obtains sperm from the vas deferens, epididymis or directly from the testis in a short outpatient procedure.
By cryopreservation, eggs, sperm and reproductive tissue can be preserved for later IVF.
Risks
The majority of IVF-conceived infants do not have birth defects.
However, some studies have suggested that assisted reproductive technology is associated with an increased risk of birth defects.
Artificial reproductive technology is becoming more available. Early studies suggest that there could be an increased risk for medical complications with both the mother and baby. Some of these include low birth weight, placental insufficiency, chromosomal disorders, preterm deliveries, gestational diabetes, and pre-eclampsia (Aiken and Brockelsby).
In the largest U.S. study, which used data from a statewide registry of birth defects, 6.2% of IVF-conceived children had major defects, as compared with 4.4% of naturally conceived children matched for maternal age and other factors (odds ratio, 1.3; 95% confidence interval, 1.00 to 1.67). ART also carries a risk of heterotopic pregnancy (simultaneous intrauterine and extrauterine pregnancy).
The main risks are:
Genetic disorders
Low birth weight. In IVF and ICSI, a risk factor is the decreased expression of proteins involved in energy metabolism: ferritin light chain and ATP5A1.
Preterm birth. Low birth weight and preterm birth are strongly associated with many health problems, such as visual impairment and cerebral palsy. Children born after IVF are roughly twice as likely to have cerebral palsy.
Sperm donation is an exception, with a birth defect rate of almost a fifth of that in the general population. This may be explained by the fact that sperm banks accept only donors with high sperm counts.
Germ cells of the mouse normally have a frequency of spontaneous point mutations that is 5 to 10-fold lower than that in somatic cells from the same individual. This low frequency in the germline leads to embryos that have a low frequency of point mutations in the next generation. No significant differences were observed in the frequency or spectrum of mutations between naturally conceived fetuses and assisted-conception fetuses. This suggests that with respect to the maintenance of genetic integrity assisted conception is safe.
Current data indicate little or no increased risk for postpartum depression among women who use ART.
Study results indicate that ART can negatively affect both women's and men's sexual health.
Usage of assisted reproductive technology including ovarian stimulation and in vitro fertilization have been associated with an increased overall risk of childhood cancer in the offspring, which may be caused by the same original disease or condition that caused the infertility or subfertility in the mother or father.
That said, in a landmark paper, Jacques Balayla et al. determined that infants born after ART have similar neurodevelopment to infants born after natural conception.
ART may also pose risks to the mother. A large US database study compared pregnancy outcomes among 106,000 assisted conception pregnancies with 34 million natural conception pregnancies. It found that assisted conception pregnancies were associated with an increased risk of cardiovascular diseases, including acute kidney injury and arrhythmia. Assisted conception pregnancies were also associated with a higher risk of caesarean delivery and premature birth.
In theory, ART can solve almost all reproductive problems, except for severe pathology or the absence of a uterus (or womb), using specific gamete or embryo donation techniques. However, this does not mean that all women can be treated with assisted reproductive techniques, or that all women who are treated will achieve pregnancy.
Usage
As a result of the 1992 Fertility Clinic Success Rate and Certification Act, the CDC is required to publish the annual ART success rates at U.S. fertility clinics. The number of assisted reproductive technology procedures performed in the U.S. has more than doubled over the last 10 years, with 140,000 procedures in 2006 resulting in 55,000 births.
In Australia, 3.1% of births in the late 2000s were a result of ART.
The most common reasons for discontinuation of fertility treatment have been estimated to be: postponement of treatment (39%), physical and psychological burden (19%), psychological burden (14%), physical burden (6.32%), relational and personal problems (17%), personal reasons (9%), relational problems (9%), treatment rejection (13%) and organizational (12%) and clinic (8%) problems.
By country
United States
Many Americans do not have insurance coverage for fertility investigations and treatments. Many states are starting to mandate coverage, and the rate of use is 278% higher in states with complete coverage.
There are some health insurance companies that cover diagnosis of infertility, but frequently once diagnosed will not cover any treatment costs.
Approximate treatment/diagnosis costs in the United States, with inflation, as of (US$):
Initial workup: hysteroscopy, hysterosalpingogram, blood tests ~$
Sonohysterogram (SHG) ~ $–$
Clomiphene citrate cycle ~ $–$
IVF cycle ~ $–$
Use of a surrogate mother to carry the child – dependent on arrangements
Another way to look at costs is to determine the expected cost of establishing a pregnancy. Thus, if a clomiphene treatment has a chance of establishing a pregnancy in 8% of cycles and costs $, the expected cost is $ to establish a pregnancy, compared to an IVF cycle (cycle fecundity 40%) with a corresponding expected cost of $ ($ ÷ 40%).
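A minimal sketch of this expected-cost arithmetic (the dollar figures below are hypothetical placeholders, since the article's actual amounts are elided): expected cost per pregnancy = cost per cycle ÷ probability of pregnancy per cycle.

```python
# Expected cost of establishing a pregnancy:
#   expected_cost = cost_per_cycle / pregnancy_probability_per_cycle
# The costs below are hypothetical placeholders, not the article's figures.

def expected_cost(cost_per_cycle: float, success_rate: float) -> float:
    """Average total spend per pregnancy achieved."""
    return cost_per_cycle / success_rate

clomiphene = expected_cost(cost_per_cycle=500.0, success_rate=0.08)   # 8%
ivf = expected_cost(cost_per_cycle=12_500.0, success_rate=0.40)       # 40%

print(f"clomiphene: ${clomiphene:,.0f} per pregnancy")  # $6,250
print(f"IVF:        ${ivf:,.0f} per pregnancy")         # $31,250
```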
For the community as a whole, the cost of IVF is on average repaid by 700% through tax revenue from the future employment of the person conceived.
European Union
In Europe, 157,500 children were born using assisted reproductive technology in 2015, according to the European Society of Human Reproduction and Embryology (ESHRE). But there are major differences in legislation across the Old Continent.
A European directive fixes standards concerning the use of human tissue and cells, but all ethical and legal questions on ART remain the prerogative of EU member states.
Across Europe, the legal criteria for availability vary somewhat. In 11 countries all women may benefit; in 8 others only heterosexual couples are concerned; in 7 only single women; and in 2 (Austria and Germany) only lesbian couples.
Spain was the first European country to open ART to all women, in 1977, the year the first sperm bank was opened there. In France, the right to ART is accorded to all women since 2019. In the last 15 years, legislation has evolved quickly. For example, Portugal made ART available in 2006 with conditions very similar to those in France, before amending the law in 2016 to allow lesbian couples and single women to benefit. Italy clarified its uncertain legal situation in 2004 by adopting Europe's strictest laws: ART is only available to heterosexual couples, married or otherwise, and sperm donation is prohibited.
Today, 21 countries provide partial public funding for ART treatment. The seven others, which do not, are Ireland, Cyprus, Estonia, Latvia, Luxembourg, Malta, and Romania.
Such subsidies are subject to conditions, however. In Belgium, a fixed payment of €1,073 is made for each full cycle of the IVF process. The woman must be aged under 43 and may not carry out more than six cycles of ART. There is also a limit on the number of transferable embryos, which varies according to age and the number of cycles completed.
In France, ART is subsidized in full by national health insurance for women up to age 43, with limits of 4 attempts at IVF and 6 at artificial insemination.
Germany tightened its conditions for public funding in 2004, which caused a sharp drop in the number of ART cycles carried out, from more than 102,000 in 2003 to fewer than 57,000 the following year. Since then the figure has remained stable.
17 countries limit access to ART according to the age of the woman. 10 countries have established an upper age limit, varying from 40 (Finland, Netherlands) to 50 (including Spain, Greece and Estonia).
Since 1994, France is one of a number of countries (including Germany, Spain, and the UK) which use the somewhat vague notion of "natural age of procreation". In 2017, the steering council of France's Agency of Biomedicine established an age limit of 43 for women using ART.
10 countries have no age limit for ART. These include Austria, Hungary, Italy and Poland.
Most European countries allow donations of gametes by third parties. But the situations vary depending on whether sperm or eggs are concerned. Sperm donations are authorized in 20 EU member states; in 11 of them anonymity is allowed. Egg donations are possible in 17 states, including 8 under anonymous conditions.
On 12 April, the Council of Europe adopted a recommendation which encourages an end to anonymity. In the UK, anonymous sperm donations ended in 2005 and children have access to the identity of the donor when they reach adulthood.
In France, the principle of anonymous donations of sperm or embryos is maintained in the law of bioethics of 2011, but a new bill under discussion may change the situation.
United Kingdom
In the United Kingdom, all patients have the right to preliminary testing, provided free of charge by the National Health Service (NHS). However, treatment is not widely available on the NHS and there can be long waiting lists. Many patients therefore pay for immediate treatment within the NHS or seek help from private clinics.
In 2013, the National Institute for Health and Care Excellence (NICE) published new guidelines about who should have access to IVF treatment on the NHS in England and Wales.
The guidelines say women aged between 40 and 42 should be offered one cycle of IVF on the NHS if they have never had IVF treatment before, have no evidence of low ovarian reserve (this is when eggs in the ovary are low in number, or low in quality), and have been informed of the additional implications of IVF and pregnancy at this age. However, if tests show IVF is the only treatment likely to help them get pregnant, women should be referred for IVF straight away.
This policy is often modified by local Clinical Commissioning Groups, in a fairly blatant breach of the NHS Constitution for England which provides that patients have the right to drugs and treatments that have been recommended by NICE for use in the NHS. For example, the Cheshire, Merseyside and West Lancashire Clinical Commissioning Group insists on additional conditions:
The person undergoing treatment must have commenced treatment before her 40th birthday;
The person undergoing treatment must have a BMI of between 19 and 29;
Neither partner must have any living children, from either the current or previous relationships. This includes adopted as well as biological children; and,
Sub-fertility must not be the direct result of a sterilisation procedure in either partner (this does not include conditions where sterilisation occurs as a result of another medical problem). Couples who have undertaken a reversal of their sterilisation procedure are not eligible for treatment.
Canada
Some treatments are covered by OHIP (public health insurance) in Ontario and others are not. Women with bilaterally blocked fallopian tubes and are under the age of 40 have treatment covered but are still required to pay test fees (around CA$3,000–4,000). Coverage varies in other provinces. Most other patients are required to pay for treatments themselves.
Israel
Israel's national health insurance, which is mandatory for all Israeli citizens, covers nearly all fertility treatments. IVF costs are fully subsidized up to the birth of two children for all Israeli women, including single women and lesbian couples. Embryo transfers for purposes of gestational surrogacy are also covered.
Germany
On 27 January 2009, the Federal Constitutional Court ruled that it is unconstitutional that the health insurance companies have to bear only 50% of the cost of IVF. On 2 March 2012, the Federal Council approved a draft law from some federal states providing that the federal government contribute a subsidy of 25% of the cost. Thus, the share of costs borne by the couple would drop to just 25%. Since July 2017, assisted reproductive technology has also been allowed for married lesbian couples, as the German parliament allowed same-sex marriages in Germany.
France
In July 2020, the French Parliament allowed assisted reproductive technology also for lesbian couples and single women.
Cuba
Cuban sources mention that assisted reproduction is completely legal and free in the country.
India
The Government of India has notified the Surrogacy (Regulation) Act 2021 and the Assisted Reproductive Technology (Regulation) Act 2021 to regulate the practice of ART. Prior to that, the National Guidelines for Accreditation, Supervision and Regulation of ART Clinics in India published by the Ministry for Health and Family Welfare, Government of India in the year 2005 was governing the field. Indian law recognises the right of a single woman, who is a major, to have children through ART.
Society and culture
Ethics
Some couples may find it difficult to stop treatment despite very bad prognoses, resulting in futile therapies. This has the potential to give ART providers a difficult decision of whether to continue or refuse treatment.
Some assisted reproductive technologies have the potential to be harmful to both the mother and the child, posing a psychological or physical health risk, which may affect the ongoing use of these treatments.
In Israel, there is research supporting using ART, including recycled lab materials from the IVF process, to help women work through some of these mixed emotions.
Fictional representation
Films and other fiction depicting emotional struggles of assisted reproductive technology have had an upswing in the latter part of the 2000s decade, although the techniques have been available for decades. As ART becomes more utilized, the number of people that can relate to it by personal experience in one way or another is growing.
For specific examples, refer to the fiction sections in individual subarticles, e.g. surrogacy, sperm donation and fertility clinic.
In addition, reproduction and pregnancy in speculative fiction has been present for many decades.
Historical facts
On 25 July 1978, Louise Brown was born; this was the first successful birth of a child after IVF treatment. The procedure took place at Dr Kershaw's Cottage Hospital (now Dr Kershaw's Hospice) in Royton, Oldham, England. Patrick Steptoe (gynaecologist) and Robert Edwards (physiologist) worked together to develop the IVF technique: Steptoe described a new method of egg extraction, and Edwards was developing a way to fertilise eggs in the lab. Robert G. Edwards was awarded the Nobel Prize in Physiology or Medicine in 2010; Steptoe was not, because the Nobel Prize is not awarded posthumously.
The first successful birth by ICSI (intracytoplasmic sperm injection) took place on 14 January 1992. The technique was developed by Gianpiero D. Palermo at the Vrije Universiteit Brussel, in the Center for Reproductive Medicine in Brussels. The discovery was in fact made by mistake, when a spermatozoon was injected into the cytoplasm.
See also
Artificial uterus
Artificial insemination
Fertility fraud
Human cloning
Ova bank
Sperm bank
Sperm donation
Spontaneous conception, the unassisted conception of a subsequent child after prior use of assisted reproductive technology
Egg donation
Ralph L. Brinster
Religious response to ART
Repository for Germinal Choice
References
External links
Centers for Disease Control and Prevention (CDC), Assisted Reproductive Technology
Applied genetics
Biotechnology
Bioethics
Fertility medicine
Genetic engineering
Human reproduction
Ideologies
Liberalism
Medical ethics
Obstetrical procedures
Reproductive rights
Social philosophy
Social theories
Transhumanism | Assisted reproductive technology | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 4,799 | [
"Bioethics",
"Biological engineering",
"Genetic engineering",
"Transhumanism",
"Biotechnology",
"nan",
"Ethics of science and technology",
"Medical technology",
"Assisted reproductive technology",
"Molecular biology"
] |
1,399,392 | https://en.wikipedia.org/wiki/Embryo%20transfer | Embryo transfer refers to a step in the process of assisted reproduction in which embryos are placed into the uterus of a female with the intent to establish a pregnancy. This technique - which is often used in connection with in vitro fertilization (IVF) - may be used in humans or in other animals, in which situations and goals may vary.
Embryo transfer can be done at day two or day three, or later in the blastocyst stage, which was first performed in 1984.
Factors that can affect the success of embryo transfer include the endometrial receptivity, embryo quality, and embryo transfer technique.
Fresh versus frozen
Embryos can be either "fresh", from fertilized egg cells of the same menstrual cycle, or "frozen", that is, generated in a preceding cycle and subjected to embryo cryopreservation, being thawed just prior to the transfer, which is then termed "frozen embryo transfer" (FET). The outcome from using cryopreserved embryos has uniformly been positive, with no increase in birth defects or developmental abnormalities, including between fresh and frozen eggs used for intracytoplasmic sperm injection (ICSI). In fact, pregnancy rates are increased following FET, and perinatal outcomes are less affected, compared to embryo transfer in the same cycle as ovarian hyperstimulation was performed. The endometrium is believed not to be optimally prepared for implantation following ovarian hyperstimulation, and therefore frozen embryo transfer avails for a separate cycle to focus on optimizing the chances of successful implantation. Children born from vitrified blastocysts have significantly higher birthweight than those born from non-frozen blastocysts. When transferring a frozen-thawed oocyte, the chance of pregnancy is essentially the same whether it is transferred in a natural cycle or one with ovulation induction.
There is probably little or no difference between FET and fresh embryo transfers in terms of live birth rate and ongoing pregnancy rate and the risk of ovarian hyperstimulation syndrome may be less using the "freeze all" strategy. The risk of having a large-for-gestational-age baby and higher birth rate, in addition to maternal hypertensive disorders of pregnancy may be increased using a "freeze all" strategy.
Uterine preparation
In the human, the uterine lining (endometrium) needs to be appropriately prepared so that the embryo can implant. In a natural cycle the embryo transfer takes place in the luteal phase, at a time when the lining is appropriately undeveloped in relation to the status of the present luteinizing hormone. In a stimulated cycle, or a cycle where a "frozen" embryo is transferred, the recipient woman may first be given estrogen preparations (about 2 weeks), then a combination of estrogen and progesterone, so that the lining becomes receptive for the embryo. The time of receptivity is the implantation window. A scientific review in 2013 came to the conclusion that it is not possible to identify one method of endometrium preparation in frozen embryo transfer as being more effective than another.
Limited evidence also supports removal of cervical mucus before transfer.
Timing
Embryo transfer can be performed after various durations of embryo culture, conferring different stages in embryogenesis. The main stages at which embryo transfer is performed are cleavage stage (day 2 to 4 after co-incubation) or the blastocyst stage (day 5 or 6 after co-incubation).
Because, in vivo, a cleavage-stage embryo still resides in the fallopian tube, and it is known that the nutritional environment of the uterus is different from that of the tube, it is postulated that this may cause stress on the embryo if transferred on day 3, resulting in reduced implantation potential. A blastocyst-stage embryo does not have this problem, as it is best suited to the uterine environment.
Embryos who reach the day 3 cell stage can be tested for chromosomal or specific genetic defects prior to possible transfer by preimplantation genetic diagnosis (PGD). Transferring at the blastocyst stage confers a significant increase in live birth rate per transfer, but also confers a decreased number of embryos available for transfer and embryo cryopreservation, so the cumulative clinical pregnancy rates are increased with cleavage stage transfer. It is uncertain whether there is any difference in live birth rate between transfer on day two or day three after fertilization.
Monozygotic twinning is not increased after blastocyst transfer compared with cleavage-stage embryo transfer.
There is a significantly higher odds of preterm birth (odds ratio 1.3) and congenital anomalies (odds ratio 1.3) among births having reached the blastocyst stage compared with cleavage stage. Because of increased female embryo mortality due to epigenetic modifications induced by extended culture, blastocyst transfer leads to more male births (56.1% male) versus 2 or 3 day transfer (a normal sex ratio of 51.5% male).
Embryo selection
Laboratories have developed grading methods to judge oocyte and embryo quality. In order to optimise pregnancy rates, there is significant evidence that a morphological scoring system is the best strategy for the selection of embryos. Since 2009, when the first time-lapse microscopy system for IVF was approved for clinical use, morphokinetic scoring systems have been shown to improve pregnancy rates further. However, when all the different types of time-lapse embryo imaging devices, with or without morphokinetic scoring systems, are compared against conventional embryo assessment for IVF, there is insufficient evidence of a difference in live birth, pregnancy, stillbirth or miscarriage to choose between them. A small prospectively randomized study in 2016 reported poorer embryo quality and more staff time in an automated time-lapse embryo imaging device compared to conventional embryology. Active efforts to develop a more accurate embryo selection analysis based on artificial intelligence and deep learning are underway. The Embryo Ranking Intelligent Classification Algorithm (ERICA) is a clear example. This deep learning software substitutes manual classifications with a ranking system based on an individual embryo's predicted genetic status in a non-invasive fashion. Studies in this area are still pending, and current feasibility studies support its potential.
Procedure
The embryo transfer procedure starts by placing a speculum in the vagina to visualize the cervix, which is cleansed with saline solution or culture media. A transfer catheter is loaded with the embryos and handed to the clinician after confirmation of the patient's identity. The catheter is inserted through the cervical canal and advanced into the uterine cavity. Several types of catheters are used for this process; however, there is good evidence that using a soft rather than a hard transfer catheter can increase the chances of clinical pregnancy.
There is good and consistent evidence of benefit in ultrasound guidance, that is, making an abdominal ultrasound to ensure correct placement, which is 1–2 cm from the uterine fundus. There is evidence of a significant increase in clinical pregnancy using ultrasound guidance compared with only "clinical touch", as well as performing the transfer with hyaluronic acid enriched transfer media. Anesthesia is generally not required. Single embryo transfers in particular require accuracy and precision in placement within the uterine cavity. The optimal target for embryo placement, known as the maximal implantation potential (MIP) point, is identified using 3D/4D ultrasound. However, there is limited evidence that supports deposition of embryos in the midportion of the uterus.
After insertion of the catheter, the contents are expelled and the embryos are deposited. Limited evidence supports making trial transfers before performing the procedure with embryos. After expulsion, the duration that the catheter remains inside the uterus has no effect on pregnancy rates. Limited evidence suggests avoiding negative pressure from the catheter after expulsion. After withdrawal, the catheter is handed to the embryologist, who inspects it for retained embryos.
In the process of zygote intrafallopian transfer (ZIFT), eggs are removed from the woman, fertilised, and then placed in the woman's fallopian tubes rather than the uterus.
Embryo number
A major issue is how many embryos should be transferred, since placement of multiple embryos carries a risk of multiple pregnancy. While in the past physicians placed multiple embryos to increase the chance of pregnancy, this approach has fallen out of favor. Professional societies, and legislatures in many countries, have issued guidelines or laws to curtail the practice. There is low to moderate evidence that making a double embryo transfer during one cycle achieves a higher live birth rate than a single embryo transfer; but making two single embryo transfers in two cycles has the same live birth rate and would avoid multiple pregnancies.
The appropriate number of embryos to be transferred depends on the age of the woman, whether it is the first, second or third full IVF cycle attempt and whether there are top-quality embryos available. According to a guideline from The National Institute for Health and Care Excellence (NICE) in 2013, the number of embryos transferred in a cycle should be chosen as in the following table:
e-SET
The technique of selecting only one embryo to transfer to the woman is called elective-single embryo transfer (e-SET) or, when embryos are at the blastocyst stage, it can also be called elective single blastocyst transfer (eSBT). It significantly lowers the risk of multiple pregnancies, compared with e.g. Double Embryo Transfer (DET) or double blastocyst transfer (2BT), with a twinning rate of approximately 3.5% in SET compared with approximately 38% in DET, or 2% in eSBT compared with approximately 25% in 2BT. At the same time, pregnancy rates are not significantly lower with eSBT than with 2BT. That is, the cumulative live birth rate associated with single fresh embryo transfer followed by a single frozen and thawed embryo transfer is comparable with that after one cycle of double fresh embryo transfer. Furthermore, SET has better outcomes in terms of mean gestational age at delivery, mode of delivery, birthweight, and risk of neonatal intensive care unit necessity than DET. e-SET of embryos at the cleavage stage reduces the likelihood of live birth by 38% and multiple birth by 94%. Evidence from randomized, controlled trials suggests that increasing the number of e-SET attempts (fresh and/or frozen) results in a cumulative live birth rate similar to that of DET.
The usage of single embryo transfer is highest in Sweden (69.4%), but as low as 2.8% in the USA. Access to public funding for ART, availability of good cryopreservation facilities, effective education about the risks of multiple pregnancy, and legislation appear to be the most important factors for regional usage of single embryo transfer. Also, personal choice plays a significant role as many subfertile couples have a strong preference for twins.
Adjunctive procedures
It is uncertain whether the use of mechanical closure of the cervical canal following embryo transfer has any effect.
There is considerable evidence that prolonged bed rest (more than 20 minutes) after embryo transfer is associated with reduced chances of clinical pregnancy.
Using hyaluronic acid as an adherence medium for the embryo may increase live birth rates. There may be little or no benefit in having a full bladder, removal of cervical mucus, or flushing of the endometrial or endocervical cavity at the time of embryo transfer. Adjunctive antibiotics in the form of amoxicillin plus clavulanic acid probably do not increase the clinical pregnancy rate compared with no antibiotics. The use of Atosiban, G-CSF and hCG around the time of embryo transfer has shown a trend towards increased clinical pregnancy rates.
For frozen-thawed embryo transfer or transfer of an embryo from egg donation, no previous ovarian hyperstimulation is required for the recipient before transfer, which can be performed in spontaneous ovulatory cycles. Still, various protocols exist for frozen-thawed embryo transfers as well, such as protocols with ovarian hyperstimulation, and protocols in which the endometrium is artificially prepared by estrogen and/or progesterone. There is some evidence that in cycles where the endometrium is artificially prepared by estrogen or progesterone, it may be beneficial to administer an additional drug that suppresses hormone production by the ovaries, such as continuous administration of a gonadotropin releasing hormone agonist (GnRHa). For egg donation, there is evidence of a lower pregnancy rate and a higher cycle cancellation rate when progesterone supplementation in the recipient is commenced prior to oocyte retrieval from the donor, as compared to commencing it on the day of oocyte retrieval or the day after.
Seminal fluid contains several proteins that interact with epithelial cells of the cervix and uterus, inducing active gestational immune tolerance. There are significantly improved outcomes when women are exposed to seminal plasma around the time of embryo transfer, with statistical significance for clinical pregnancy, but not for ongoing pregnancy or live birth rates with the limited data available.
Follow-up
Patients usually start progesterone medication after egg (also called oocyte) retrieval. While daily intramuscular injections of progesterone-in-oil (PIO) have been the standard route of administration, PIO injections are not FDA-approved for use in pregnancy. A recent meta-analysis showed that the intravaginal route with an appropriate dose and dosing frequency is equivalent to daily intramuscular injections. In addition, a recent case-matched study comparing vaginal progesterone with PIO injections showed that live birth rates were nearly identical with both methods. A duration of progesterone administration of 11 days results in almost the same birth rates as longer durations.
Patients are also given estrogen medication in some cases after the embryo transfer. Pregnancy testing is done typically two weeks after egg retrieval.
Third-party reproduction
It is not necessary that the embryo transfer be performed on the female who provided the eggs. Thus another female whose uterus is appropriately prepared can receive the embryo and become pregnant.
Embryo transfer may be used where a woman has eggs but no uterus and wants to have a biological baby; she would require the help of a gestational carrier or surrogate to carry the pregnancy. Also, a woman who has no eggs but a uterus may utilize egg donor IVF, in which case another woman would provide eggs for fertilization and the resulting embryos are placed into the uterus of the patient. Fertilization may be performed using the woman's partner's sperm or by using donor sperm. 'Spare' embryos which are created for another couple undergoing IVF treatment but which are then surplus to that couple's needs may also be transferred (called embryo donation). Embryos may be specifically created by using eggs and sperm from donors and these can then be transferred into the uterus of another woman. A surrogate may carry a baby produced by embryo transfer for another couple, even though neither she nor the 'commissioning' couple is biologically related to the child. Third party reproduction is controversial and regulated in many countries. Persons entering gestational surrogacy arrangements must make sense of an entirely new type of relationship that does not fit any of the traditional scripts we use to categorize relations as kinship, friendship, romantic partnership or market relations. Surrogates have the experience of carrying a baby that they conceptualize as not of their own kin, while intended mothers have the experience of waiting through nine months of pregnancy and transitioning to motherhood from outside of the pregnant body. This can lead to new conceptualizations of body and self.
History
The first transfer of an embryo from one human to another resulting in pregnancy was reported in July 1983 and subsequently led to the announcement of the first human birth 3 February 1984. This procedure was performed at the Harbor UCLA Medical Center under the direction of Dr. John Buster and the University of California at Los Angeles School of Medicine.
In the procedure, an embryo that was just beginning to develop was transferred from one woman in whom it had been conceived by artificial insemination to another woman who gave birth to the infant 38 weeks later. The sperm used in the artificial insemination came from the husband of the woman who bore the baby.
This scientific breakthrough established standards and became an agent of change for women with infertility and for women who did not want to pass on genetic disorders to their children. Donor embryo transfer has given women a mechanism to become pregnant and give birth to a child that will contain their husband's genetic makeup. Although donor embryo transfer as practiced today has evolved from the original non-surgical method, it now accounts for approximately 5% of in vitro fertilization recorded births.
Prior to this, thousands of women who were infertile had adoption as the only path to parenthood. This set the stage to allow open and candid discussion of embryo donation and transfer. This breakthrough has given way to the donation of human embryos as a common practice similar to other donations such as blood and major organ donations. At the time of this announcement the event was captured by major news carriers and fueled healthy debate and discussion on this practice which impacted the future of reproductive medicine by creating a platform for further advancements in women's health.
This work established the technical foundation and legal-ethical framework surrounding the clinical use of human oocyte and embryo donation, a mainstream clinical practice, which has evolved over the past 25 years.
Effectiveness
Fresh blastocyst (day 5 to 6) stage transfer seems to be more effective than cleavage (day 2 or 3) stage transfer in assisted reproductive technologies. The Cochrane study showed a small improvement in live birth rate per couple for blastocyst transfers. This would mean that for a typical rate of 31% in clinics that use early cleavage stage cycles, the rate would increase to 32% to 41% live births if clinics used blastocyst transfer. A recent systematic review showed that, along with embryo selection, the techniques followed during the transfer procedure can affect pregnancy outcomes. The following interventions are supported by the literature for improving pregnancy rates:
• Abdominal ultrasound guidance for embryo transfer
• Removal of cervical mucus
• Use of soft embryo transfer catheters
• Placement of embryo transfer tip in the upper or middle (central) area of the uterine cavity, greater than 1 cm from the fundus, for embryo expulsion
• Immediate ambulation once the embryo transfer procedure is completed
Embryo transfer in animals
Embryo transfer techniques allow top quality female livestock to have a greater influence on the genetic advancement of a herd or flock in much the same way that artificial insemination has allowed greater use of superior sires. ET also allows the continued use of animals such as competition mares to continue training and showing, while producing foals. The general epidemiological aspects of embryo transfer indicate that the transfer of embryos provides the opportunity to introduce genetic material into populations of livestock while greatly reducing the risk for transmission of infectious diseases. Recent developments in the sexing of embryos before transfer and implanting has great potential in the dairy and other livestock industries.
Embryo transfer is also used in laboratory mice. For example, embryos of genetically modified strains that are difficult to breed or expensive to maintain may be stored frozen, and only thawed and implanted into a pseudopregnant dam when needed.
On February 19, 2020, the first pair of Cheetah cubs to be conceived through embryo transfer from a surrogate cheetah mother was born at Columbus Zoo in Ohio.
Frozen embryo transfer in animals
The development of various methods of cryopreservation of bovine embryos has made embryo transfer a considerably more efficient technology, no longer dependent on the immediate readiness of suitable recipients. Pregnancy rates are just slightly less than those achieved with fresh embryos. Recently, the use of cryoprotectants such as ethylene glycol has permitted the direct transfer of bovine embryos. The world's first live crossbred bovine calf produced under tropical conditions by Direct Transfer (DT) of an embryo frozen in ethylene glycol freeze media was born on 23 June 1996. Dr. Binoy Sebastian Vettical of Kerala Livestock Development Board Ltd produced the embryo stored frozen in ethylene glycol freeze media by the slow programmable freezing (SPF) technique and transferred it directly to a recipient immediately after thawing the frozen straw in water for the birth of this calf. In a study, in vivo produced crossbred bovine embryos stored frozen in ethylene glycol freeze media were transferred directly to recipients under tropical conditions and achieved a pregnancy rate of 50 percent. In a survey of the North American embryo transfer industry, embryo transfer success rates from direct transfer of embryos were as good as those achieved with glycerol. Moreover, in 2011, more than 95% of frozen-thawed embryos were transferred by Direct Transfer.
References
External links
How embryo transfer works as part of fertility treatment
The blastocyst transfer process – a form of embryo transfer
One at a time website – benefits of Single Embryo Transfer
Fertility medicine
In vitro fertilisation
Cryobiology
Fertility
Theriogenology | Embryo transfer | [
"Physics",
"Chemistry",
"Biology"
] | 4,386 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
1,401,020 | https://en.wikipedia.org/wiki/Christoffel%20symbols | In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the "shape" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each "frame" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group . As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols.
In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point.
At each point of the underlying $n$-dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted $\Gamma^k_{ij}$ for $i, j, k = 1, 2, \ldots, n$. Each entry of this $n \times n \times n$ array is a real number. Under linear coordinate transformations on the manifold, the Christoffel symbols transform like the components of a tensor, but under general coordinate transformations (diffeomorphisms) they do not. Most of the algebraic properties of the Christoffel symbols follow from their relationship to the affine connection; only a few follow from the fact that the structure group is the orthogonal group (or the Lorentz group for general relativity).
Christoffel symbols are used for performing practical calculations. For example, the Riemann curvature tensor can be expressed entirely in terms of the Christoffel symbols and their first partial derivatives. In general relativity, the connection plays the role of the gravitational force field with the corresponding gravitational potential being the metric tensor. When the coordinate system and the metric tensor share some symmetry, many of the are zero.
The Christoffel symbols are named for Elwin Bruno Christoffel (1829–1900).
Note
The definitions given below are valid for both Riemannian manifolds and pseudo-Riemannian manifolds, such as those of general relativity, with careful distinction being made between upper and lower indices (contra-variant and co-variant indices). The formulas hold for either sign convention, unless otherwise noted.
Einstein summation convention is used in this article, with vectors indicated by bold font. The connection coefficients of the Levi-Civita connection (or pseudo-Riemannian connection) expressed in a coordinate basis are called Christoffel symbols.
Preliminary definitions
Given a manifold , an atlas consists of a collection of charts for each open cover . Such charts allow the standard vector basis on to be pulled back to a vector basis on the tangent space of . This is done as follows. Given some arbitrary real function , the chart allows a gradient to be defined:
This gradient is commonly called a pullback because it "pulls back" the gradient on to a gradient on . The pullback is independent of the chart . In this way, the standard vector basis on pulls back to a standard ("coordinate") vector basis on . This is called the "coordinate basis", because it explicitly depends on the coordinates on . It is sometimes called the "local basis".
This definition allows a common abuse of notation. The were defined to be in one-to-one correspondence with the basis vectors on . The notation serves as a reminder that the basis vectors on the tangent space came from a gradient construction. Despite this, it is common to "forget" this construction, and just write (or rather, define) vectors on such that . The full range of commonly used notation includes the use of arrows and boldface to denote vectors:
where is used as a reminder that these are defined to be equivalent notation for the same concept. The choice of notation is according to style and taste, and varies from text to text.
The coordinate basis provides a vector basis for vector fields on . Commonly used notation for vector fields on include
The upper-case , without the vector-arrow, is particularly popular for index-free notation, because it both minimizes clutter and reminds that results are independent of the chosen basis, and, in this case, independent of the atlas.
The same abuse of notation is used to push forward one-forms from to . This is done by writing or or . The one-form is then . This is soldered to the basis vectors as . Note the careful use of upper and lower indexes, to distinguish contravariant and covariant vectors.
The pullback induces (defines) a metric tensor on . Several styles of notation are commonly used:
where both the centerdot and the angle-bracket denote the scalar product. The last form uses the tensor , which is understood to be the "flat-space" metric tensor. For Riemannian manifolds, it is the Kronecker delta . For pseudo-Riemannian manifolds, it is the diagonal matrix having signature . The notation serves as a reminder that pullback really is a linear transform, given as the gradient, above. The index letters live in while the index letters live in the tangent manifold.
The matrix inverse of the metric tensor is given by
This is used to define the dual basis:
Some texts write $\mathbf{g}_i$ for $\mathbf{e}_i$, so that the metric tensor takes the particularly beguiling form $g_{ij} = \mathbf{g}_i \cdot \mathbf{g}_j$. This is commonly done so that the symbol $e_i$ can be used unambiguously for the vierbein.
Definition in Euclidean space
In Euclidean space, the general definition given below for the Christoffel symbols of the second kind can be proven to be equivalent to:

$\Gamma^k_{ij} = \frac{\partial \mathbf{e}_i}{\partial x^j} \cdot \mathbf{e}^k$

Christoffel symbols of the first kind can then be found via index lowering:

$\Gamma_{kij} = \Gamma^m_{ij}\, g_{mk} = \frac{\partial \mathbf{e}_i}{\partial x^j} \cdot \mathbf{e}_k$
Rearranging, we see that (assuming the partial derivative belongs to the tangent space, which cannot occur on a non-Euclidean curved space):

$\frac{\partial \mathbf{e}_i}{\partial x^j} = \Gamma^k_{ij}\, \mathbf{e}_k = \Gamma_{kij}\, \mathbf{e}^k$

In words, the arrays represented by the Christoffel symbols track how the basis changes from point to point. If the derivative does not lie on the tangent space, the right expression is the projection of the derivative over the tangent space (see covariant derivative below). Symbols of the second kind decompose the change with respect to the basis, while symbols of the first kind decompose it with respect to the dual basis. In this form, it is easy to see the symmetry of the lower or last two indices:

$\Gamma^k_{ij} = \Gamma^k_{ji}$ and $\Gamma_{kij} = \Gamma_{kji}$,

from the definition of $\mathbf{e}_i$ and the fact that partial derivatives commute (as long as the manifold and coordinate system are well behaved).
The same numerical values for Christoffel symbols of the second kind also relate to derivatives of the dual basis, as seen in the expression:

$\frac{\partial \mathbf{e}^i}{\partial x^j} = -\Gamma^i_{jk}\, \mathbf{e}^k,$

which we can rearrange as:

$\Gamma^i_{jk} = -\frac{\partial \mathbf{e}^i}{\partial x^j} \cdot \mathbf{e}_k.$
General definition
The Christoffel symbols come in two forms: the first kind, and the second kind. The definition of the second kind is more basic, and thus is presented first.
Christoffel symbols of the second kind (symmetric definition)
The Christoffel symbols of the second kind are the connection coefficients—in a coordinate basis—of the Levi-Civita connection.
In other words, the Christoffel symbols of the second kind $\Gamma^k_{ij}$ are defined as the unique coefficients such that

$\nabla_i \mathbf{e}_j = \Gamma^k_{ij}\, \mathbf{e}_k,$

where $\nabla_i$ is the Levi-Civita connection on $M$ taken in the coordinate direction $\mathbf{e}_i$ (i.e., $\nabla_i \equiv \nabla_{\mathbf{e}_i}$) and where $\mathbf{e}_i = \partial_i$ is a local coordinate (holonomic) basis. Since this connection has zero torsion, and holonomic vector fields commute (i.e. $[\mathbf{e}_i, \mathbf{e}_j] = [\partial_i, \partial_j] = 0$) we have

$\nabla_i \mathbf{e}_j = \nabla_j \mathbf{e}_i.$

Hence in this basis the connection coefficients are symmetric:

$\Gamma^k_{ij} = \Gamma^k_{ji}.$
For this reason, a torsion-free connection is often called symmetric.
The Christoffel symbols can be derived from the vanishing of the covariant derivative of the metric tensor $g_{ik}$:

$0 = \nabla_l g_{ik} = \frac{\partial g_{ik}}{\partial x^l} - g_{mk}\,\Gamma^m_{il} - g_{im}\,\Gamma^m_{kl}.$

As a shorthand notation, the nabla symbol and the partial derivative symbols are frequently dropped, and instead a semicolon and a comma are used to set off the index that is being used for the derivative. Thus, the above is sometimes written as

$0 = g_{ik;l} = g_{ik,l} - g_{mk}\,\Gamma^m_{il} - g_{im}\,\Gamma^m_{kl}.$
Using that the symbols are symmetric in the lower two indices, one can solve explicitly for the Christoffel symbols as a function of the metric tensor by permuting the indices and resumming:

$\Gamma^k_{ij} = \tfrac{1}{2}\, g^{kl} \left( \frac{\partial g_{lj}}{\partial x^i} + \frac{\partial g_{il}}{\partial x^j} - \frac{\partial g_{ij}}{\partial x^l} \right),$

where $(g^{kl})$ is the inverse of the matrix $(g_{kl})$, defined as (using the Kronecker delta, and Einstein notation for summation) $g^{il}\, g_{lk} = \delta^i_k$. Although the Christoffel symbols are written in the same notation as tensors with index notation, they do not transform like tensors under a change of coordinates.
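The explicit formula above is mechanical enough to automate. Below is a minimal symbolic sketch (assuming the Python library sympy; the metric chosen is the Earth-surface metric $\mathrm{diag}(1, R^2, R^2\cos^2\theta)$ used in the example later in this article, and all names are illustrative):

```python
import sympy as sp

# Coordinates (R, theta, phi): radius, latitude, longitude.
R, th, ph = sp.symbols('R theta phi', positive=True)
x = [R, th, ph]

# Metric of the Earth-surface example later in this article:
# ds^2 = dR^2 + R^2 dtheta^2 + R^2 cos(theta)^2 dphi^2
g = sp.diag(1, R**2, R**2 * sp.cos(th)**2)
g_inv = g.inv()
n = 3

# Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{lj} + d_j g_{il} - d_l g_{ij})
def christoffel(k, i, j):
    return sp.simplify(sum(
        g_inv[k, l] * (sp.diff(g[l, j], x[i])
                       + sp.diff(g[i, l], x[j])
                       - sp.diff(g[i, j], x[l]))
        for l in range(n)) / 2)

for k in range(n):
    for i in range(n):
        for j in range(i, n):  # symmetric in the lower indices
            G = christoffel(k, i, j)
            if G != 0:
                print(f"Gamma^{x[k]}_({x[i]},{x[j]}) = {G}")
```

For this metric the script prints exactly the nonzero symbols quoted in the Earth-surface section below, e.g. $\Gamma^\varphi_{\theta\varphi} = -\tan\theta$.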
Contraction of indices
Contracting the upper index with either of the lower indices (those being symmetric) leads to

$\Gamma^k_{ki} = \frac{\partial}{\partial x^i} \ln \sqrt{|g|},$

where $g = \det g_{ik}$ is the determinant of the metric tensor. This identity can be used to evaluate the divergence of vectors.
Christoffel symbols of the first kind
The Christoffel symbols of the first kind can be derived either from the Christoffel symbols of the second kind and the metric,

$\Gamma_{cab} = g_{cd}\, \Gamma^d_{ab},$

or from the metric alone,

$\Gamma_{cab} = \tfrac{1}{2} \left( \frac{\partial g_{ca}}{\partial x^b} + \frac{\partial g_{cb}}{\partial x^a} - \frac{\partial g_{ab}}{\partial x^c} \right).$

As an alternative notation one also finds

$\Gamma_{cab} = [ab, c].$

It is worth noting that $[ab, c] = [ba, c]$.
Connection coefficients in a nonholonomic basis
The Christoffel symbols are most typically defined in a coordinate basis, which is the convention followed here. In other words, the name Christoffel symbols is reserved only for coordinate (i.e., holonomic) frames. However, the connection coefficients can also be defined in an arbitrary (i.e., nonholonomic) basis of tangent vectors by
Explicitly, in terms of the metric tensor, this is
where $c_{ij}{}^k$ are the commutation coefficients of the basis; that is,

$[\mathbf{u}_i, \mathbf{u}_j] = c_{ij}{}^k\, \mathbf{u}_k,$

where $\mathbf{u}_k$ are the basis vectors and $[\,\cdot\,, \cdot\,]$ is the Lie bracket. The standard unit vectors in spherical and cylindrical coordinates furnish an example of a basis with non-vanishing commutation coefficients. The difference between the connection in such a frame, and the Levi-Civita connection is known as the contorsion tensor.
Ricci rotation coefficients (asymmetric definition)
When we choose the basis orthonormal: then . This implies that
and the connection coefficients become antisymmetric in the first two indices:
where
In this case, the connection coefficients are called the Ricci rotation coefficients.
Equivalently, one can define Ricci rotation coefficients as follows:
where is an orthonormal nonholonomic basis and its co-basis.
Transformation law under change of variable
Under a change of variable from $(x^1, \ldots, x^n)$ to $(\bar{x}^1, \ldots, \bar{x}^n)$, Christoffel symbols transform as

$\bar{\Gamma}^k_{ij} = \frac{\partial \bar{x}^k}{\partial x^m}\, \frac{\partial x^n}{\partial \bar{x}^i}\, \frac{\partial x^p}{\partial \bar{x}^j}\, \Gamma^m_{np} + \frac{\partial \bar{x}^k}{\partial x^m}\, \frac{\partial^2 x^m}{\partial \bar{x}^i\, \partial \bar{x}^j},$

where the overline denotes the Christoffel symbols in the $\bar{x}$ coordinate system. The Christoffel symbol does not transform as a tensor, but rather as an object in the jet bundle. More precisely, the Christoffel symbols can be considered as functions on the jet bundle of the frame bundle of $M$, independent of any local coordinate system. Choosing a local coordinate system determines a local section of this bundle, which can then be used to pull back the Christoffel symbols to functions on $M$, though of course these functions then depend on the choice of local coordinate system.
For each point, there exist coordinate systems in which the Christoffel symbols vanish at the point. These are called (geodesic) normal coordinates, and are often used in Riemannian geometry.
There are some interesting properties which can be derived directly from the transformation law.
For linear transformation, the inhomogeneous part of the transformation (second term on the right-hand side) vanishes identically and then behaves like a tensor.
If we have two fields of connections, say $\Gamma$ and $\bar{\Gamma}$, then their difference $\Gamma^k_{ij} - \bar{\Gamma}^k_{ij}$ is a tensor since the inhomogeneous terms cancel each other. The inhomogeneous terms depend only on how the coordinates are changed, but are independent of the Christoffel symbol itself.
If the Christoffel symbol is unsymmetric about its lower indices in one coordinate system, i.e., $\Gamma^k_{ij} \neq \Gamma^k_{ji}$, then they remain unsymmetric under any change of coordinates. A corollary to this property is that it is impossible to find a coordinate system in which all elements of the Christoffel symbol are zero at a point, unless the lower indices are symmetric. This property was pointed out by Albert Einstein and Erwin Schrödinger independently.
Relationship to parallel transport and derivation of Christoffel symbols in Riemannian space
If a vector $V^i$ is transported parallel on a curve parametrized by some parameter $t$ on a Riemannian manifold, the rate of change of the components of the vector is given by

$\frac{dV^i}{dt} = -\Gamma^i_{jk}\, \frac{dx^j}{dt}\, V^k.$
Now just by using the condition that the scalar product formed by two arbitrary vectors and is unchanged is enough to derive the Christoffel symbols. The condition is
which by the product rule expands to
Applying the parallel transport rule for the two arbitrary vectors and relabelling dummy indices and collecting the coefficients of (arbitrary), we obtain
This is same as the equation obtained by requiring the covariant derivative of the metric tensor to vanish in the General definition section. The derivation from here is simple. By cyclically permuting the indices in above equation, we can obtain two more equations and then linearly combining these three equations, we can express in terms of the metric tensor.
Relationship to index-free notation
Let $X$ and $Y$ be vector fields with components $X^i$ and $Y^k$. Then the $k$th component of the covariant derivative of $Y$ with respect to $X$ is given by

$(\nabla_X Y)^k = X^i\, \nabla_i Y^k = X^i \left( \frac{\partial Y^k}{\partial x^i} + \Gamma^k_{im}\, Y^m \right).$

Here, the Einstein notation is used, so repeated indices indicate summation over indices and contraction with the metric tensor serves to raise and lower indices:

$g(X, Y) = X^i Y_i = g_{ik}\, X^i Y^k.$

Keep in mind that $g_{ik} \neq g^{ik}$ and that $g^{ik}\, g_{km} = \delta^i_m$, the Kronecker delta. The convention is that the metric tensor is the one with the lower indices; the correct way to obtain $g^{ik}$ from $g_{ik}$ is to solve the linear equations $g^{ij}\, g_{jk} = \delta^i_k$.
The statement that the connection is torsion-free, namely that

$\nabla_X Y - \nabla_Y X = [X, Y],$

is equivalent to the statement that—in a coordinate basis—the Christoffel symbol is symmetric in the lower two indices:

$\Gamma^k_{ij} = \Gamma^k_{ji}.$
The index-less transformation properties of a tensor are given by pullbacks for covariant indices, and pushforwards for contravariant indices. The article on covariant derivatives provides additional discussion of the correspondence between index-free notation and indexed notation.
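As a tiny numerical illustration of the index raising and lowering described above, here is a hypothetical sketch using numpy (the metric values are arbitrary examples, not from the article):

```python
import numpy as np

# An example 2D metric at a single point (symmetric, positive definite);
# the numbers are arbitrary illustration values.
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])
g_inv = np.linalg.inv(g)   # g^{ij}, satisfying g^{ik} g_{kj} = delta^i_j

v_up = np.array([1.0, 3.0])             # contravariant components v^i
v_down = np.einsum('ij,j->i', g, v_up)  # lower an index: v_i = g_{ij} v^j
v_back = np.einsum('ij,j->i', g_inv, v_down)  # raise it again

print(np.allclose(v_back, v_up))           # True: raising undoes lowering
print(np.allclose(g_inv @ g, np.eye(2)))   # Kronecker delta check
```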
Covariant derivatives of tensors
The covariant derivative of a vector field with components $V^m$ is

$\nabla_l V^m = \frac{\partial V^m}{\partial x^l} + \Gamma^m_{kl}\, V^k.$

By corollary, divergence of a vector can be obtained as

$\nabla_m V^m = \frac{1}{\sqrt{|g|}}\, \frac{\partial \left( \sqrt{|g|}\, V^m \right)}{\partial x^m}.$

The covariant derivative of a covector field $\omega_m$ is

$\nabla_l \omega_m = \frac{\partial \omega_m}{\partial x^l} - \Gamma^k_{ml}\, \omega_k.$

The symmetry of the Christoffel symbol now implies

$\nabla_i \nabla_j \varphi = \nabla_j \nabla_i \varphi$
for any scalar field, but in general the covariant derivatives of higher order tensor fields do not commute (see curvature tensor).
The covariant derivative of a type $(2, 0)$ tensor field $A^{ik}$ is

$\nabla_l A^{ik} = \frac{\partial A^{ik}}{\partial x^l} + \Gamma^i_{ml}\, A^{mk} + \Gamma^k_{ml}\, A^{im},$

that is,

$A^{ik}{}_{;l} = A^{ik}{}_{,l} + A^{mk}\, \Gamma^i_{ml} + A^{im}\, \Gamma^k_{ml}.$

If the tensor field is mixed then its covariant derivative is

$A^i{}_{k;l} = A^i{}_{k,l} + A^m{}_{k}\, \Gamma^i_{ml} - A^i{}_{m}\, \Gamma^m_{kl},$

and if the tensor field is of type $(0, 2)$ then its covariant derivative is

$A_{ik;l} = A_{ik,l} - A_{mk}\, \Gamma^m_{il} - A_{im}\, \Gamma^m_{kl}.$
Contravariant derivatives of tensors
To find the contravariant derivative of a vector field, we must first transform it into a covariant derivative using the metric tensor:

$\nabla^l V^m = g^{li}\, \nabla_i V^m.$
Applications
In general relativity
The Christoffel symbols find frequent use in Einstein's theory of general relativity, where spacetime is represented by a curved 4-dimensional Lorentz manifold with a Levi-Civita connection. The Einstein field equations—which determine the geometry of spacetime in the presence of matter—contain the Ricci tensor, and so calculating the Christoffel symbols is essential. Once the geometry is determined, the paths of particles and light beams are calculated by solving the geodesic equations in which the Christoffel symbols explicitly appear.
In classical (non-relativistic) mechanics
Let $x^i$ be the generalized coordinates and $\dot{x}^i$ be the generalized velocities, then the kinetic energy for a unit mass is given by $T = \tfrac{1}{2}\, g_{ik}\, \dot{x}^i \dot{x}^k$, where $g_{ik}$ is the metric tensor. If $V(x)$, the potential function, exists then the contravariant components of the generalized force per unit mass are $F^i = -g^{ik}\, \partial V / \partial x^k$. The metric (here in a purely spatial domain) can be obtained from the line element $ds^2 = g_{ik}\, dx^i dx^k$. Substituting the Lagrangian $L = T - V$ into the Euler-Lagrange equation, we get

$g_{ik}\, \ddot{x}^k + \tfrac{1}{2} \left( \frac{\partial g_{ik}}{\partial x^l} + \frac{\partial g_{il}}{\partial x^k} - \frac{\partial g_{kl}}{\partial x^i} \right) \dot{x}^k \dot{x}^l = F_i.$

Now multiplying by $g^{ij}$, we get

$\ddot{x}^j + \Gamma^j_{kl}\, \dot{x}^k \dot{x}^l = F^j.$
When Cartesian coordinates can be adopted (as in inertial frames of reference), we have a Euclidean metric, the Christoffel symbols vanish, and the equation reduces to Newton's second law of motion. In curvilinear coordinates (and forcedly in non-inertial frames, where the metric is non-Euclidean and not flat), fictitious forces like the centrifugal force and Coriolis force originate from the Christoffel symbols, that is, from the purely spatial curvilinear coordinates.
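To see these terms at work, here is a minimal numerical sketch of my own (assuming numpy and a hand-rolled fixed-step RK4, both illustrative choices): force-free motion integrated in polar coordinates, where the Christoffel terms $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = 1/r$ supply the centrifugal and Coriolis-like contributions, reproduces a straight line.

```python
import numpy as np

# Geodesic (force-free) equations in polar coordinates (r, theta),
# metric diag(1, r^2), so Gamma^r_{theta theta} = -r and
# Gamma^theta_{r theta} = 1/r:
#   r''     = r * theta'^2
#   theta'' = -2 r' theta' / r
def accel(state):
    r, th, rdot, thdot = state
    return np.array([rdot, thdot, r * thdot**2, -2.0 * rdot * thdot / r])

# Classical fourth-order Runge-Kutta step with a fixed step size.
def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
    return y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Start at (x, y) = (1, 0) moving in the +y direction with unit speed.
state = np.array([1.0, 0.0, 0.0, 1.0])  # r, theta, dr/dt, dtheta/dt
for _ in range(2000):
    state = rk4_step(accel, state, 0.001)

r, th = state[0], state[1]
x, y = r*np.cos(th), r*np.sin(th)
print(x, y)  # x stays ~1.0: the path is the straight line x = 1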
In Earth surface coordinates
Given a spherical coordinate system $(R, \theta, \varphi)$, which describes points on the Earth's surface (approximated as an ideal sphere).

For a point x, $R$ is the distance to the Earth's core (usually approximately the Earth's radius). $\theta$ and $\varphi$ are the latitude and longitude. Positive $\theta$ is the northern hemisphere. To simplify the derivatives, the angles are given in radians (where d sin(x)/dx = cos(x); degree values would introduce an additional factor of 360 / 2 pi).

At any location, the tangent directions are $e_R$ (up), $e_\theta$ (north) and $e_\varphi$ (east) - you can also use indices 1, 2, 3.
The related metric tensor has only diagonal elements (the squared vector lengths): $g_{RR} = 1$, $g_{\theta\theta} = R^2$, $g_{\varphi\varphi} = R^2 \cos^2\theta$. This is an advantage of the coordinate system and not generally true.
Now the necessary quantities can be calculated. Examples:

$g^{\theta\theta} = \frac{1}{R^2}, \qquad \frac{\partial g_{\varphi\varphi}}{\partial \theta} = -2 R^2 \cos\theta \sin\theta.$

The resulting Christoffel symbols of the second kind then are (organized by the "derivative" index in a matrix); the nonzero entries are

$\Gamma^R_{\theta\theta} = -R, \quad \Gamma^R_{\varphi\varphi} = -R \cos^2\theta, \quad \Gamma^\theta_{R\theta} = \Gamma^\varphi_{R\varphi} = \frac{1}{R}, \quad \Gamma^\theta_{\varphi\varphi} = \cos\theta \sin\theta, \quad \Gamma^\varphi_{\theta\varphi} = -\tan\theta,$

each symmetric in its lower indices.
These values show how the tangent directions (columns: $e_R$, $e_\theta$, $e_\varphi$) change, seen from an outside perspective (e.g. from space), but given in the tangent directions of the actual location (rows: $e_R$, $e_\theta$, $e_\varphi$).
As an example, take the nonzero derivatives by $\theta$ above, which correspond to a movement towards north (positive dθ):
The new north direction changes by -R dθ in the up (R) direction. So the north direction will rotate downwards towards the center of the Earth.
Similarly, the up direction will be adjusted towards the north. The different lengths of $e_R$ and $e_\theta$ lead to a factor of 1/R.
Moving north, the east tangent vector changes its length (-tan(θ) on the diagonal), it will shrink (-tan(θ) dθ < 0) on the northern hemisphere, and increase (-tan(θ) dθ > 0) on the southern hemisphere.
These effects may not be apparent during the movement, because they are the adjustments that keep the measurements in the coordinates $R$, $\theta$, $\varphi$. Nevertheless, they can affect distances, physics equations, etc. So if e.g. you need the exact change of a magnetic field pointing approximately "south", it can be necessary to also correct your measurement by the change of the north direction using the Christoffel symbols to get the "true" (tensor) value.
The Christoffel symbols of the first kind show the same change using metric-corrected coordinates, e.g. for derivative by :
Lagrangian approach at finding a solution
Cartesian and cylindrical polar coordinates are related as:

$x = r \cos\theta, \quad y = r \sin\theta, \quad z = z$

and

$r = \sqrt{x^2 + y^2}, \quad \theta = \arctan\left(\frac{y}{x}\right), \quad z = z.$

In Cartesian coordinates the Christoffel symbols vanish, therefore, in cylindrical coordinates:
Spherical coordinates (using Lagrangian 2x2x2)
The Lagrangian can be evaluated as:
Hence,
can be rearranged to
By using the following geodesic equation:

$\frac{d^2 x^\mu}{ds^2} + \Gamma^\mu_{\alpha\beta}\, \frac{dx^\alpha}{ds}\, \frac{dx^\beta}{ds} = 0$
The following can be obtained:
Lagrangian mechanics in geodesics (principles of least action in Christoffel symbols)
Incorporating Lagrangian mechanics and using the Euler–Lagrange equation, Christoffel symbols can be substituted into the Lagrangian to account for the geometry of the manifold. Since the Christoffel symbols are calculated from the metric tensor, the equations can be derived and expressed from the principle of least action. When applying the Euler-Lagrange equation to a system of equations, the Lagrangian will include terms involving the Christoffel symbols, allowing the equations to account for the curvature and determine the correct equations of motion for objects moving along geodesics.
Using the principle of least action from the Euler-Lagrange equation
The Euler-Lagrange equation is applied to a functional related to the path of an object in a spherical coordinate system,
Given a functional $J[f] = \int_a^b L\left(x, f(x), f'(x)\right) dx$ such that $f(a)$ and $f(b)$ are fixed — if $J$ reaches its minimum at $f$, then $f$ is a solution that can be found by solving the Euler–Lagrange differential equation:

$\frac{\partial L}{\partial f} - \frac{d}{dx}\, \frac{\partial L}{\partial f'} = 0.$
The differential equation provides the mathematical conditions that must be satisfied for this optimal path.
See also
Basic introduction to the mathematics of curved spacetime
Differentiable manifold
List of formulas in Riemannian geometry
Ricci calculus
Riemann–Christoffel tensor
Gauss–Codazzi equations
Example computation of Christoffel symbols
Notes
References
Riemannian geometry
Lorentzian manifolds
Mathematical notation
Mathematical physics
Connection (mathematics) | Christoffel symbols | [
"Physics",
"Mathematics"
] | 4,396 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics",
"nan"
] |
1,401,565 | https://en.wikipedia.org/wiki/CuSil | CuSil is a tradename for an alloy of 72% silver and 28% copper (± 1%) marketed by Morgan Advanced Materials. It is a eutectic alloy primarily used for vacuum brazing. CuSil should not be confused with the similarly named Cusil-ABA, which has a different composition (Ag – 63.0%, Cu – 35.25%, Ti – 1.75%)
References
Precious metal alloys
Brazing and soldering | CuSil | [
"Chemistry"
] | 95 | [
"Precious metal alloys",
"Alloys",
"Alloy stubs"
] |
25,756,089 | https://en.wikipedia.org/wiki/Deadweight%20tester | A dead weight tester apparatus uses weights to apply pressure to a fluid for checking the accuracy of readings from a pressure gauge. A dead weight tester (DWT) is a calibration standard method that uses a piston cylinder on which a load is placed to make an equilibrium with an applied pressure underneath the piston. Deadweight testers are secondary standards which means that the pressure measured by a deadweight tester is defined through other quantities: length, mass and time.
Typically deadweight testers are used to calibrate pressure measuring devices.
Formula
The formula on which the design of a DWT is based is expressed as follows:

$p = \frac{m\, g}{A_{\text{eff}}}$

where:

$p$ is the reference pressure, $m$ is the total mass loaded on the piston, $g$ is the local acceleration due to gravity, and $A_{\text{eff}}$ is the effective area of the piston–cylinder unit.
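A minimal sketch of that relation in code (the function name and example numbers are mine, and real calibrations add corrections for local gravity, air buoyancy and thermal expansion of the piston–cylinder, which are omitted here):

```python
# Basic dead weight tester relation: p = m * g / A_eff.

def dwt_pressure(mass_kg: float, area_m2: float, g: float = 9.80665) -> float:
    """Pressure (Pa) generated by a mass loaded on a piston of effective area."""
    return mass_kg * g / area_m2

# Example: 10 kg on a piston with an effective area of 1 cm^2 (1e-4 m^2)
p = dwt_pressure(10.0, 1e-4)
print(f"{p:.0f} Pa  (~{p/1e5:.3f} bar)")   # ~980665 Pa, about 9.8 bar
```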
Piston cylinder design
In general there are three different kinds of DWTs, divided by the medium which is measured and the lubricant which is used for the measuring element:
gas operated gas lubricated PCU's
gas operated oil lubricated PCU's
oil operated oil lubricated PCU's
See also
Blaise Pascal
Pascal (unit)
Calibration
Force gauge
Piezometer
Pressure measurement
Pressure sensor
Vacuum engineering
References
Measuring instruments | Deadweight tester | [
"Technology",
"Engineering"
] | 226 | [
"Measuring instruments"
] |
25,759,115 | https://en.wikipedia.org/wiki/Endre%20Berner | Endre Qvie Berner (24 September 1893 – 30 January 1983) was a Norwegian organic chemist, author and educator.
Background
He was born in Stavanger as a son of businessperson Endre Qvie Berner, Sr. (1853–1925) and his wife Anna Marie Gjemre (1875–1958). He worked at a workshop after finishing middle school, and enrolled in machinery studies at Bergen Technical School in 1911, but switched to chemistry at the Norwegian Institute of Technology in 1913. He graduated in 1918, and was then hired as research assistant of his advisor Claus Nissen Riiber. In 1922 he was promoted to docent. He studied in Munich (with Richard Willstätter and Heinrich Otto Wieland) in 1922–1923 and 1928, and in Birmingham (with Walter Haworth) in 1929. He took the doctorate in 1926 with the thesis A Contribution to the Thermochemistry of Organic Compounds.
Career
In 1934 he was appointed as professor at the University of Oslo. He is well known in the Nordic countries for his textbook Lærebok i organisk kjemi. The first modern Norwegian textbook in organic chemistry, it was first released in 1942 and then re-released several times, the last in 1964. The 1958 edition became known for introducing new Norwegian-language names of several chemical elements: hydrogen, nitrogen, karbon (carbon) and oksygen (oxygen).
During the occupation of Norway by Nazi Germany, his academic career was interrupted. When the Nazi authorities were about to change the rules for admission to the university in autumn 1943, a protest ensued. In retaliation, the authorities arrested 11 staff, 60 male students and 10 female students. The staff members Johannes Andenæs, Eiliv Skard, Johan Christian Schreiner, Harald Krabbe Schjelderup, Anatol Heintz, Odd Hassel, Ragnar Frisch, Carl Jacob Arnholm, Bjørn Føyn and Endre Berner were sent to Grini concentration camp. Berner was first incarcerated at Berg concentration camp from 22 November 1943, then at Grini until 24 December 1944.
After the war Berner continued as professor at the University of Oslo until 1962, except for a stay at the Imperial College London from 1954 to 1955. He was also active as a professor emeritus until his death. He was elected as a member of the Royal Norwegian Society of Sciences and Letters in 1927, of the Norwegian Academy of Science and Letters in 1933 and of the Society of Chemical Industry in 1951. In 1959, he earned the Nansen medal for Outstanding Research and in 1969 he was decorated with the Order of St. Olav. He was the president of the Norwegian Chemical Society from 1946 to 1950, having co-founded the Trondheim branch of the society, and ultimately received honorary membership.
Personal life
He was married twice: first in 1922 to Nathalia Adelaide Weidemann (1896–1930), and second in 1935 to Erna Gay (1909–2003). He died in 1983 in Oslo and was buried at Vestre gravlund.
References
1893 births
1983 deaths
People from Stavanger
Norwegian chemists
Norwegian educators
Organic chemists
Norwegian expatriates in the United Kingdom
Norwegian expatriates in Germany
Norwegian Institute of Technology alumni
Academic staff of the Norwegian Institute of Technology
Academics of Imperial College London
Academic staff of the University of Oslo
Norwegian resistance members
Berg concentration camp survivors
Grini concentration camp survivors
Royal Norwegian Society of Sciences and Letters
Members of the Norwegian Academy of Science and Letters
Recipients of the St. Olav's Medal
Burials at Vestre gravlund | Endre Berner | [
"Chemistry"
] | 739 | [
"Organic chemists"
] |
25,759,979 | https://en.wikipedia.org/wiki/Finite%20element%20limit%20analysis | A finite element limit analysis (FELA) uses optimisation techniques to directly compute the upper or lower bound plastic collapse load (or limit load) for a mechanical system rather than time stepping to a collapse load, as might be undertaken with conventional non-linear finite element techniques. The problem may be formulated in either a kinematic or equilibrium form.
The technique has been used most significantly in the field of soil mechanics for the determination of collapse loads for geotechnical problems (e.g. slope stability analysis). An alternative technique which may be used to undertake similar direct plastic collapse computations using optimization is Discontinuity layout optimization.
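As a toy illustration of the underlying idea — not FELA itself — the static (lower bound) theorem can be cast as a linear program: maximise the load factor subject to equilibrium and yield constraints. The sketch below (assuming Python with scipy; the three-bar truss, unit yield forces and geometry are invented for the example) recovers the exact collapse load factor $1 + \sqrt{2}$:

```python
import numpy as np
from scipy.optimize import linprog

# Toy lower-bound (static) limit analysis as a linear program -- the same
# optimisation idea FELA applies with finite element stress fields.
# Hypothetical example: three pin-jointed bars meeting at one node, at
# 45, 90 and 135 degrees, each with yield force |q_i| <= 1, carrying a
# vertical load lambda. Maximise lambda subject to equilibrium and yield.
s = 1 / np.sqrt(2)
# Variables: x = [q1, q2, q3, lam]
A_eq = np.array([[-s, 0.0, s, 0.0],    # horizontal equilibrium at the node
                 [ s, 1.0, s, -1.0]])  # vertical equilibrium at the node
b_eq = np.zeros(2)
c = np.array([0.0, 0.0, 0.0, -1.0])    # minimise -lambda = maximise lambda
bounds = [(-1, 1), (-1, 1), (-1, 1), (0, None)]  # yield limits on bar forces

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("collapse load factor:", res.x[-1])  # 1 + sqrt(2) ~ 2.414
```

Any feasible point of this program is a statically admissible stress field, so the optimum is the largest load guaranteed not to exceed the true collapse load — exactly the role the lower-bound formulation plays in FELA.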
Software for finite element limit analysis
OptumG2 (2014-) General purpose software for 2D geotechnical applications.
OptumG3 (2017-) General purpose software for 3D geotechnical applications.
See also
Limit analysis
References
Further reading
Kumar, Jyant, and Debasis Mohapatra. "Lower-bound finite elements limit analysis for Hoek-Brown materials using semidefinite programming." Journal of Engineering Mechanics 143.9 (2017): 04017077.
Makrodimopoulos, A., and C. M. Martin. "Lower bound limit analysis of cohesive‐frictional materials using second‐order cone programming." International Journal for Numerical Methods in Engineering 66.4 (2006): 604-634.
Kumar, Jyant, and Vishwas N. Khatri. "Bearing capacity factors of circular foundations for a general c–ϕ soil using lower bound finite elements limit analysis." International Journal for Numerical and Analytical Methods in Geomechanics 35.3 (2011): 393-405.
Tang, Chong, Kim-Chuan Toh, and Kok-Kwang Phoon. "Axisymmetric lower-bound limit analysis using finite elements and second-order cone programming." Journal of Engineering Mechanics 140.2 (2013): 268-278.
Kumar, Jyant, and Obaidur Rahaman. "Vertical uplift resistance of horizontal plate anchors for eccentric and inclined loads." Canadian Geotechnical Journal(2018).
Mohapatra D, Kumar J. Collapse loads for rectangular foundations by three‐dimensional upper bound limit analysis using radial point interpolation method. Int J Numer Anal Methods Geomech. 2018;1–20. https://doi.org/10.1002/nag.2885
Structural analysis
Soil mechanics | Finite element limit analysis | [
"Physics",
"Engineering"
] | 516 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Structural analysis",
"Soil mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
6,770,109 | https://en.wikipedia.org/wiki/Tacamahac | Tacamahac is the name of medicinal resins, now little used, obtained from several plant sources including Calophyllum tacamahaca and Calophyllum inophyllum.
The word has sometimes been regarded, apparently wrongly, as a synonym of balm of Gilead.
External links
U. S. Dispensatory, 1918 (Henriette's Herbal Homepage)
Resins | Tacamahac | [
"Physics"
] | 82 | [
"Amorphous solids",
"Unsolved problems in physics",
"Resins"
] |
6,771,335 | https://en.wikipedia.org/wiki/Adastral%20Park | Adastral Park is a science campus based on part of the old Royal Air Force Station at Martlesham Heath, near Ipswich in the English county of Suffolk.
When the site opened it was known as the Post Office Research Station, but it was subsequently renamed BT Research Laboratories or BT Labs and later Adastral Park to reflect an expansion in the organisations and activities co-located with BT Labs at the campus.
History
The original laboratories (when BT was part of the Post Office) were first opened by Elizabeth II in 1975. Prior to this the Post Office Research Station was at Dollis Hill in northwest London. Martlesham Heath was chosen as the site for a research facility because the surrounding countryside was relatively flat and therefore ideal for testing the radio-based communication systems in vogue at the time.
Initially, research was carried out into postal sorting and delivery technology, and telecommunications. After the Post Office was split apart and prior to British Telecom's privatisation in the early 1980s, the research concentrated on telecommunications.
In keeping with the stellar theme of the site name, buildings on site are named after stars or constellations (an example being the Main Laboratory Block now named the Orion building). The Orion building is easily recognisable from the nearby A12 road with its radio tower, now named Pegasus tower, dominating the skyline.
The change to the current name occurred in the late 1990s with the aim of turning the site into a high-technology business park, no longer exclusively for the use of BT. The name was created by Stewart Davies, the CEO of the BT business (BT Exact Technologies) headquartered at the site at that time. It is derived from the motto of the Royal Air Force—per ardua ad astra ("through adversity to the stars"). The Royal Air Force were prior residents of the site, as RAF Martlesham Heath. Experimental aircraft test flights flew from the airfield and the name was meant to reflect this. In March 2001, University College London, Faculty of Engineering Sciences, chose Adastral Park to set up the first-ever postgraduate research and teaching centre on an industrial campus, which was housed there until 2009. During the transformation of the business park, many of the old buildings were removed and car parks were moved to the perimeter of the site, with the centre made into open parkland with a water feature to provide a 'park' feel to the complex. The site accommodates approximately 4,000 people.
Current use
Companies based at Adastral Park besides BT (BT Applied Research) include:
Openreach
F5 Networks
Juniper Networks
GENBAND
Maly IT Solutions
Cisco
Coderus.com
CommsUnite
Fujitsu
Huawei
O2 plc
Arqiva
CIP
Milner Strategic Marketing
There is also a satellite earth station operated by Arqiva; the location was chosen for the visibility of satellites on the eastern horizon. In 2018 there were 98 high-tech companies at the site.
Peregrine falcons
BT worked with the Hawk and Owl Trust to set up a nesting box for a pair of peregrine falcons. These produced two chicks in 2019, and in 2020 a YouTube channel was set up and three chicks were produced.
Adastral New Town
Over many years, BT has put forward various proposals and plans to expand activities at the business park. In June 2001, a framework for expanding the business park was created, but it was not linked to building any residential housing on the site. At the time BT forecast 3000 to 3500 additional jobs by about 2010. In 2007, BT said that they could develop the business park without the need for the income from selling land for housing.
In 2006, Suffolk Coastal District Council (SCDC) rejected a planning application for 120 log cabins on a site next to Waldringfield Road. The rejection was on the grounds, amongst other reasons, that it was too near the Area of Outstanding Natural Beauty and would result in an unacceptable increase in visitor numbers to sensitive areas. BT initially objected on the grounds that it would interfere with their radio test area, although BT subsequently withdrew their objection provided the developer created a protective earth bund; this proposal was rejected by SCDC. BT subsequently lodged a planning application for 2000 houses to be built. At its closest the site comes within of an Area of Outstanding Natural Beauty, and there are several Sites of Special Scientific Interest close by, such as Newbourne Springs. In April 2018, SCDC (subsequently merged with a neighbouring council to become East Suffolk District Council) gave outline permission for the development, which is now named Brightwell Lakes.
See also
BT Research
Post Office Research Station
References
External links
Adastral Park home page
Connected Earth Museum on the origin of BT Laboratories, Martlesham
Coverage from theregister.co.uk on domain grab
Further coverage on domain grab
No to Adastral New Town
Press coverage over Adastral New Town
British Telecom buildings and structures
Buildings and structures in Suffolk
Engineering research institutes
History of telecommunications in the United Kingdom
Research institutes in Suffolk
Science parks in the United Kingdom | Adastral Park | [
"Engineering"
] | 1,015 | [
"Engineering research institutes"
] |
2,834,500 | https://en.wikipedia.org/wiki/Pentyl%20group | Pentyl is a five-carbon alkyl group or substituent with chemical formula -C5H11. It is the substituent form of the alkane pentane.
In older literature, the common non-systematic name amyl was often used for the pentyl group. Conversely, the name pentyl was used for several five-carbon branched alkyl groups, distinguished by various prefixes. The nomenclature has now reversed, with "amyl" being more often used to refer to the terminally branched group also called isopentyl, as in amobarbital.
A cyclopentyl group is a ring with the formula -C5H9.
The name is also used for the pentyl radical, a pentyl group as an isolated molecule. This free radical is only observed in extreme conditions. Its formula is often written "C5H11•" or "•C5H11" to indicate that it has one unsatisfied valence bond. Radicals like pentyl are reactive; they react with neighboring atoms or molecules (like oxygen, water, etc.).
Older "pentyl" groups
The following names are still sometimes used:
Pentyl radical
The free radical pentyl was studied by J. Pacansky and A. Gutierrez in 1983. The radical was obtained by exposing bishexanoyl peroxide trapped in frozen argon to ultraviolet light, that caused its decomposition into two carbon dioxide () molecules and two pentyl radicals.
Examples
Pentanol
Pentyl pentanoate
Amylamine
Amyl acetate
Amyl alcohol
Amylmetacresol
Isoamyl acetate
Isoamyl alcohol
References
Alkyl groups | Pentyl group | [
"Chemistry"
] | 352 | [
"Substituents",
"Alkyl groups"
] |
2,834,719 | https://en.wikipedia.org/wiki/Features%2C%20events%2C%20and%20processes | Features, Events, and Processes (FEP) are terms used in the fields of radioactive waste management, carbon capture and storage, and hydraulic fracturing to define relevant scenarios for safety assessment studies. For a radioactive waste repository, features would include the characteristics of the site, such as the type of soil or geological formation the repository is to be built on or under. Events would include things that may or will occur in the future, like, e.g., glaciations, droughts, earthquakes, or formation of faults. Processes are things that are ongoing, such as the erosion or subsidence of the landform where the site is located on, or near.
Several catalogues of FEPs are publicly available, among others one elaborated for the NEA Clay Club dealing with the disposal of radioactive waste in deep clay formations,
and those compiled for deep crystalline rocks (granite) by Svensk Kärnbränslehantering AB, SKB, the Swedish Nuclear Fuel and Waste Management Company.
References
External links
Free PDF reports are accessible from here:
Nuclear Energy Agency
SKB (Sweden) web site
Posiva (Finland) Databank web site
NAGRA (Switzerland) web site
Radioactive waste
Nuclear safety and security | Features, events, and processes | [
"Physics",
"Chemistry",
"Technology"
] | 255 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Hazardous waste",
"Radioactivity",
"Nuclear physics",
"Environmental impact of nuclear power",
"Radioactive waste"
] |
2,835,186 | https://en.wikipedia.org/wiki/Osmotic%20concentration | Osmotic concentration, formerly known as osmolarity, is the measure of solute concentration, defined as the number of osmoles (Osm) of solute per litre (L) of solution (osmol/L or Osm/L). The osmolarity of a solution is usually expressed as Osm/L (pronounced "osmolar"), in the same way that the molarity of a solution is expressed as "M" (pronounced "molar").
Whereas molarity measures the number of moles of solute per unit volume of solution, osmolarity measures the number of particles on dissociation of osmotically active material (osmoles of solute particles) per unit volume of solution. This value allows the measurement of the osmotic pressure of a solution and the determination of how the solvent will diffuse across a semipermeable membrane (osmosis) separating two solutions of different osmotic concentration.
Unit
The unit of osmotic concentration is the osmole. This is a non-SI unit of measurement that defines the number of moles of solute that contribute to the osmotic pressure of a solution. A milliosmole (mOsm) is one thousandth of an osmole. A microosmole (μOsm) (also spelled micro-osmole) is one millionth of an osmole.
Types of solutes
Osmolarity is distinct from molarity because it measures osmoles of solute particles rather than moles of solute. The distinction arises because some compounds can dissociate in solution, whereas others cannot.
Ionic compounds, such as salts, can dissociate in solution into their constituent ions, so there is not a one-to-one relationship between the molarity and the osmolarity of a solution. For example, sodium chloride (NaCl) dissociates into Na+ and Cl− ions. Thus, for every 1 mole of NaCl in solution, there are 2 osmoles of solute particles (i.e., a 1 mol/L NaCl solution is a 2 osmol/L NaCl solution). Both sodium and chloride ions affect the osmotic pressure of the solution.
[Note: NaCl does not dissociate completely in water at standard temperature and pressure, so the solution will be composed of Na+ ions, Cl- ions, and some NaCl molecules, with actual osmolality = Na+ concentration x 1.75]
Another example is magnesium chloride (MgCl2), which dissociates into Mg2+ and 2Cl− ions. For every 1 mole of MgCl2 in the solution, there are 3 osmoles of solute particles.
Nonionic compounds do not dissociate, and form only 1 osmole of solute per 1 mole of solute. For example, a 1 mol/L solution of glucose is 1 osmol/L.
Multiple compounds may contribute to the osmolarity of a solution. For example, a 3 Osm solution might consist of 3 moles glucose, or 1.5 moles NaCl, or 1 mole glucose + 1 mole NaCl, or 2 moles glucose + 0.5 mole NaCl, or any other such combination.
Definition
The osmolarity of a solution, given in osmoles per liter (osmol/L), is calculated from the following expression:

$\text{osmolarity} = \sum_i \varphi_i\, n_i\, C_i,$
where
$\varphi_i$ is the osmotic coefficient, which accounts for the degree of non-ideality of the solution. In the simplest case it is the degree of dissociation of the solute. Then, $\varphi_i$ is between 0 and 1 where 1 indicates 100% dissociation. However, $\varphi_i$ can also be larger than 1 (e.g. for sucrose). For salts, electrostatic effects cause $\varphi_i$ to be smaller than 1 even if 100% dissociation occurs (see Debye–Hückel equation);
$n_i$ is the number of particles (e.g. ions) into which a molecule dissociates. For example: glucose has $n$ of 1, while NaCl has $n$ of 2;
$C_i$ is the molar concentration of the solute;
the index $i$ represents the identity of a particular solute.
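A minimal sketch of the expression above in code (plain Python; the solute list and the ideal $\varphi = 1$ coefficients are illustrative assumptions, not measured values):

```python
# Osmolarity as the sum over solutes of phi_i * n_i * C_i.

def osmolarity(solutes):
    """solutes: list of (phi, n_particles, molar_concentration) tuples.
    Returns osmotic concentration in osmol/L."""
    return sum(phi * n * c for phi, n, c in solutes)

# 0.15 mol/L NaCl (dissociates into 2 particles) plus 0.005 mol/L glucose
# (1 particle), treated as ideal (phi = 1) for simplicity:
print(osmolarity([(1.0, 2, 0.15), (1.0, 1, 0.005)]))  # 0.305 osmol/L
```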
Osmolarity can be measured using an osmometer which measures colligative properties, such as freezing-point depression, vapor pressure, or boiling-point elevation.
Osmolarity vs. tonicity
Osmolarity and tonicity are related but distinct concepts. Thus, the terms ending in -osmotic (isosmotic, hyperosmotic, hypoosmotic) are not synonymous with the terms ending in -tonic (isotonic, hypertonic, hypotonic). The terms are related in that they both compare the solute concentrations of two solutions separated by a membrane. The terms are different because osmolarity takes into account the total concentration of penetrating solutes and non-penetrating solutes, whereas tonicity takes into account the total concentration of non-freely penetrating solutes only.
Penetrating solutes can diffuse through the cell membrane, causing momentary changes in cell volume as the solutes "pull" water molecules with them. Non-penetrating solutes cannot cross the cell membrane; therefore, the movement of water across the cell membrane (i.e., osmosis) must occur for the solutions to reach equilibrium.
A solution can be both hyperosmotic and isotonic. For example, the intracellular fluid and extracellular fluid can be hyperosmotic, but isotonic – if the total concentration of solutes in one compartment is different from that of the other, but one of the ions can cross the membrane (in other words, a penetrating solute), drawing water with it, thus causing no net change in solution volume.
In medicine
Plasma osmolarity vs. osmolality
Plasma osmolarity, the osmolarity of blood plasma, can be calculated from plasma osmolality by the following equation:

    osmolarity = osmolality × (ρsol − ca)

where:
ρsol is the density of the solution in g/ml, which is 1.025 g/ml for blood plasma;
ca is the (anhydrous) solute concentration in g/ml – not to be confused with the density of dried plasma.
According to IUPAC, osmolality is the quotient of the negative natural logarithm of the rational activity of water and the molar mass of water, whereas osmolarity is the product of the osmolality and the mass density of water (also known as osmotic concentration).
In simpler terms, osmolality is an expression of solute osmotic concentration per mass of solvent, whereas osmolarity is per volume of solution; the conversion is made by multiplying with the mass density of solvent in solution (kg solvent/litre solution):

    osmolality = Σi φi ni mi

where mi is the molality of component i.
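A hedged numeric sketch of this conversion (the plasma osmolality value and the anhydrous solute content ca are assumed illustrative figures, not measurements from this article):

```python
osmolality = 285.0    # mOsm/kg water, an assumed typical plasma value
rho_sol = 1.025       # g/mL, density of blood plasma (from the text)
c_a = 0.07            # g/mL, assumed anhydrous solute concentration

# osmolarity = osmolality * (rho_sol - c_a); the factor (rho_sol - c_a) is the
# grams of water per mL of solution, i.e. kg of solvent per litre of solution.
osmolarity = osmolality * (rho_sol - c_a)
print(f"{osmolarity:.0f} mOsm/L")   # ~272 mOsm/L of solution
```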
Plasma osmolarity/osmolality is important for keeping proper electrolytic balance in the blood stream. Improper balance can lead to dehydration, alkalosis, acidosis or other life-threatening changes. Antidiuretic hormone (vasopressin) is partly responsible for this process by controlling the amount of water the body retains from the kidney when filtering the blood stream.
Hyperosmolarity and hypoosmolarity
A concentration of an osmotically active substance is said to be hyperosmolar if a high concentration causes a change in osmotic pressure in a tissue, organ, or system. Similarly, it is said to be hypoosmolar if the osmolarity, or osmotic concentration, is too low. For example, if the osmolarity of parenteral nutrition is too high, it can cause severe tissue damage. One example of a condition caused by hypoosmolarity is water intoxication.
See also
Molarity
Molality
Plasma osmolality
Tonicity
van 't Hoff factor
References
D. J. Taylor, N. P. O. Green, G. W. Stout Biological Science
External links
Online Serum Osmolarity/Osmolality calculator
Concentration
Amount of substance
Solutions | Osmotic concentration | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,722 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Chemical quantities",
"Amount of substance",
"Homogeneous chemical mixtures",
"Concentration",
"Solutions",
"Wikipedia categories named after physical quantities"
] |
2,835,834 | https://en.wikipedia.org/wiki/Signal%20velocity | The signal velocity is the speed at which a wave carries information. It describes how quickly a message can be communicated (using any particular method) between two separated parties. No signal velocity can exceed the speed of a light pulse in a vacuum (by special relativity).
Signal velocity is usually equal to group velocity (the speed of a short "pulse" or of a wave-packet's middle or "envelope"). However, in a few special cases (e.g., media designed to amplify the front-most parts of a pulse and then attenuate the back section of the pulse), group velocity can exceed the speed of light in vacuum, while the signal velocity will still be less than or equal to the speed of light in vacuum.
In electronic circuits, signal velocity is one member of a group of five closely related parameters. In these circuits, signals are usually treated as operating in TEM (Transverse ElectroMagnetic) mode. That is, the fields are perpendicular to the direction of transmission and perpendicular to each other. Given this presumption, the quantities: signal velocity, the product of dielectric constant and magnetic permeability, characteristic impedance, inductance of a structure, and capacitance of that structure, are all related such that if you know any two, you can calculate the rest. In a uniform medium if the permeability is constant, then variation of the signal velocity will be dependent only on variation of the dielectric constant.
In a transmission line, signal velocity is the reciprocal of the square root of the capacitance-inductance product, where inductance and capacitance are typically expressed as per-unit length. In circuit boards made of FR-4 material, the signal velocity is typically about six inches (15 cm) per nanosecond, or 6.562 ps/mm. In circuit boards made of Polyimide material, the signal velocity is typically about 16.3 cm per nanosecond or 6.146 ps/mm. In these boards, permeability is usually constant and dielectric constant often varies from location to location, causing variations in signal velocity. As data rates increase, these variations become a major concern for computer manufacturers.
The signal velocity is

    v = c / √(εr μr) ≈ c / √εr

where εr is the relative permittivity of the medium, μr is the relative permeability of the medium, and c is the speed of light in vacuum. The approximation shown is used in many practical contexts because for most common materials μr ≈ 1.
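As a rough numeric sketch (the εr value for FR-4 is an assumed typical figure; real laminates vary with frequency and resin content):

```python
import math

c = 299_792_458.0   # speed of light in vacuum, m/s

def signal_velocity(eps_r, mu_r=1.0):
    """TEM-mode signal velocity for a medium with relative permittivity eps_r."""
    return c / math.sqrt(eps_r * mu_r)

v = signal_velocity(3.9)   # assumed effective eps_r for FR-4
print(f"{v:.3e} m/s = {v * 1e-7:.1f} cm/ns = {1e9 / v:.2f} ps/mm")
# ~1.5e8 m/s, ~15 cm/ns, ~6.6 ps/mm -- consistent with the figures quoted above
```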
See also
Dispersion (optics)
Front velocity
Phase velocity
Propagation delay
Time of flight
Velocity factor
Dielectric constant
References
Brillouin, Léon. Wave propagation and group velocity. Academic Press Inc., New York (1960).
Clayton R. Paul, Analysis of Multiconductor Transmission Lines. John Wiley & Sons., New York (1994)
Wave mechanics | Signal velocity | [
"Physics"
] | 581 | [
"Physical phenomena",
"Classical mechanics stubs",
"Classical mechanics",
"Waves",
"Wave mechanics"
] |
2,836,053 | https://en.wikipedia.org/wiki/Natural%20uranium | Natural uranium (NU or Unat) is uranium with the same isotopic ratio as found in nature. It contains 0.711% uranium-235, 99.284% uranium-238, and a trace of uranium-234 by weight (0.0055%). Approximately 2.2% of its radioactivity comes from uranium-235, 48.6% from uranium-238, and 49.2% from uranium-234.
Natural uranium can be used to fuel both low- and high-power nuclear reactors. Historically, graphite-moderated reactors and heavy water-moderated reactors have been fueled with natural uranium in the pure metal (U) or uranium dioxide (UO2) ceramic forms. However, experimental fuelings with uranium trioxide (UO3) and triuranium octaoxide (U3O8) have shown promise.
Natural uranium's roughly 0.7% uranium-235 content is not sufficient to produce a self-sustaining critical chain reaction in light water reactors or nuclear weapons; these applications must use enriched uranium. Nuclear weapons require a concentration of about 90% uranium-235, and light water reactors require a concentration of roughly 3% uranium-235. Unenriched natural uranium is appropriate fuel for a heavy-water reactor, like a CANDU reactor.
On rare occasions, earlier in geologic history when uranium-235 was more abundant, uranium ore was found to have naturally engaged in fission, forming natural nuclear fission reactors. Uranium-235 decays at a faster rate (half-life of 700 million years) compared to uranium-238, which decays extremely slowly (half-life of 4.5 billion years). Therefore, a billion years ago, there was more than double the uranium-235 compared to now.
During the Manhattan Project, the name Tuballoy was used to refer to natural uranium in the refined condition; this term is still in occasional use. Uranium was also codenamed "X-Metal" during World War II. Similarly, enriched uranium was referred to as Oralloy (Oak Ridge alloy), and depleted uranium was referred to as Depletalloy (depleted alloy).
See also
List of uranium mines
Nuclear engineering
Nuclear fuel cycle
Nuclear physics
Nuclear chemistry
References
Design Parameters for a Natural Uranium Fueled Nuclear Reactor, C. M. Hopper et al., ORNL/TM-2002/240, November 2002.
External links
The evolution of CANDU fuel cycles
Uranium
Nuclear fuels
Nuclear materials | Natural uranium | [
"Physics"
] | 502 | [
"Materials",
"Nuclear materials",
"Matter"
] |
2,837,716 | https://en.wikipedia.org/wiki/Fretting | Fretting refers to wear and sometimes corrosion damage of loaded surfaces in contact while they encounter small oscillatory movements tangential to the surface. Fretting is caused by adhesion of contact surface asperities, which are subsequently broken again by the small movement. This breaking causes wear debris to be formed.
If the debris and/or surface subsequently undergo chemical reaction, i.e., mainly oxidation, the mechanism is termed fretting corrosion. Fretting degrades the surface, leading to increased surface roughness and micropits, which reduces the fatigue strength of the components.
The amplitude of the relative sliding motion is often in the order of micrometers to millimeters, but can be as low as 3 nanometers.
Typically fretting is encountered in shrink fits, bearing seats, bolted parts, splines, and dovetail connections.
Materials
Steel
Fretting damage in steel can be identified by the presence of a pitted surface and fine 'red' iron oxide dust resembling cocoa powder. Strictly this debris is not 'rust' as its production requires no water. The particles are much harder than the steel surfaces in contact, so abrasive wear is inevitable; however, particulates are not required to initiate fretting.
Aluminium
Fretting in aluminium produces black debris in the contact area, consisting of fine oxide particles.
Products affected
Fretting examples include wear of drive splines on driveshafts, wheels at the lug bolt interface, and cylinder head gaskets subject to differentials in thermal expansion coefficients.
There is currently a focus on fretting research in the aerospace industry. The dovetail blade-root connection and the spline coupling of gas turbine aero engines experience fretting.
Another example in which fretting corrosion may occur is the pitch bearings of modern wind turbines, which operate under oscillating motion to control the power and loads of the turbine.
Fretting can also occur between reciprocating elements in the human body. Implants in particular, for example hip implants, are often affected by fretting.
Fretting electrical/electronic connectors
Fretting also occurs on virtually all electrical connectors subject to motion (e.g. a printed circuit board connector plugged into a backplane, i.e. SOSA/VPX). Most board-to-board (B2B) electrical connectors are especially vulnerable if there is any relative motion present between the mating connectors. A mechanically rigid connection system is required to hold both halves of a B2B connector motionless (often impossible). Wire-to-board (W2B) connectors tend to be immune to fretting because the wire half of the connector acts as a spring, absorbing relative motion that would otherwise transfer to the contact surfaces of the W2B connector. A few exotic B2B connectors address fretting by: 1) incorporating springs into the individual contacts or 2) using a Chinese finger trap design to greatly increase the contact area. A connector design that contacts all four sides of a square pin instead of just one or two can delay the onset of fretting somewhat. Keeping contacts clean and lubricated also offers some longevity.
Contact fretting can change the impedance of a B2B connector from milliohms to ohms in just minutes when vibration is present. The relatively soft and thin gold plating used on most high quality electrical connectors is quickly worn through exposing the underlying alloy metals and with fretting debris the impedance rapidly increases. Somewhat counterintuitively, high contact forces on the mated connector pair (thought to help lower impedance and increase reliability) can actually make the rate of fretting even worse.
Fretting in rolling element bearings
In rolling element bearings fretting may occur when the bearings are operating in an oscillating motion. Examples of applications are blade bearings in wind turbines, helicopter rotor pitch bearings, and bearings in robots. If the bearing movement is limited to small motions, the damage caused may be called fretting or false brinelling depending on the mechanism encountered. The main difference is that false brinelling occurs under lubricated and fretting under dry contact conditions. A time-dependent relation between false brinelling and fretting corrosion has been proposed.
Fretting fatigue
Fretting decreases fatigue strength of materials operating under cycling stress. This can result in fretting fatigue, whereby fatigue cracks can initiate in the fretting zone. Afterwards, the crack propagates into the material. Lap joints, common on airframe surfaces, are a prime location for fretting corrosion. This is also known as frettage or fretting corrosion.
Factors affecting fretting
Fretting resistance is not an intrinsic property of a material, or even of a material couple. There are several factors affecting fretting behavior of a contact:
Contact load
Sliding amplitude
Number of cycles
Temperature
Relative humidity
Inertness of materials
Corrosion and resulting motion-triggered contact insufficiency
Mitigation
The fundamental way to prevent fretting is to design for no relative motion of the surfaces at the contact. Surface roughness plays an important role as fretting normally occurs by the contact of the asperities of the mating surfaces. Lubricants are often employed to mitigate fretting because they reduce friction and inhibit oxidation. This may however, also cause the opposite effect as a lower coefficient of friction may lead to more movement. Thus, a solution must be carefully considered and tested.
In the aviation industry, coatings are applied to cause a harder surface and/or influence the friction coefficient.
Soft materials often exhibit higher susceptibility to fretting than hard materials of a similar type. The hardness ratio of the two sliding materials also has an effect on fretting wear. However, softer materials such as polymers can show the opposite effect when they capture hard debris which becomes embedded in their bearing surfaces. They then act as a very effective abrasive agent, wearing down the harder metal with which they are in contact.
See also
References
External links
Fretting and Its Insidious Effects, by EPI Inc.
Assessment Of Cold Welding Between Separable Contact Surfaces Due To Impact And Fretting Under Vacuum
Corrosion
Materials degradation
Tribology | Fretting | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,292 | [
"Tribology",
"Metallurgy",
"Materials science",
"Surface science",
"Corrosion",
"Electrochemistry",
"Mechanical engineering",
"Materials degradation"
] |
2,838,129 | https://en.wikipedia.org/wiki/Cauchy%27s%20theorem%20%28group%20theory%29 | In mathematics, specifically group theory, Cauchy's theorem states that if is a finite group and is a prime number dividing the order of (the number of elements in ), then contains an element of order . That is, there is in such that is the smallest positive integer with = , where is the identity element of . It is named after Augustin-Louis Cauchy, who discovered it in 1845.
The theorem is a partial converse to Lagrange's theorem, which states that the order of any subgroup of a finite group divides the order of . In general, not every divisor of arises as the order of a subgroup of . Cauchy's theorem states that for any prime divisor of the order of , there is a subgroup of whose order is —the cyclic group generated by the element in Cauchy's theorem.
Cauchy's theorem is generalized by Sylow's first theorem, which implies that if is the maximal power of dividing the order of , then has a subgroup of order (and using the fact that a -group is solvable, one can show that has subgroups of order for any less than or equal to ).
Statement and proof
Many texts prove the theorem with the use of strong induction and the class equation, though considerably less machinery is required to prove the theorem in the abelian case. One can also invoke group actions for the proof.
Proof 1
We first prove the special case where G is abelian, and then the general case; both proofs are by induction on n = |G|, and have as starting case n = p, which is trivial because any non-identity element now has order p. Suppose first that G is abelian. Take any non-identity element a, and let H be the cyclic group it generates. If p divides |H|, then a^(|H|/p) is an element of order p. If p does not divide |H|, then it divides the order [G:H] of the quotient group G/H, which therefore contains an element of order p by the inductive hypothesis. That element is a class xH for some x in G, and if m is the order of x in G, then x^m = e in G gives (xH)^m = eH in G/H, so p divides m; as before x^(m/p) is now an element of order p in G, completing the proof for the abelian case.
In the general case, let Z be the center of G, which is an abelian subgroup. If p divides |Z|, then Z contains an element of order p by the case of abelian groups, and this element works for G as well. So we may assume that p does not divide the order of Z. Since p does divide |G|, and G is the disjoint union of Z and of the conjugacy classes of non-central elements, there exists a conjugacy class of a non-central element a whose size is not divisible by p. But the class equation shows that this size is [G : CG(a)], so p divides the order of the centralizer CG(a) of a in G, which is a proper subgroup because a is not central. This subgroup contains an element of order p by the inductive hypothesis, and we are done.
Proof 2
This proof uses the fact that for any action of a (cyclic) group of prime order p, the only possible orbit sizes are 1 and p, which is immediate from the orbit stabilizer theorem.
The set that our cyclic group shall act on is the set

    X = { (x1, ..., xp) in G^p : x1 x2 ⋯ xp = e }

of p-tuples of elements of G whose product (in order) gives the identity. Such a p-tuple is uniquely determined by all its components except the last one, as the last element must be the inverse of the product of those preceding elements. One also sees that those p − 1 elements can be chosen freely, so X has |G|^(p−1) elements, which is divisible by p.
Now from the fact that in a group if ab = e then ba = e, it follows that any cyclic permutation of the components of an element of X again gives an element of X. Therefore one can define an action of the cyclic group Cp of order p on X by cyclic permutations of components, in other words in which a chosen generator of Cp sends

    (x1, x2, ..., xp) to (x2, ..., xp, x1).

As remarked, orbits in X under this action either have size 1 or size p. The former happens precisely for those tuples (x, x, ..., x) for which x^p = e. Counting the elements of X by orbits, and dividing by p, one sees that the number of elements x satisfying x^p = e is divisible by p. But x = e is one such element, so there must be at least p − 1 other solutions for x, and these solutions are elements of order p. This completes the proof.
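This counting argument can be checked directly for a small group. The sketch below (using S3 represented as permutation tuples, with p = 2) builds the set X and confirms that the number of solutions of x^p = e is divisible by p:

```python
from functools import reduce
from itertools import permutations, product

def compose(a, b):
    """Compose permutations given as tuples: (a o b)(i) = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(3)))   # the symmetric group S3, of order 6
e = tuple(range(3))                # identity permutation
p = 2                              # a prime dividing |G|

# X = all p-tuples of group elements whose ordered product is the identity.
X = [t for t in product(G, repeat=p) if reduce(compose, t) == e]
assert len(X) == len(G) ** (p - 1)   # |X| = |G|^(p-1), divisible by p

# Size-1 orbits of the cyclic-shift action are the constant tuples (x, ..., x)
# with x^p = e; their count must be divisible by p.
fixed = [t for t in X if all(c == t[0] for c in t)]
print(len(fixed))                    # 4: the identity and the three transpositions
assert len(fixed) % p == 0
```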
Applications
Cauchy's theorem implies a rough classification of all elementary abelian groups (groups whose non-identity elements all have equal, finite order). If G is such a group, and its non-identity elements have order n, then n must be prime, since otherwise Cauchy's theorem applied to the (finite) subgroup generated by one such element produces an element of prime order less than n. Moreover, every finite subgroup of G has order a power of n (including G itself, if it is finite). This argument applies equally to p-groups, where every element's order is a power of p (but not necessarily every order is the same).
One may use the abelian case of Cauchy's Theorem in an inductive proof of the first of Sylow's theorems, similar to the first proof above, although there are also proofs that avoid doing this special case separately.
Notes
References
External links
Articles containing proofs
Augustin-Louis Cauchy
Theorems about finite groups | Cauchy's theorem (group theory) | [
"Mathematics"
] | 1,102 | [
"Articles containing proofs"
] |
2,838,343 | https://en.wikipedia.org/wiki/Calculus%20on%20Manifolds%20%28book%29 | Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (1965) by Michael Spivak is a brief, rigorous, and modern textbook of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates.
Description
Calculus on Manifolds is a brief monograph on the theory of vector-valued functions of several real variables (f : Rn→Rm) and differentiable manifolds in Euclidean space. In addition to extending the concepts of differentiation (including the inverse and implicit function theorems) and Riemann integration (including Fubini's theorem) to functions of several variables, the book treats the classical theorems of vector calculus, including those of Cauchy–Green, Ostrogradsky–Gauss (divergence theorem), and Kelvin–Stokes, in the language of differential forms on differentiable manifolds embedded in Euclidean space, and as corollaries of the generalized Stokes theorem on manifolds-with-boundary. The book culminates with the statement and proof of this vast and abstract modern generalization of several classical results:

Stokes' Theorem for Manifolds-With-Boundary. If M is a compact oriented n-dimensional manifold-with-boundary, ∂M is its boundary with the induced orientation, and ω is an (n − 1)-form on M, then ∫M dω = ∫∂M ω.
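As a small concrete instance (a sketch, not an example from the book): for n = 2 on the unit disk the theorem reduces to Green's theorem, which can be verified symbolically for the 1-form ω = −y dx + x dy, where both sides equal 2π.

```python
import sympy as sp

t, r, th = sp.symbols('t r theta')

# Boundary side: integrate w = -y dx + x dy over the unit circle
# parameterized by (x, y) = (cos t, sin t).
x, y = sp.cos(t), sp.sin(t)
boundary = sp.integrate(-y * sp.diff(x, t) + x * sp.diff(y, t), (t, 0, 2 * sp.pi))

# Interior side: dw = 2 dx^dy, integrated over the unit disk in polar coordinates.
interior = sp.integrate(2 * r, (r, 0, 1), (th, 0, 2 * sp.pi))

assert sp.simplify(boundary - interior) == 0
print(boundary, interior)   # both equal 2*pi
```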
The cover of Calculus on Manifolds features snippets of a July 2, 1850 letter from Lord Kelvin to Sir George Stokes containing the first disclosure of the classical Stokes' theorem (i.e., the Kelvin–Stokes theorem).
Reception
Calculus on Manifolds aims to present the topics of multivariable and vector calculus in the manner in which they are seen by a modern working mathematician, yet simply and selectively enough to be understood by undergraduate students whose previous coursework in mathematics comprises only one-variable calculus and introductory linear algebra. While Spivak's elementary treatment of modern mathematical tools is broadly successful—and this approach has made Calculus on Manifolds a standard introduction to the rigorous theory of multivariable calculus—the text is also well known for its laconic style, lack of motivating examples, and frequent omission of non-obvious steps and arguments. For example, in order to state and prove the generalized Stokes' theorem on chains, a profusion of unfamiliar concepts and constructions (e.g., tensor products, differential forms, tangent spaces, pullbacks, exterior derivatives, cubes and chains) are introduced in quick succession within the span of 25 pages. Moreover, careful readers have noted a number of nontrivial oversights throughout the text, including missing hypotheses in theorems, inaccurately stated theorems, and proofs that fail to handle all cases.
Other textbooks
A more recent textbook which also covers these topics at an undergraduate level is the text Analysis on Manifolds by James Munkres (366 pp.). At more than twice the length of Calculus on Manifolds, Munkres's work presents a more careful and detailed treatment of the subject matter at a leisurely pace. Nevertheless, Munkres acknowledges the influence of Spivak's earlier text in the preface of Analysis on Manifolds.
Spivak's five-volume textbook A Comprehensive Introduction to Differential Geometry states in its preface that Calculus on Manifolds serves as a prerequisite for a course based on this text. In fact, several of the concepts introduced in Calculus on Manifolds reappear in the first volume of this classic work in more sophisticated settings.
See also
Differentiable manifolds
Multilinear form
Footnotes
Notes
Citations
References
[An elementary approach to differential forms with an emphasis on concrete examples and computations]
[A general treatment of differential forms, differentiable manifolds, and selected applications to mathematical physics for advanced undergraduates]
[An undergraduate treatment of multivariable and vector calculus with coverage similar to Calculus on Manifolds, with mathematical ideas and proofs presented in greater detail]
[A unified treatment of linear and multilinear algebra, multivariable calculus, differential forms, and introductory algebraic topology for advanced undergraduates]
[An unorthodox though rigorous approach to differential forms that avoids many of the usual algebraic constructions]
[A brief, rigorous, and modern treatment of multivariable calculus, differential forms, and integration on manifolds for advanced undergraduates]
[A thorough account of differentiable manifolds at the graduate level; contains a more sophisticated reframing and extensions of Chapters 4 and 5 of Calculus on Manifolds]
[A standard treatment of the theory of smooth manifolds at the 1st year graduate level]
Mathematical analysis
Mathematics textbooks
Vector calculus
1965 non-fiction books | Calculus on Manifolds (book) | [
"Mathematics"
] | 896 | [
"Mathematical analysis"
] |
2,838,569 | https://en.wikipedia.org/wiki/Gartner%20hype%20cycle | The Gartner hype cycle is a graphical presentation developed, used and branded by the American research, advisory and information technology firm Gartner to represent the maturity, adoption, and social application of specific technologies. The hype cycle claims to provide a graphical and conceptual presentation of the maturity of emerging technologies through five phases.
Five phases
Each hype cycle drills down into the five key phases of a technology's life cycle.
1. Technology trigger
A potential technology breakthrough kicks things off. Early proof-of-concept stories and media interest trigger significant publicity. Often no usable products exist and commercial viability is unproven.
2. Peak of inflated expectations
Early publicity produces a number of success stories—often accompanied by scores of failures. Some companies take action; most do not.
3. Trough of disillusionment
Interest wanes as experiments and implementations fail to deliver. Producers of the technology shake out or fail. Investment continues only if the surviving providers improve their products to the satisfaction of early adopters.
4. Slope of enlightenment
More instances of the technology's benefits start to crystallize and become more widely understood. Second- and third-generation products appear from technology providers. More enterprises fund pilots; conservative companies remain cautious.
5. Plateau of productivity
Mainstream adoption starts to take off. Criteria for assessing provider viability are more clearly defined. The technology's broad market applicability and relevance are clearly paying off. If the technology has more than a niche market, then it will continue to grow.
The term "hype cycle" and each of the associated phases are now used more broadly in the marketing of new technologies.
Hype in new media
Hype (in the more general media sense of the term "hype") has played a large part in the adoption of new media. Analyses of the Internet in the 1990s featured large amounts of hype, and that created "debunking" responses. A longer-term historical perspective on such cycles can be found in the research of the economist Carlota Perez. Desmond Roger Laurence, in the field of clinical pharmacology, described a similar process in drug development in the seventies.
Criticisms
There have been numerous criticisms of the hype cycle, prominent among which are that it is not a cycle, that the outcome does not depend on the nature of the technology itself, that it is not scientific in nature, and that it does not reflect changes over time in the speed at which technology develops. Another is that it is limited in its application, as it prioritizes economic considerations in decision-making processes. It seems to assume that a business' performance is tied to the hype cycle, whereas this may actually have more to do with the way a company devises its branding strategy. A related criticism is that the "cycle" has no real benefits to the development or marketing of new technologies and merely comments on pre-existing trends. Specific disadvantages when compared to, for example, technology readiness level are:
The cycle is not scientific in nature, and there is no data or analysis that would justify the cycle.
With the (subjective) terms disillusionment, enlightenment and expectations it cannot be described objectively or clearly where technology now really is.
The terms are misleading in the sense that one gets the wrong idea what they can use a technology for. The user does not want to be disappointed, so should they stay away from technology in the Trough of Disillusionment?
No action perspective is offered to move technology to a next phase.
This appears to be a very simplified impulse response of an elastic system representable by a differential equation. Perhaps more telling would be to formulate a system model with solutions conforming to observable behavior.
An analysis of Gartner Hype Cycles since 2000 shows that few technologies actually travel through an identifiable hype cycle, and that in practice most of the important technologies adopted since 2000 were not identified early in their adoption cycles.
The Economist also examined the hype cycle in 2024.
See also
AI winter, in referring to periods of disillusionment with artificial intelligence
Product lifecycle
Kondratiev wave
Roy Amara
Transient response
Dunning–Kruger effect
References
Further reading
External links
Hype Cycle Research Methodology, the official materials
Diffusion
Innovation economics
Innovation
Product development
Product lifecycle management
Science and technology studies
Sociology of culture
Technological change
Technology in society
Technology assessment | Gartner hype cycle | [
"Physics",
"Chemistry",
"Technology"
] | 881 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Technology assessment",
"Science and technology studies",
"nan"
] |
2,839,255 | https://en.wikipedia.org/wiki/MTOR | The mammalian target of rapamycin (mTOR), also referred to as the mechanistic target of rapamycin, and sometimes called FK506-binding protein 12-rapamycin-associated protein 1 (FRAP1), is a kinase that in humans is encoded by the MTOR gene. mTOR is a member of the phosphatidylinositol 3-kinase-related kinase family of protein kinases.
mTOR links with other proteins and serves as a core component of two distinct protein complexes, mTOR complex 1 and mTOR complex 2, which regulate different cellular processes. In particular, as a core component of both complexes, mTOR functions as a serine/threonine protein kinase that regulates cell growth, cell proliferation, cell motility, cell survival, protein synthesis, autophagy, and transcription. As a core component of mTORC2, mTOR also functions as a tyrosine protein kinase that promotes the activation of insulin receptors and insulin-like growth factor 1 receptors. mTORC2 has also been implicated in the control and maintenance of the actin cytoskeleton.
Discovery
Rapa Nui (Easter Island - Chile)
The study of TOR (Target Of Rapamycin) originated in the 1960s with an expedition to Easter Island (known by the island inhabitants as Rapa Nui), with the goal of identifying natural products from plants and soil with possible therapeutic potential. In 1972, Suren Sehgal identified a small molecule, from the soil bacterium Streptomyces hygroscopicus, that he purified and initially reported to possess potent antifungal activity. He named it rapamycin, noting its original source and activity. Early testing revealed that rapamycin also had potent immunosuppressive and cytostatic anti-cancer activity. Rapamycin did not initially receive significant interest from the pharmaceutical industry until the 1980s, when Wyeth-Ayerst supported Sehgal's efforts to further investigate rapamycin's effect on the immune system. This eventually led to its FDA approval as an immunosuppressant following kidney transplantation. However, prior to its FDA approval, how rapamycin worked remained completely unknown.
Subsequent history
The discovery of TOR and mTOR stemmed from independent studies of the natural product rapamycin by Joseph Heitman, Rao Movva, and Michael N. Hall in 1991; by David M. Sabatini, Hediye Erdjument-Bromage, Mary Lui, Paul Tempst, and Solomon H. Snyder in 1994; and by Candace J. Sabers, Mary M. Martin, Gregory J. Brunn, Josie M. Williams, Francis J. Dumont, Gregory Wiederrecht, and Robert T. Abraham in 1995. In 1991, working in yeast, Hall and colleagues identified the TOR1 and TOR2 genes. In 1993, Robert Cafferkey, George Livi, and colleagues, and Jeannette Kunz, Michael N. Hall, and colleagues independently cloned genes that mediate the toxicity of rapamycin in fungi, known as the TOR/DRR genes.
Rapamycin arrests fungal activity at the G1 phase of the cell cycle. In mammals, it suppresses the immune system by blocking the G1 to S phase transition in T-lymphocytes. Thus, it is used as an immunosuppressant following organ transplantation. Interest in rapamycin was renewed following the discovery of the structurally related immunosuppressive natural product FK506 (later called Tacrolimus) in 1987. In 1989–90, FK506 and rapamycin were determined to inhibit T-cell receptor (TCR) and IL-2 receptor signaling pathways, respectively. The two natural products were used to discover the FK506- and rapamycin-binding proteins, including FKBP12, and to provide evidence that FKBP12–FK506 and FKBP12–rapamycin might act through gain-of-function mechanisms that target distinct cellular functions. These investigations included key studies by Francis Dumont and Nolan Sigal at Merck that helped show that FK506 and rapamycin behave as reciprocal antagonists. These studies implicated FKBP12 as a possible target of rapamycin, but suggested that the complex might interact with another element of the mechanistic cascade.
In 1991, calcineurin was identified as the target of FKBP12-FK506. That of FKBP12-rapamycin remained mysterious until genetic and molecular studies in yeast established FKBP12 as the target of rapamycin, and implicated TOR1 and TOR2 as the targets of FKBP12-rapamycin in 1991 and 1993, followed by studies in 1994 when several groups, working independently, discovered the mTOR kinase as its direct target in mammalian tissues. Sequence analysis of mTOR revealed it to be the direct ortholog of proteins encoded by the yeast target of rapamycin 1 and 2 (TOR1 and TOR2) genes, which Joseph Heitman, Rao Movva, and Michael N. Hall had identified in August 1991 and May 1993. Independently, George Livi and colleagues later reported the same genes, which they called dominant rapamycin resistance 1 and 2 (DRR1 and DRR2), in studies published in October 1993.
The protein, now called mTOR, was originally named FRAP by Stuart L. Schreiber and RAFT1 by David M. Sabatini; FRAP1 was used as its official gene symbol in humans. Because of these different names, mTOR, which had been first used by Robert T. Abraham, was increasingly adopted by the community of scientists working on the mTOR pathway to refer to the protein and in homage to the original discovery of the TOR protein in yeast that was named TOR, the Target of Rapamycin, by Joe Heitman, Rao Movva, and Mike Hall. TOR was originally discovered at the Biozentrum and Sandoz Pharmaceuticals in 1991 in Basel, Switzerland, and the name TOR pays further homage to this discovery, as TOR means doorway or gate in German, and the city of Basel was once ringed by a wall punctuated with gates into the city, including the iconic Spalentor. "mTOR" initially meant "mammalian target of rapamycin", but the meaning of the "m" was later changed to "mechanistic". Similarly, with subsequent discoveries the zebra fish TOR was named zTOR, the Arabidopsis thaliana TOR was named AtTOR, and the Drosophila TOR was named dTOR. In 2009 the FRAP1 gene name was officially changed by the HUGO Gene Nomenclature Committee (HGNC) to mTOR, which stands for mechanistic target of rapamycin.
The discovery of TOR and the subsequent identification of mTOR opened the door to the molecular and physiological study of what is now called the mTOR pathway and had a catalytic effect on the growth of the field of chemical biology, where small molecules are used as probes of biology.
Function
mTOR integrates the input from upstream pathways, including insulin, growth factors (such as IGF-1 and IGF-2), and amino acids. mTOR also senses cellular nutrient, oxygen, and energy levels. The mTOR pathway is a central regulator of mammalian metabolism and physiology, with important roles in the function of tissues including liver, muscle, white and brown adipose tissue, and the brain, and is dysregulated in human diseases, such as diabetes, obesity, depression, and certain cancers. Rapamycin inhibits mTOR by associating with its intracellular receptor FKBP12. The FKBP12–rapamycin complex binds directly to the FKBP12-Rapamycin Binding (FRB) domain of mTOR, inhibiting its activity.
In plants
Plants express the mechanistic target of rapamycin (mTOR) and have a TOR kinase complex. In plants, only the TORC1 complex is present, unlike in mammals, where a TORC2 complex is also formed. Plant TOR proteins share a similar amino acid sequence with mammalian mTOR in the protein kinase and FKBP-rapamycin binding (FRB) domains.
Role of mTOR in plants
The TOR kinase complex has been known to play a role in the metabolism of plants. The TORC1 complex turns on when plants are living in environmental conditions suitable for survival. Once activated, plant cells undergo particular anabolic reactions. These include plant development, translation of mRNA and the growth of cells within the plant. However, TORC1 complex activation stops catabolic processes such as autophagy from occurring. TOR kinase signaling in plants has been found to aid in senescence, flowering, root and leaf growth, embryogenesis, and the meristem activation above the root cap of a plant. mTOR is also found to be highly involved in developing embryo tissue in plants.
Complexes
mTOR is the catalytic subunit of two structurally distinct complexes: mTORC1 and mTORC2. The two complexes localize to different subcellular compartments, thus affecting their activation and function. Upon activation by Rheb, mTORC1 localizes to the Ragulator-Rag complex on the lysosome surface where it then becomes active in the presence of sufficient amino acids.
mTORC1
mTOR Complex 1 (mTORC1) is composed of mTOR, regulatory-associated protein of mTOR (Raptor), mammalian lethal with SEC13 protein 8 (mLST8) and the non-core components PRAS40 and DEPTOR. This complex functions as a nutrient/energy/redox sensor and controls protein synthesis. The activity of mTORC1 is regulated by rapamycin, insulin, growth factors, phosphatidic acid, certain amino acids and their derivatives (e.g., L-leucine and β-hydroxy β-methylbutyric acid), mechanical stimuli, and oxidative stress.
mTORC2
mTOR Complex 2 (mTORC2) is composed of MTOR, rapamycin-insensitive companion of MTOR (RICTOR), MLST8, and mammalian stress-activated protein kinase interacting protein 1 (mSIN1). mTORC2 has been shown to function as an important regulator of the actin cytoskeleton through its stimulation of F-actin stress fibers, paxillin, RhoA, Rac1, Cdc42, and protein kinase C α (PKCα). mTORC2 also phosphorylates the serine/threonine protein kinase Akt/PKB on serine residue Ser473, thus affecting metabolism and survival. Phosphorylation of Akt's serine residue Ser473 by mTORC2 stimulates Akt phosphorylation on threonine residue Thr308 by PDK1 and leads to full Akt activation. In addition, mTORC2 exhibits tyrosine protein kinase activity and phosphorylates the insulin-like growth factor 1 receptor (IGF-1R) and insulin receptor (InsR) on the tyrosine residues Tyr1131/1136 and Tyr1146/1151, respectively, leading to full activation of IGF-IR and InsR.
Inhibition by rapamycin
Rapamycin (Sirolimus) inhibits mTORC1, resulting in the suppression of cellular senescence. This appears to provide most of the beneficial effects of the drug (including life-span extension in animal studies). Suppression of insulin resistance by sirtuins accounts for at least some of this effect. Impaired sirtuin 3 leads to mitochondrial dysfunction.
Rapamycin has a more complex effect on mTORC2, inhibiting it only in certain cell types under prolonged exposure. Disruption of mTORC2 produces the diabetic-like symptoms of decreased glucose tolerance and insensitivity to insulin.
Gene deletion experiments
The mTORC2 signaling pathway is less defined than the mTORC1 signaling pathway. The functions of the components of the mTORC complexes have been studied using knockdowns and knockouts and were found to produce the following phenotypes:
NIP7: Knockdown reduced mTORC2 activity, as indicated by decreased phosphorylation of mTORC2 substrates.
RICTOR: Overexpression leads to metastasis and knockdown inhibits growth factor-induced PKC-phosphorylation. Constitutive deletion of Rictor in mice leads to embryonic lethality, while tissue specific deletion leads to a variety of phenotypes; a common phenotype of Rictor deletion in liver, white adipose tissue, and pancreatic beta cells is systemic glucose intolerance and insulin resistance in one or more tissues. Decreased Rictor expression in mice decreases male, but not female, lifespan.
mTOR: Inhibition of mTORC1 and mTORC2 by PP242 [2-(4-Amino-1-isopropyl-1H-pyrazolo[3,4-d]pyrimidin-3-yl)-1H-indol-5-ol] leads to autophagy or apoptosis; inhibition of mTORC2 alone by PP242 prevents phosphorylation of Ser-473 site on AKT and arrests the cells in G1 phase of the cell cycle. Genetic reduction of mTOR expression in mice significantly increases lifespan.
PDK1: Knockout is lethal; hypomorphic allele results in smaller organ volume and organism size but normal AKT activation.
AKT: Knockout mice experience spontaneous apoptosis (AKT1), severe diabetes (AKT2), small brains (AKT3), and growth deficiency (AKT1/AKT2). Mice heterozygous for AKT1 have increased lifespan.
TOR1, the S. cerevisiae orthologue of mTORC1, is a regulator of both carbon and nitrogen metabolism; TOR1 KO strains regulate response to nitrogen as well as carbon availability, indicating that it is a key nutritional transducer in yeast.
Clinical significance
Aging
Decreased TOR activity has been found to increase life span in S. cerevisiae, C. elegans, and D. melanogaster. The mTOR inhibitor rapamycin has been confirmed to increase lifespan in mice.
It is hypothesized that some dietary regimes, like caloric restriction and methionine restriction, cause lifespan extension by decreasing mTOR activity. Some studies have suggested that mTOR signaling may increase during aging, at least in specific tissues like adipose tissue, and rapamycin may act in part by blocking this increase. An alternative theory is mTOR signaling is an example of antagonistic pleiotropy, and while high mTOR signaling is good during early life, it is maintained at an inappropriately high level in old age. Calorie restriction and methionine restriction may act in part by limiting levels of essential amino acids including leucine and methionine, which are potent activators of mTOR. The administration of leucine into the rat brain has been shown to decrease food intake and body weight via activation of the mTOR pathway in the hypothalamus.
According to the free radical theory of aging, reactive oxygen species cause damage to mitochondrial proteins and decrease ATP production. Subsequently, via ATP sensitive AMPK, the mTOR pathway is inhibited and ATP-consuming protein synthesis is downregulated, since mTORC1 initiates a phosphorylation cascade activating the ribosome. Hence, the proportion of damaged proteins is enhanced. Moreover, disruption of mTORC1 directly inhibits mitochondrial respiration. These positive feedbacks on the aging process are counteracted by protective mechanisms: Decreased mTOR activity (among other factors) upregulates removal of dysfunctional cellular components via autophagy.
mTOR is a key initiator of the senescence-associated secretory phenotype (SASP). Interleukin 1 alpha (IL1A) is found on the surface of senescent cells where it contributes to the production of SASP factors due to a positive feedback loop with NF-κB. Translation of mRNA for IL1A is highly dependent upon mTOR activity. mTOR activity increases levels of IL1A, mediated by MAPKAPK2. mTOR inhibition of ZFP36L1 prevents this protein from degrading transcripts of numerous components of SASP factors.
Cancer
Over-activation of mTOR signaling significantly contributes to the initiation and development of tumors and mTOR activity was found to be deregulated in many types of cancer including breast, prostate, lung, melanoma, bladder, brain, and renal carcinomas. Reasons for constitutive activation are several. Among the most common are mutations in tumor suppressor PTEN gene. PTEN phosphatase negatively affects mTOR signalling through interfering with the effect of PI3K, an upstream effector of mTOR. Additionally, mTOR activity is deregulated in many cancers as a result of increased activity of PI3K or Akt. Similarly, overexpression of downstream mTOR effectors 4E-BP1, S6K1, S6K2 and eIF4E leads to poor cancer prognosis. Also, mutations in TSC proteins that inhibit the activity of mTOR may lead to a condition named tuberous sclerosis complex, which exhibits as benign lesions and increases the risk of renal cell carcinoma.
Increasing mTOR activity was shown to drive cell cycle progression and increase cell proliferation mainly due to its effect on protein synthesis. Moreover, active mTOR supports tumor growth also indirectly by inhibiting autophagy. Constitutively activated mTOR functions in supplying carcinoma cells with oxygen and nutrients by increasing the translation of HIF1A and supporting angiogenesis. mTOR also aids in another metabolic adaptation of cancerous cells to support their increased growth rate—activation of glycolytic metabolism. Akt2, a substrate of mTOR, specifically of mTORC2, upregulates expression of the glycolytic enzyme PKM2 thus contributing to the Warburg effect.
Central nervous system disorders / Brain function
Autism
mTOR is implicated in the failure of a 'pruning' mechanism of the excitatory synapses in autism spectrum disorders.
Alzheimer's disease
mTOR signaling intersects with Alzheimer's disease (AD) pathology in several aspects, suggesting its potential role as a contributor to disease progression. In general, findings demonstrate mTOR signaling hyperactivity in AD brains. For example, postmortem studies of human AD brain reveal dysregulation in PTEN, Akt, S6K, and mTOR. mTOR signaling appears to be closely related to the presence of soluble amyloid beta (Aβ) and tau proteins, which aggregate and form two hallmarks of the disease, Aβ plaques and neurofibrillary tangles, respectively. In vitro studies have shown Aβ to be an activator of the PI3K/AKT pathway, which in turn activates mTOR. In addition, applying Aβ to N2K cells increases the expression of p70S6K, a downstream target of mTOR known to have higher expression in neurons that eventually develop neurofibrillary tangles. Chinese hamster ovary cells transfected with the 7PA2 familial AD mutation also exhibit increased mTOR activity compared to controls, and the hyperactivity is blocked using a gamma-secretase inhibitor. These in vitro studies suggest that increasing Aβ concentrations increases mTOR signaling; however, significantly large, cytotoxic Aβ concentrations are thought to decrease mTOR signaling.
Consistent with data observed in vitro, mTOR activity and activated p70S6K have been shown to be significantly increased in the cortex and hippocampus of animal models of AD compared to controls. Pharmacologic or genetic removal of the Aβ in animal models of AD eliminates the disruption in normal mTOR activity, pointing to the direct involvement of Aβ in mTOR signaling. In addition, by injecting Aβ oligomers into the hippocampi of normal mice, mTOR hyperactivity is observed. Cognitive impairments characteristic of AD appear to be mediated by the phosphorylation of PRAS-40, which detaches from and allows for the mTOR hyperactivity when it is phosphorylated; inhibiting PRAS-40 phosphorylation prevents Aβ-induced mTOR hyperactivity. Given these findings, the mTOR signaling pathway appears to be one mechanism of Aβ-induced toxicity in AD.
The hyperphosphorylation of tau proteins into neurofibrillary tangles is one hallmark of AD. p70S6K activation has been shown to promote tangle formation as well as mTOR hyperactivity through increased phosphorylation and reduced dephosphorylation. It has also been proposed that mTOR contributes to tau pathology by increasing the translation of tau and other proteins.
Synaptic plasticity is a key contributor to learning and memory, two processes that are severely impaired in AD patients. Translational control, or the maintenance of protein homeostasis, has been shown to be essential for neural plasticity and is regulated by mTOR. Both protein over- and under-production via mTOR activity seem to contribute to impaired learning and memory. Furthermore, given that deficits resulting from mTOR overactivity can be alleviated through treatment with rapamycin, it is possible that mTOR plays an important role in affecting cognitive functioning through synaptic plasticity. Further evidence for mTOR activity in neurodegeneration comes from recent findings demonstrating that eIF2α-P, an upstream target of the mTOR pathway, mediates cell death in prion diseases through sustained translational inhibition.
Some evidence points to mTOR's role in reduced Aβ clearance as well. mTOR is a negative regulator of autophagy; therefore, hyperactivity in mTOR signaling should reduce Aβ clearance in the AD brain. Disruptions in autophagy may be a potential source of pathogenesis in protein misfolding diseases, including AD. Studies using mouse models of Huntington's disease demonstrate that treatment with rapamycin facilitates the clearance of huntingtin aggregates. Perhaps the same treatment may be useful in clearing Aβ deposits as well.
Lymphoproliferative diseases
Hyperactive mTOR pathways have been identified in certain lymphoproliferative diseases such as autoimmune lymphoproliferative syndrome (ALPS), multicentric Castleman disease, and post-transplant lymphoproliferative disorder (PTLD).
Protein synthesis and cell growth
mTORC1 activation is required for myofibrillar muscle protein synthesis and skeletal muscle hypertrophy in humans in response to both physical exercise and ingestion of certain amino acids or amino acid derivatives. Persistent inactivation of mTORC1 signaling in skeletal muscle facilitates the loss of muscle mass and strength during muscle wasting in old age, cancer cachexia, and muscle atrophy from physical inactivity. mTORC2 activation appears to mediate neurite outgrowth in differentiated mouse neuro2a cells. Intermittent mTOR activation in prefrontal neurons by β-hydroxy β-methylbutyrate inhibits age-related cognitive decline associated with dendritic pruning in animals, which is a phenomenon also observed in humans.
Lysosomal damage inhibits mTOR and induces autophagy
Active mTORC1 is positioned on lysosomes. mTOR is inhibited when the lysosomal membrane is damaged by various exogenous or endogenous agents, such as invading bacteria, membrane-permeant chemicals yielding osmotically active products (this type of injury can be modeled using membrane-permeant dipeptide precursors that polymerize in lysosomes), amyloid protein aggregates (see above section on Alzheimer's disease) and cytoplasmic organic or inorganic inclusions including urate crystals and crystalline silica. The process of mTOR inactivation following lysosomal/endomembrane damage is mediated by the protein complex termed GALTOR. At the heart of GALTOR is galectin-8, a member of the β-galactoside binding superfamily of cytosolic lectins termed galectins, which recognizes lysosomal membrane damage by binding to the exposed glycans on the lumenal side of the delimiting endomembrane. Following membrane damage, galectin-8, which normally associates with mTOR under homeostatic conditions, no longer interacts with mTOR but now instead binds to SLC38A9, RRAGA/RRAGB, and LAMTOR1, inhibiting Ragulator's (LAMTOR1-5 complex) guanine nucleotide exchange function.
TOR is a negative regulator of autophagy in general, best studied during response to starvation, which is a metabolic response. During lysosomal damage however, mTOR inhibition activates autophagy response in its quality control function, leading to the process termed lysophagy that removes damaged lysosomes. At this stage another galectin, galectin-3, interacts with TRIM16 to guide selective autophagy of damaged lysosomes. TRIM16 gathers ULK1 and principal components (Beclin 1 and ATG16L1) of other complexes (Beclin 1-VPS34-ATG14 and ATG16L1-ATG5-ATG12) initiating autophagy, many of them being under negative control of mTOR directly such as the ULK1-ATG13 complex, or indirectly, such as components of the class III PI3K (Beclin 1, ATG14 and VPS34) since they depend on activating phosphorylations by ULK1 when it is not inhibited by mTOR. These autophagy-driving components physically and functionally link up with each other integrating all processes necessary for autophagosomal formation: (i) the ULK1-ATG13-FIP200/RB1CC1 complex associates with the LC3B/GABARAP conjugation machinery through direct interactions between FIP200/RB1CC1 and ATG16L1, (ii) ULK1-ATG13-FIP200/RB1CC1 complex associates with the Beclin 1-VPS34-ATG14 via direct interactions between ATG13's HORMA domain and ATG14, (iii) ATG16L1 interacts with WIPI2, which binds to PI3P, the enzymatic product of the class III PI3K Beclin 1-VPS34-ATG14. Thus, mTOR inactivation, initiated through GALTOR upon lysosomal damage, plus a simultaneous activation via galectin-9 (which also recognizes lysosomal membrane breach) of AMPK that directly phosphorylates and activates key components (ULK1, Beclin 1) of the autophagy systems listed above and further inactivates mTORC1, allows for strong autophagy induction and autophagic removal of damaged lysosomes.
Additionally, several types of ubiquitination events parallel and complement the galectin-driven processes: Ubiquitination of TRIM16-ULK1-Beclin-1 stabilizes these complexes to promote autophagy activation as described above. ATG16L1 has an intrinsic binding affinity for ubiquitin, whereas ubiquitination by a glycoprotein-specific FBXO27-endowed ubiquitin ligase of several damage-exposed glycosylated lysosomal membrane proteins such as LAMP1, LAMP2, GNS/N-acetylglucosamine-6-sulfatase, TSPAN6/tetraspanin-6, PSAP/prosaposin, and TMEM192/transmembrane protein 192 may contribute to the execution of lysophagy via autophagic receptors such as p62/SQSTM1, which is recruited during lysophagy, or other functions yet to be determined.
Scleroderma
Scleroderma, also known as systemic sclerosis, is a chronic systemic autoimmune disease characterised by hardening (sclero) of the skin (derma) that affects internal organs in its more severe forms. mTOR plays a role in fibrotic diseases and autoimmunity, and blockade of the mTORC pathway is under investigation as a treatment for scleroderma.
Smith-Kingsmore syndrome
A rare gain-of-function mutation causes Smith-Kingsmore syndrome.
mTOR inhibitors as therapies
Transplantation
mTOR inhibitors, e.g. rapamycin, are already used to prevent transplant rejection.
Glycogen storage disease
Some articles reported that rapamycin can inhibit mTORC1 so that the phosphorylation of GS (glycogen synthase) can be increased in skeletal muscle. This discovery represents a potential novel therapeutic approach for glycogen storage diseases that involve glycogen accumulation in muscle.
Anti-cancer
There are two primary mTOR inhibitors used in the treatment of human cancers, temsirolimus and everolimus. mTOR inhibitors have found use in the treatment of a variety of malignancies, including renal cell carcinoma (temsirolimus) and pancreatic cancer, breast cancer, and renal cell carcinoma (everolimus). The complete mechanism of these agents is not clear, but they are thought to function by impairing tumour angiogenesis and causing impairment of the G1/S transition.
Anti-aging
mTOR inhibitors may be useful for treating/preventing several age-associated conditions, including neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease. In a short-term trial of the mTOR inhibitors dactolisib and everolimus in the elderly (65 and older), treated subjects had a reduced number of infections over the course of a year.
Various natural compounds, including epigallocatechin gallate (EGCG), caffeine, curcumin, berberine, quercetin, resveratrol and pterostilbene, have been reported to inhibit mTOR when applied to isolated cells in culture. As yet no high quality evidence exists that these substances inhibit mTOR signaling or extend lifespan when taken as dietary supplements by humans, despite encouraging results in animals such as fruit flies and mice. Various trials are ongoing.
Interactions
Mechanistic target of rapamycin has been shown to interact with:
ABL1,
AKT1,
IGF-IR,
InsR,
CLIP1,
EIF3F
EIF4EBP1,
FKBP1A,
GPHN,
KIAA1303,
PRKCD,
RHEB,
RICTOR,
RPS6KB1,
STAT1,
STAT3,
Two-pore channels: TPCN1; TPCN2, and
UBQLN1.
References
Further reading
External links
EC 2.7.11
Signal transduction
Tor signaling pathway
Human proteins
Aging-related proteins | MTOR | [
"Chemistry",
"Biology"
] | 6,623 | [
"Signal transduction",
"Tor signaling pathway",
"Senescence",
"Biochemistry",
"Neurochemistry",
"Aging-related proteins"
] |
2,839,396 | https://en.wikipedia.org/wiki/Cassie%27s%20law | Cassie's law, or the Cassie equation, describes the effective contact angle θc for a liquid on a chemically heterogeneous surface, i.e. the surface of a composite material consisting of different chemistries, that is, non-uniform throughout. Contact angles are important as they quantify a surface's wettability, the nature of solid-fluid intermolecular interactions. Cassie's law is reserved for when a liquid completely covers both smooth and rough heterogeneous surfaces.
More of a rule than a law, the formula found in the literature for two materials is

$\cos\theta_c = f_1\cos\theta_1 + f_2\cos\theta_2,$

where $\theta_1$ and $\theta_2$ are the contact angles for component 1 with fractional surface area $f_1$, and component 2 with fractional surface area $f_2$ in the composite material, respectively. If there exist more than two materials, then the equation is scaled to the general form

$\cos\theta_c = \sum_k f_k \cos\theta_k$, with $\sum_k f_k = 1$.
Cassie-Baxter
Cassie's law takes on special meaning when the heterogeneous surface is a porous medium. $f_1$ now represents the solid surface area and $f_2$ the air gaps, such that the surface is no longer completely wet. Air creates a contact angle of $\theta_2 = 180^\circ$ and, because $\cos 180^\circ = -1$, the equation reduces to:

$\cos\theta_{CB} = f_1(\cos\theta_1 + 1) - 1,$

which is the Cassie–Baxter equation.
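As an illustration of the two formulas above, here is a minimal Python sketch; the fractional areas and component angles used in the example are invented inputs, not values from the literature:

```python
import math

def cassie_angle(fractions, angles_deg):
    """Effective contact angle from Cassie's law:
    cos(theta_c) = sum_k f_k cos(theta_k), with the f_k summing to 1."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("fractional areas must sum to 1")
    cos_c = sum(f * math.cos(math.radians(t))
                for f, t in zip(fractions, angles_deg))
    return math.degrees(math.acos(cos_c))

def cassie_baxter_angle(f_solid, angle_deg):
    """Cassie-Baxter reduction: the second component is air (180 degrees),
    so cos(theta_CB) = f1 * (cos(theta_1) + 1) - 1."""
    cos_cb = f_solid * (math.cos(math.radians(angle_deg)) + 1.0) - 1.0
    return math.degrees(math.acos(cos_cb))

# A smooth material wetting at 110 degrees, but with only 10% of the
# droplet footprint touching solid, appears superhydrophobic:
print(cassie_angle([0.1, 0.9], [110.0, 180.0]))  # ~159 degrees
print(cassie_baxter_angle(0.1, 110.0))           # same result
```

The two calls agree because treating air as a second "component" with a 180° contact angle is exactly the reduction derived above.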
Unfortunately, the terms Cassie and Cassie–Baxter are often used interchangeably, but they should not be confused. The Cassie–Baxter equation is more common in nature and focuses on the incomplete coating of surfaces by a liquid only. In the Cassie–Baxter state, liquids sit upon asperities, resulting in air pockets that are bounded between the surface and the liquid.
Homogeneous surfaces
The Cassie-Baxter equation is not restricted to only chemically heterogeneous surfaces, as air within porous homogeneous surfaces will make the system heterogeneous. However, if the liquid penetrates the grooves, the surface returns to homogeneity and neither of the previous equations can be used. In this case the liquid is in the Wenzel state, governed by a separate equation. Transitions between the Cassie-Baxter state and the Wenzel state can take place when external stimuli such as pressure or vibration are applied to the liquid on the surface.
Equation origin
When a liquid droplet interacts with a solid surface, its behaviour is governed by surface tension and energy. The liquid droplet could spread indefinitely or it could sit on the surface like a spherical cap at which point there exists a contact angle.
Defining $\Delta G$ as the free energy change per unit area caused by the liquid spreading,

$\Delta G = f_1(\gamma_{1,SL} - \gamma_{1,SA}) + f_2(\gamma_{2,SL} - \gamma_{2,SA}),$

where $f_1$ and $f_2$ are the fractional areas of the two materials on the heterogeneous surface, and $\gamma_{SL}$ and $\gamma_{SA}$ the interfacial tensions between solid and liquid, and solid and air, respectively.

The contact angle for the heterogeneous surface is given by

$\cos\theta_c = -\frac{\Delta G}{\gamma_{LA}},$

with $\gamma_{LA}$ the interfacial tension between liquid and air.

The contact angle given by the Young equation is

$\cos\theta_1 = \frac{\gamma_{1,SA} - \gamma_{1,SL}}{\gamma_{LA}}$ (and similarly for component 2).

Thus, by substituting the first expression into Young's equation, we arrive at Cassie's law for heterogeneous surfaces,

$\cos\theta_c = f_1\cos\theta_1 + f_2\cos\theta_2.$
History behind Cassie's law
Young's law
Studies concerning the contact angle existing between a liquid and a solid surface began with Thomas Young in 1805. The Young equation

$\cos\theta = \frac{\gamma_{SA} - \gamma_{SL}}{\gamma_{LA}}$

reflects the relative strength of the interaction between surface tensions at the three-phase contact, and is the geometric ratio between the energy gained in forming a unit area of the solid–liquid interface and that required to form a liquid–air interface. However, Young's equation assumes ideal, smooth surfaces, and in practice most surfaces are microscopically rough.
Wenzel state
In 1936, Young's equation was modified by Robert Wenzel to account for rough homogeneous surfaces, and a parameter $r$ was introduced, defined as the ratio of the true area of the solid compared to its nominal area. Known as the Wenzel equation,

$\cos\theta_W = r\cos\theta_Y$

shows that the apparent contact angle, the angle measured at casual inspection, will increase if the surface is roughened. Liquids with apparent contact angle $\theta_W$ are known to be in the Wenzel state.
Cassie-Baxter state
The notion of roughness affecting the contact angle was extended by Cassie and Baxter in 1944 when they focused on porous media, where liquid does not penetrate the grooves on the rough surface and leaves air gaps. They devised the Cassie–Baxter equation:

$\cos\theta_{CB} = f_1\cos\theta_1 - f_2,$

sometimes written as

$\cos\theta_{CB} = f\cos\theta_Y + f - 1,$

where the $f_1$ has become $f$ and $f_2 = 1 - f$.
Cassie's Law
In 1948, Cassie refined this for two materials with different chemistries on both smooth and rough surfaces, resulting in the aforementioned Cassie's law.
Arguments and inconsistencies
Following the discovery of superhydrophobic surfaces in nature and the growth of their application in industry, the study of contact angles and wetting has been widely reexamined. Some claim that Cassie's equations are more fortuitous than fact, with it being argued that emphasis should not be placed on fractional contact areas but on the behaviour of the liquid at the three-phase contact line. They do not argue against ever using the Wenzel and Cassie–Baxter equations, but that "they should be used with knowledge of their faults". However, the debate continues, as this argument was evaluated and criticised with the conclusion that contact angles on surfaces can be described by the Cassie and Cassie–Baxter equations provided the surface fraction and roughness parameters are reinterpreted to take local values appropriate to the droplet. This is why Cassie's law is actually more of a rule.

Examples
It is widely agreed that the water repellency of biological objects is due to the Cassie–Baxter equation. If water has a contact angle between 0° and 90°, then the surface is classed as hydrophilic, whereas a surface producing a contact angle between 90° and 180° is hydrophobic. In the special case where the contact angle exceeds about 150°, the surface is known as superhydrophobic.
Lotus Effect
One example of a superhydrophobic surface in nature is the lotus leaf. Lotus leaves have a typical contact angle well above 150°, ultra-low water adhesion due to minimal contact areas, and a self-cleaning property which is characterised by the Cassie–Baxter equation. The microscopic architecture of the lotus leaf means that water will not penetrate the nanofolds on the surface, leaving air pockets below. The water droplets become suspended in the Cassie–Baxter state and are able to roll off the leaf, picking up dirt as they do so, thus cleaning the leaf.
Feathers
The Cassie–Baxter wetting regime also explains the water-repellent features of the pennae (feathers) of a bird. The feather consists of a topographical network of 'barbs and barbules', and a droplet deposited on these resides in a solid-liquid-air non-wetting composite state, where tiny air pockets are trapped within.
See also
Wetting
Ultrahydrophobicity
Contact angle
Goniometer
Droplet
Lotus effect
References
Fluid mechanics
Surface science | Cassie's law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,348 | [
"Civil engineering",
"Fluid mechanics",
"Condensed matter physics",
"Surface science"
] |
3,804,402 | https://en.wikipedia.org/wiki/Bigraph | A bigraph can be modelled as the superposition of a graph (the link graph) and a set of trees (the place graph).
Each node of the bigraph is part of a graph and also part of some tree that describes how the nodes are nested. Bigraphs can be conveniently and formally displayed as diagrams. They have applications in the modelling of distributed systems for ubiquitous computing and can be used to describe mobile interactions. They have also been used by Robin Milner in an attempt to subsume Calculus of Communicating Systems (CCS) and π-calculus. They have been studied in the context of category theory.
Anatomy of a bigraph
Aside from nodes and (hyper-)edges, a bigraph may have associated with it one or more regions which are roots in the place forest, and zero or more holes in the place graph, into which other bigraph regions may be inserted. Similarly, to nodes we may assign controls that define identities and an arity (the number of ports for a given node to which link-graph edges may connect). These controls are drawn from a bigraph signature. In the link graph we define inner and outer names, which define the connection points at which coincident names may be fused to form a single link.
Foundations
A bigraph is a 5-tuple:

$(V, E, ctrl, prnt, link) : \langle k, X \rangle \to \langle m, Y \rangle,$

where $V$ is a set of nodes, $E$ is a set of edges, $ctrl$ is the control map that assigns controls to nodes, $prnt$ is the parent map that defines the nesting of nodes, and $link$ is the link map that defines the link structure.

The notation $\langle k, X \rangle \to \langle m, Y \rangle$ indicates that the bigraph has $k$ holes (sites) and a set of inner names $X$, and $m$ regions with a set of outer names $Y$. These are respectively known as the inner and outer interfaces of the bigraph.
Formally speaking, each bigraph is an arrow in a symmetric partial monoidal category (usually abbreviated spm-category) in which the objects are these interfaces. As a result, the composition of bigraphs is definable in terms of the composition of arrows in the category.
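For readers who think in code, a minimal sketch of the 5-tuple as a plain data structure follows; the field names are illustrative choices, not taken from any standard bigraph tool:

```python
from dataclasses import dataclass

@dataclass
class Bigraph:
    """Bare-bones record of a bigraph (V, E, ctrl, prnt, link) with
    interfaces <k, X> -> <m, Y>."""
    nodes: set          # V
    edges: set          # E
    ctrl: dict          # node -> control (an identity plus an arity)
    prnt: dict          # node or site -> parent node or region (place graph)
    link: dict          # port or inner name -> edge or outer name (link graph)
    holes: int          # k: sites of the place graph
    inner_names: set    # X
    regions: int        # m: roots of the place forest
    outer_names: set    # Y
```

In such an encoding, composition of two bigraphs would be defined only when the outer interface ⟨m, Y⟩ of one matches the inner interface ⟨k, X⟩ of the other, mirroring the composition of arrows in the spm-category just described.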
Extensions and variants
Directed Bigraphs
Directed Bigraphs are a generalisation of bigraphs where hyper-edges of the link-graph are directed. Ports and names of the interfaces are extended with a polarity (positive or negative) with the requirement that the direction of hyper-edges goes from negative to positive.
Directed bigraphs were introduced as a meta-model for describing computation paradigms dealing with locations and resource communication where a directed link-graph provides a natural description of resource dependencies or information flow. Examples of areas of applications are security protocols, resource access management, and cloud computing.
Bigraphs with sharing
Bigraphs with sharing are a generalisation of Milner's formalisation that allows for a straightforward representation of overlapping or intersecting spatial locations. In bigraphs with sharing, the place graph is defined as a directed acyclic graph (DAG), i.e. is a binary relation instead of a map. The definition of link graph is unaffected by the introduction of sharing. Note that standard bigraphs are a sub-class of bigraphs with sharing.
Areas of application of bigraphs with sharing include wireless networking protocols, real-time management of domestic wireless networks and mixed reality systems.
Tools and Implementations
BigraphER is a modelling and reasoning environment for bigraphs consisting of an OCaml library and a command-line tool providing an efficient implementation of rewriting, simulation, and visualisation for both bigraphs and bigraphs with sharing.
jLibBig is a Java library providing efficient and extensible implementation of reactive systems for both bigraphs and directed bigraphs.
No longer actively developed:
BigMC is a model checker for bigraphs which includes a command-line interface and visualisation.
Big Red is a graphical editor for bigraphs with easily extensible support for various file formats.
SBAM is a stochastic simulator for bigraphs, aimed at simulation of biological models.
DBAM is a distributed simulator for reactive systems.
DBtk is a toolkit for directed bigraphs that provides calculation of IPOs, matching, and visualisation.
See also
Bisimulation
Combinatorial species
Bibliography
References
External links
Bibliography on Bigraphs
Formal methods
Theoretical computer science | Bigraph | [
"Mathematics",
"Engineering"
] | 873 | [
"Theoretical computer science",
"Applied mathematics",
"Software engineering",
"Formal methods"
] |
3,804,552 | https://en.wikipedia.org/wiki/Horner%E2%80%93Wadsworth%E2%80%93Emmons%20reaction | The Horner–Wadsworth–Emmons (HWE) reaction is a chemical reaction used in organic chemistry of stabilized phosphonate carbanions with aldehydes (or ketones) to produce predominantly E-alkenes.
In 1958, Leopold Horner published a modified Wittig reaction using phosphonate-stabilized carbanions. William S. Wadsworth and William D. Emmons further defined the reaction.
In contrast to phosphonium ylides used in the Wittig reaction, phosphonate-stabilized carbanions are more nucleophilic but less basic. Likewise, phosphonate-stabilized carbanions can be alkylated. Unlike phosphonium ylides, the dialkylphosphate salt byproduct is easily removed by aqueous extraction.
Several reviews have been published.
Reaction mechanism
The Horner–Wadsworth–Emmons reaction begins with the deprotonation of the phosphonate to give the phosphonate carbanion 1. Nucleophilic addition of the carbanion onto the aldehyde 2 (or ketone), producing 3a or 3b, is the rate-limiting step. If R2 = H, then intermediates 3a and 4a and intermediates 3b and 4b can interconvert with each other. The final elimination of oxaphosphetanes 4a and 4b yields (E)-alkene 5 and (Z)-alkene 6, with the by-product being a dialkyl phosphate.
The ratio of alkene isomers 5 and 6 is dependent upon the stereochemical outcome of the initial carbanion addition and upon the ability of the intermediates to equilibrate.
The electron-withdrawing group (EWG) alpha to the phosphonate is necessary for the final elimination to occur. In the absence of an electron-withdrawing group, the final product is the β-hydroxyphosphonate 3a and 3b. However, these β-hydroxyphosphonates can be transformed to alkenes by reaction with diisopropylcarbodiimide.
Stereoselectivity
The Horner–Wadsworth–Emmons reaction favours the formation of (E)-alkenes. In general, the more equilibration amongst intermediates, the higher the selectivity for (E)-alkene formation.
Disubstituted alkenes
Thompson and Heathcock have performed a systematic study of the reaction of methyl 2-(dimethoxyphosphoryl)acetate with various aldehydes. While each effect was small, they had a cumulative effect making it possible to modify the stereochemical outcome without modifying the structure of the phosphonate. They found greater (E)-stereoselectivity with the following conditions:
Increasing steric bulk of the aldehyde
Higher reaction temperatures (23 °C over −78 °C)
Li > Na > K salts
In a separate study, it was found that bulky phosphonate and bulky electron-withdrawing groups enhance E-alkene selectivity.
Trisubstituted alkenes
The steric bulk of the phosphonate and electron-withdrawing groups plays a critical role in the reaction of α-branched phosphonates with aliphatic aldehydes.
Aromatic aldehydes produce almost exclusively (E)-alkenes. In case (Z)-alkenes from aromatic aldehydes are needed, the Still–Gennari modification (see below) can be used.
Olefination of ketones
The stereoselectivity of the Horner–Wadsworth–Emmons reaction of ketones is poor to modest.
Variations
Base sensitive substrates
Since many substrates are not stable to sodium hydride, several procedures have been developed using milder bases. Masamune and Roush have developed mild conditions using lithium chloride and DBU. Rathke extended this to lithium or magnesium halides with triethylamine. Several other bases have been found effective.
Still–Gennari modification
W. Clark Still and C. Gennari have developed conditions that give Z-alkenes with excellent stereoselectivity. Using phosphonates with electron-withdrawing groups (trifluoroethyl) together with strongly dissociating conditions (KHMDS and 18-crown-6 in THF) nearly exclusive Z-alkene production can be achieved.
Ando has suggested that the use of electron-deficient phosphonates accelerates the elimination of the oxaphosphetane intermediates.
See also
Wittig reaction
Michaelis–Arbuzov reaction
Michaelis–Becker reaction
Peterson reaction
Tebbe olefination
References
Olefination reactions
Carbon-carbon bond forming reactions
Name reactions | Horner–Wadsworth–Emmons reaction | [
"Chemistry"
] | 1,005 | [
"Olefination reactions",
"Carbon-carbon bond forming reactions",
"Coupling reactions",
"Organic reactions",
"Name reactions"
] |
28,900,256 | https://en.wikipedia.org/wiki/Mean%20inter-particle%20distance | Mean inter-particle distance (or mean inter-particle separation) is the mean distance between microscopic particles (usually atoms or molecules) in a macroscopic body.
Ambiguity
From very general considerations, the mean inter-particle distance is proportional to the size of the per-particle volume $1/n$, i.e.,

$\langle r \rangle \sim 1/n^{1/3},$

where $n$ is the particle density. However, barring a few simple cases such as the ideal gas model, precise calculations of the proportionality factor are impossible analytically. Therefore, approximate expressions are often used. One such estimation is the Wigner–Seitz radius

$\left( \frac{3}{4\pi n} \right)^{1/3},$

which corresponds to the radius of a sphere having the per-particle volume $1/n$. Another popular definition is

$1/n^{1/3},$

corresponding to the length of the edge of the cube with the per-particle volume $1/n$. The two definitions differ by a factor of approximately 1.61, so one has to exercise care if an article fails to define the parameter exactly. On the other hand, it is often used in qualitative statements where such a numeric factor is either irrelevant or plays an insignificant role, e.g.,
"a potential energy ... is proportional to some power n of the inter-particle distance r" (Virial theorem)
"the inter-particle distance is much larger than the thermal de Broglie wavelength" (Kinetic theory)
Ideal gas
Nearest neighbor distribution
We want to calculate the probability distribution function of the distance to the nearest neighbor (NN) particle. (The problem was first considered by Paul Hertz; for a modern derivation see the references.) Let us assume $N$ particles inside a sphere having volume $V$, so that $n = N/V$. Note that since the particles in the ideal gas are non-interacting, the probability to find a particle at a certain distance from another particle is the same as the probability to find a particle at the same distance from any other point; we shall use the center of the sphere.

An NN particle at a distance $r$ means that exactly one of the $N$ particles resides at that distance while the remaining $N - 1$ particles are at larger distances, i.e., they are somewhere outside the sphere with radius $r$.

The probability to find a particle at a distance from the origin between $r$ and $r + dr$ is $4\pi r^2\,dr/V$, and we have $N$ ways to choose which particle, while the probability to find a particle outside that sphere is $1 - \frac{4\pi r^3}{3V}$. The sought-for expression is then

$P_N(r) = \frac{4\pi r^2 N}{V}\left(1 - \frac{4\pi r^3}{3V}\right)^{N-1} = \frac{3 r^2}{a^3}\left(1 - \frac{r^3}{N a^3}\right)^{N-1},$

where we substituted

$a = \left(\frac{3V}{4\pi N}\right)^{1/3} = \left(\frac{3}{4\pi n}\right)^{1/3}.$

Note that $a$ is the Wigner–Seitz radius. Finally, taking the limit $N \to \infty$ and using $\lim_{N\to\infty}\left(1 + \frac{x}{N}\right)^{N} = e^{x}$, we obtain

$P(r) = \frac{3 r^2}{a^3}\, e^{-(r/a)^3}.$
One can immediately check that

$\int_0^\infty P(r)\,dr = 1.$

The distribution peaks at

$r_{\text{peak}} = \left(\frac{2}{3}\right)^{1/3} a \approx 0.874\,a.$
Mean distance and higher moments
$\langle r^k \rangle = \int_0^\infty r^k\, P(r)\,dr,$

or, using the substitution $x = (r/a)^3$,

$\langle r^k \rangle = a^k \int_0^\infty x^{k/3}\, e^{-x}\,dx = a^k\, \Gamma\!\left(1 + \frac{k}{3}\right),$

where $\Gamma$ is the gamma function. Thus,

$\langle r^k \rangle = a^k\, \Gamma\!\left(1 + \frac{k}{3}\right).$

In particular,

$\langle r \rangle = a\, \Gamma\!\left(\frac{4}{3}\right) \approx 0.893\,a.$
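A quick numerical check of the limiting distribution: the minimal Python sketch below samples $P(r)$ by inverse-transform sampling and compares the sample mean against $\Gamma(4/3)\,a$; the density and sample count are arbitrary choices:

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(0)

n = 1.0                                    # particle density (arbitrary units)
a = (3.0 / (4.0 * pi * n)) ** (1.0 / 3.0)  # Wigner-Seitz radius

# Inverse-transform sampling of P(r) = (3 r^2 / a^3) exp(-(r/a)^3):
# the CDF is F(r) = 1 - exp(-(r/a)^3), so r = a * (-ln(1 - u))^(1/3).
u = rng.random(1_000_000)
r = a * (-np.log1p(-u)) ** (1.0 / 3.0)

print(r.mean())               # Monte-Carlo estimate of <r>
print(gamma(4.0 / 3.0) * a)   # analytic value, ~0.893 a
```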
References
See also
Wigner–Seitz radius
Concepts in physics
Density | Mean inter-particle distance | [
"Physics",
"Mathematics"
] | 530 | [
"Physical quantities",
"Quantity",
"Mass",
"Density",
"nan",
"Wikipedia categories named after physical quantities",
"Matter"
] |
28,903,728 | https://en.wikipedia.org/wiki/Thermoremanent%20magnetization | When an igneous rock cools, it acquires a thermoremanent magnetization (TRM) from the Earth's field. TRM can be much larger than it would be if exposed to the same field at room temperature (see isothermal remanence). This remanence can also be very stable, lasting without significant change for millions of years. TRM is the main reason that paleomagnetists are able to deduce the direction and magnitude of the ancient Earth's field.
History
As early as the eleventh century, the Chinese were aware that a piece of iron could be magnetized by heating it until it was red hot, then quenching in water. While quenching it was oriented in the Earth's field to get the desired polarity. In 1600, William Gilbert published De Magnete (1600), a report of a series of meticulous experiments in magnetism. In it, he described the quenching of a steel rod in the direction of the Earth's field, and he may have been aware of the Chinese work.
In the early 20th century, a few investigators found that igneous rocks had a remanence that was much more intense than remanence acquired in the Earth's field without heating; that heating rocks in the Earth's magnetic field could magnetize them in the direction of the field; and that the Earth's field had reversed its direction in the past.
In paleomagnetism
Demagnetization
It has long been known that a TRM can be removed if it is heated above the Curie temperature of the minerals carrying it. A TRM can also be partially demagnetized by heating up to some lower temperature and cooling back to room temperature. A common procedure in paleomagnetism is stepwise demagnetization, in which the sample is heated to a series of increasing temperatures $T_1 < T_2 < \dots$, cooling to room temperature and measuring the remaining remanence between each heating step. The series of remanences can be plotted in a variety of ways, depending on the application.
Partial TRM
If a rock is later re-heated (as a result of burial, for example), part or all of the TRM can be replaced by a new remanence. If it is only part of the remanence, it is known as partial thermoremanent magnetization (pTRM). Because numerous experiments have been done modeling different ways of acquiring remanence, pTRM can have other meanings. For example, it can also be acquired in the laboratory by cooling in zero field to a temperature $T_1$ (below the Curie temperature), applying a magnetic field and cooling to a temperature $T_2$, then cooling the rest of the way to room temperature in zero field.
Ideal TRM behavior
The Thellier laws
The ideal TRM is one that can record the magnetic field in such a way that both its direction and intensity can be measured by some process in the lab. Thellier showed that this could be done if pTRMs satisfied four laws. Suppose that $A$ and $B$ are two non-overlapping temperature intervals. Suppose that $M_A$ is a pTRM that is acquired by cooling the sample to room temperature, only switching the field $H$ on while the temperature is in interval $A$; $M_B$ has a similar definition. The Thellier laws are
Linearity: $M_A$ and $M_B$ are proportional to $H$ when $H$ is not much larger than the present Earth's field.
Reciprocity: $M_A$ can be removed by heating through temperature interval $A$, and $M_B$ through $B$.
Independence: $M_A$ and $M_B$ are independent.
Additivity: If $M_{A\cup B}$ is acquired by turning the field on in both temperature intervals, $M_{A\cup B} = M_A + M_B$.
If these laws hold for any non-overlapping temperature intervals $A$ and $B$, the sample satisfies the Thellier laws.
A simple model for the Thellier laws
Suppose that a sample has a lot of magnetic minerals, each of which has the following property: it is superparamagnetic until the temperature reaches a blocking temperature $T_B$ that is independent of magnetic field for small fields. No irreversible changes occur at temperatures below $T_B$. If the resulting TRM is heated in zero field, it becomes superparamagnetic again at an unblocking temperature $T_{UB}$ that is equal to $T_B$. Then it is easy to verify that reciprocity, independence and additivity hold. It only remains for linearity to be satisfied for all the Thellier laws to be obeyed.
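A toy numerical illustration of this blocking-temperature model: in the minimal Python sketch below, the grain count, blocking temperatures, and per-grain moments are invented for illustration, and the check shows that pTRMs from disjoint temperature intervals add up to the pTRM of their union:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy sample: grains with blocking temperatures (deg C) and
# per-grain moments acquired per unit field.
Tb = rng.uniform(100.0, 580.0, size=10_000)
m = rng.uniform(0.5, 1.5, size=Tb.size)

def ptrm(interval, H=1.0):
    """pTRM acquired with the field H on only while T is in `interval`:
    each grain whose blocking temperature lies in the interval is frozen in."""
    lo, hi = interval
    mask = (Tb > lo) & (Tb <= hi)
    return H * m[mask].sum()

A, B = (300.0, 450.0), (450.0, 580.0)
print(ptrm(A) + ptrm(B))     # additivity: M_A + M_B ...
print(ptrm((300.0, 580.0)))  # ... equals M_{A union B} exactly here
```

Reciprocity holds in the same toy model because zero-field reheating through an interval unblocks exactly the grains whose blocking temperatures lie in that interval.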
The Néel model for single-domain TRM
Louis Néel developed a physical model that showed how real magnetic minerals could have the above properties. It applies to particles that are single-domain, having a uniform magnetization that can only rotate as a unit.
See also
Rock magnetism
References
Rock magnetism
Geomagnetism
Ferromagnetism | Thermoremanent magnetization | [
"Chemistry",
"Materials_science"
] | 965 | [
"Magnetic ordering",
"Ferromagnetism"
] |
31,477,756 | https://en.wikipedia.org/wiki/Quantum%20capacity | In the theory of quantum communication, the quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver. It is also equal to the highest rate at which entanglement can be generated over the channel, and forward classical communication cannot improve it. The quantum capacity theorem is important for the theory of quantum error correction, and more broadly for the theory of quantum computation. The theorem giving a lower bound on the quantum capacity of any channel is colloquially known as the LSD theorem, after the authors Lloyd, Shor, and Devetak who proved it with increasing standards of rigor.
Hashing bound for Pauli channels
The LSD theorem states that the coherent information of a quantum channel is an achievable rate for reliable quantum communication. For a Pauli channel, the coherent information has a simple form and the proof that it is achievable is particularly simple as well. We prove the theorem for this special case by exploiting random stabilizer codes and correcting only the likely errors that the channel produces.
Theorem (hashing bound). There exists a stabilizer quantum error-correcting code that achieves the hashing limit $R = 1 - H(\mathbf{p})$ for a Pauli channel of the following form:

$\rho \mapsto p_I \rho + p_X X\rho X + p_Y Y\rho Y + p_Z Z\rho Z,$

where $\mathbf{p} = (p_I, p_X, p_Y, p_Z)$ and $H(\mathbf{p})$ is the entropy of this probability vector.
Proof. Consider correcting only the typical errors. That is, consider defining the typical set of errors as follows:

$T_\delta^{p^n} \equiv \left\{ a^n : \left| -\tfrac{1}{n}\log_2 \Pr\{E_{a^n}\} - H(\mathbf{p}) \right| \leq \delta \right\},$

where $a^n$ is some sequence consisting of the letters $\{I, X, Y, Z\}$ and $\Pr\{E_{a^n}\}$ is the probability that an IID Pauli channel issues some tensor-product error $E_{a^n} \equiv E_{a_1} \otimes \cdots \otimes E_{a_n}$. This typical set consists of the likely errors in the sense that

$\sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\} \geq 1 - \epsilon,$

for all $\epsilon > 0$ and sufficiently large $n$. The error-correcting conditions for a stabilizer code $\mathcal{S}$ in this case are that $\{E_{a^n} : a^n \in T_\delta^{p^n}\}$ is a correctable set of errors if

$E_{b^n}^\dagger E_{a^n} \notin N(\mathcal{S}) \setminus \mathcal{S},$

for all error pairs $E_{a^n}$ and $E_{b^n}$ such that $a^n, b^n \in T_\delta^{p^n}$, where $N(\mathcal{S})$ is the normalizer of $\mathcal{S}$. Also, we consider the expectation of the error probability under a random choice of a stabilizer code.
Proceed as follows:

$\mathbb{E}_{\mathcal{S}}\{p_e\} = \mathbb{E}_{\mathcal{S}}\Big\{\sum_{a^n} \Pr\{E_{a^n}\}\,\mathcal{I}\big(E_{a^n}\text{ is uncorrectable under }\mathcal{S}\big)\Big\}$
$\leq \mathbb{E}_{\mathcal{S}}\Big\{\sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\mathcal{I}\big(E_{a^n}\text{ is uncorrectable under }\mathcal{S}\big)\Big\} + \epsilon$
$= \sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\mathbb{E}_{\mathcal{S}}\big\{\mathcal{I}\big(E_{a^n}\text{ is uncorrectable under }\mathcal{S}\big)\big\} + \epsilon$
$= \sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\Pr_{\mathcal{S}}\big\{E_{a^n}\text{ is uncorrectable under }\mathcal{S}\big\} + \epsilon.$

The first equality follows by definition—$\mathcal{I}$ is an indicator function equal to one if $E_{a^n}$ is uncorrectable under $\mathcal{S}$ and equal to zero otherwise. The first inequality follows, since we correct only the typical errors because the atypical error set has negligible probability mass. The second equality follows by exchanging the expectation and the sum. The third equality follows because the expectation of an indicator function is the probability that the event it selects occurs.
Continuing, we have:

$= \sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\Pr_{\mathcal{S}}\big\{\exists\, b^n \in T_\delta^{p^n},\ b^n \neq a^n : E_{b^n}^\dagger E_{a^n} \in N(\mathcal{S}) \setminus \mathcal{S}\big\} + \epsilon$
$\leq \sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\Pr_{\mathcal{S}}\big\{\exists\, b^n \in T_\delta^{p^n},\ b^n \neq a^n : E_{b^n}^\dagger E_{a^n} \in N(\mathcal{S})\big\} + \epsilon$
$= \sum_{a^n \in T_\delta^{p^n}} \Pr\{E_{a^n}\}\,\Pr_{\mathcal{S}}\Big\{\bigcup_{b^n \in T_\delta^{p^n},\, b^n \neq a^n} E_{b^n}^\dagger E_{a^n} \in N(\mathcal{S})\Big\} + \epsilon$
$\leq \sum_{a^n,\, b^n \in T_\delta^{p^n},\, b^n \neq a^n} \Pr\{E_{a^n}\}\,\Pr_{\mathcal{S}}\big\{E_{b^n}^\dagger E_{a^n} \in N(\mathcal{S})\big\} + \epsilon$
$\leq \sum_{a^n,\, b^n \in T_\delta^{p^n},\, b^n \neq a^n} \Pr\{E_{a^n}\}\; 2^{-(n-k)} + \epsilon.$

The first equality follows from the error-correcting conditions for a quantum stabilizer code, where $N(\mathcal{S})$ is the normalizer of $\mathcal{S}$. The first inequality follows by ignoring any potential degeneracy in the code—we consider an error uncorrectable if it lies in the normalizer, and the probability can only be larger because $N(\mathcal{S}) \setminus \mathcal{S} \subseteq N(\mathcal{S})$. The second equality follows by realizing that the probabilities for the existence criterion and the union of events are equivalent. The second inequality follows by applying the union bound. The third inequality follows from the fact that the probability for a fixed operator not equal to the identity commuting with the stabilizer operators of a random stabilizer can be upper bounded as follows:

$\Pr_{\mathcal{S}}\big\{E_{b^n}^\dagger E_{a^n} \in N(\mathcal{S})\big\} = \frac{2^{n+k}-1}{2^{2n}-1} \leq 2^{-(n-k)}.$
The reasoning here is that the random choice of a stabilizer code is equivalent to fixing operators $Z_1, \dots, Z_{n-k}$ and performing a uniformly random Clifford unitary. The probability that a fixed operator commutes with the transformed stabilizer generators is then just the number of non-identity operators in the normalizer ($2^{n+k} - 1$) divided by the total number of non-identity operators ($2^{2n} - 1$). After applying the above bound, we then exploit the following typicality bounds:

$\Pr\{E_{a^n}\} \leq 2^{-n(H(\mathbf{p}) - \delta)} \quad \text{for all } a^n \in T_\delta^{p^n},$
$\big|T_\delta^{p^n}\big| \leq 2^{n(H(\mathbf{p}) + \delta)}.$
This gives

$\mathbb{E}_{\mathcal{S}}\{p_e\} \leq 2^{n(H(\mathbf{p})+\delta)}\, 2^{-(n-k)} + \epsilon.$

We conclude that as long as the rate $k/n < 1 - H(\mathbf{p}) - \delta$, the expectation of the error probability becomes arbitrarily small, so that there exists at least one choice of a stabilizer code with the same bound on the error probability.
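For concreteness, a minimal Python sketch evaluating the hashing rate $R = 1 - H(\mathbf{p})$ follows; the depolarizing-channel parametrization used in the example is a standard illustration, not part of the theorem itself:

```python
from math import log2

def entropy(probs):
    """Shannon entropy (in bits) of a probability vector."""
    return -sum(q * log2(q) for q in probs if q > 0)

def hashing_rate(p_i, p_x, p_y, p_z):
    """Achievable rate R = 1 - H(p) for a Pauli channel with the given
    probabilities; a negative value means the bound guarantees no rate."""
    return 1.0 - entropy([p_i, p_x, p_y, p_z])

# Depolarizing channel: total error probability p, split evenly over X, Y, Z.
p = 0.1
print(hashing_rate(1.0 - p, p / 3, p / 3, p / 3))  # ~0.373
```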
See also
Quantum computing
References
Quantum information science
Quantum information theory
Models of computation
Quantum cryptography
Theoretical computer science
Classes of computers
Information theory
Computational complexity theory
Limits of computation | Quantum capacity | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 791 | [
"Physical phenomena",
"Telecommunications engineering",
"Theoretical computer science",
"Applied mathematics",
"Computer systems",
"Computer science",
"Information theory",
"Computers",
"Limits of computation",
"Classes of computers"
] |
31,479,005 | https://en.wikipedia.org/wiki/Dynamic%20timing%20verification | Dynamic timing verification is a verification that an ASIC design is fast enough to run without errors at the targeted clock rate. This is accomplished by simulating the design files used to synthesize the integrated circuit (IC) design. This is in contrast to static timing analysis, which has a similar goal as dynamic timing verification except it does not require simulating the real functionality of the IC.
Hobbyists often perform a type of dynamic timing verification when they overclock the CPUs in their computers in order to find the fastest clock rate at which they can run the CPU without errors. This verification, however, is performed after the silicon is manufactured. In the field of ASIC design, timing verification is preferably performed before manufacturing the IC in order to make sure that the IC works under the required conditions before mass production.
See also
Dynamic timing analysis
References
Timing in electronic circuits
Formal methods | Dynamic timing verification | [
"Engineering"
] | 186 | [
"Software engineering",
"Formal methods"
] |
31,479,217 | https://en.wikipedia.org/wiki/Non-B%20database | Non-B DB is a database integrating annotations and analysis of non-B DNA-forming sequence motifs. The database provides alternative DNA structure predictions including Z-DNA motifs, quadruplex-forming motifs, inverted repeats, mirror repeats and direct repeats and their associated subsets of cruciforms, triplex and slipped structures, respectively.
See also
B-DNA
non-B DNA
References
External links
http://nonb.abcc.ncifcrf.gov.
Biological databases
DNA
Biophysics
Molecular geometry
Helices | Non-B database | [
"Physics",
"Chemistry",
"Biology"
] | 111 | [
"Applied and interdisciplinary physics",
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Bioinformatics",
"Biophysics",
"Stereochemistry stubs",
"Biological databases",
"Matter"
] |
31,479,811 | https://en.wikipedia.org/wiki/CyTOF | Cytometry by time of flight, or CyTOF, is an application of mass cytometry used to quantify labeled targets on the surface and interior of single cells. CyTOF allows the quantification of multiple cellular components simultaneously using an ICP-MS detector.
CyTOF takes advantage of immunolabeling to quantify proteins, carbohydrates or lipids in a cell. Targets are selected to answer a specific research question and are labeled with lanthanide metal tagged antibodies. Labeled cells are nebulized and mixed with heated argon gas to dry the cell containing particles. The sample-gas mixture is focused and ignited with an argon plasma torch. This breaks the cells into their individual atoms and creates an ion cloud. Abundant low weight ions generated from environmental air and biological molecules are removed using a quadrupole mass analyzer. The remaining heavy ions from the antibody tags are quantified by Time-of-flight mass spectrometry. Ion abundances correlate with the amount of target per cell and can be used to infer cellular qualities.
Mass spectrometry's sensitivity to detect different ions allows measurements of upwards of 50 targets per cell while avoiding issues with spectral overlap seen when using fluorescent probes. However, this sensitivity also means trace heavy metal contamination is a concern. Using large numbers of probes creates new problems in analyzing the high dimensional data generated.
History
In 1994 Tsutomu Nomizu and colleagues at Nagoya University performed the first mass spectrometry experiments on single cells. Nomizu realized that single cells could be nebulized, dried, and ignited in plasma to generate clouds of ions which could be detected by emission spectrometry. In this type of experiment, elements such as calcium within the cell could be quantified. Inspired by flow cytometry, in 2007 Scott D. Tanner built upon this ICP-MS approach with the first multiplexed assay using lanthanide metals to label DNA and cell surface markers. In 2008 Tanner described the tandem attachment of a flow cytometer to an ICP-MS instrument, as well as new antibody tags that would allow massively multiplexed analysis of cell markers. By further optimizing the detection speed and sensitivity of this flow cytometry coupled to ICP-MS, they built the first CyTOF instrument.
The CyTOF instrument was originally owned by the Canadian company DVS Sciences but is now the exclusive product of Fluidigm after its 2014 acquisition of DVS Sciences. In 2022 Fluidigm received a capital infusion and changed its name to Standard BioTools. There have been four iterations of the CyTOF apparatus, named CyTOF, CyTOF2, Helios™ and CyTOF XT. The successive improvements were largely in detection range, software parameters and throughput, with the Helios instrument able to detect metals ranging from yttrium-89 to bismuth-209 and to analyze 2,000 events per minute.
Workflow
The lanthanide group of elements is used for tagging antibodies, as the background in biological samples is very low. When choosing the appropriate isotope for the biomarker, low-expression biomarkers should be paired with an isotope that has high signal intensity. If a less pure isotope must be used, it should be paired with a low-expression biomarker, to minimize any non-specific binding or background.
Isotope polymers are constructed using diethylenetriaminepentaacetic acid (DTPA) chelator to bind ions together. The polymer terminates with a thiol or a maleimide that links it to reduced disulfides in the Fc region of the antibody. Four to five polymers are bound to an antibody, resulting in about 100 isotope atoms per antibody. Tagged antibodies may be in solution, conjugated to beads, or surface immobilized. The cell staining follows the same procedures as in fluorescent staining for flow cytometry.
To distinguish between live and dead cells, cells can be probed with rhodium, an intercalator that can only penetrate dead cells. All cells are then fixed and stained with iridium, which penetrates every cell, making it possible to identify which were alive.
The cell introduction method of the mass cytometer is an aerosol splitter injection. The cells are then captured in a stream of argon gas, then transported to the plasma where they are vaporized, atomized, and ionized. The cell is now a cloud of ions, which passes into the ion optics center. Then a time of flight analyzer is used to measure the mass of the ions.
Data analysis
Ions are accelerated through the spectrometer in pulses. The electron cloud generated from a single cell typically spans 10-150 pulses. The output of a Helios™ run is a binary integrated mass data (IMD) file that contains electron intensities measured from the ions for each mass channel. The continuous pulses must be resolved into individual cell events corresponding to the ion cloud generated from one cell. Each bin of between 10-150 pulses that passes the user-set lower convolution threshold is considered a cell event by the Helios™ software. The lower convolution threshold is the minimal ion count that must be reached across all ion channels to be considered a cell event. The value for this parameter increases with the number of ions being measured, and thus more counts are required to define a cell event when more labels are used.
For data analysis, the IMD file is converted into the flow cytometry standard (FCS) format. This file contains the total ion counts for each channel for every cell arranged in a matrix and is the same file generated during flow cytometry. Manual gating of this data can be performed as is done for flow cytometry and most of the tools available for flow cytometry analysis have been ported to CyTOF (See flow cytometry bioinformatics). CyTOF data is typically high dimensional. To delineate relationships between cell populations dimensionality reduction algorithms are often used. Several multidimensional analysis clustering algorithms are common. Popular tools include tSNE, FlowSOM, and the diffusion pseudo time (DPT). The downstream analysis methods depend on the research goals.
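As a sketch of a typical first pass over such converted data, the minimal Python example below uses a random stand-in matrix in place of real FCS-derived counts; the arcsinh cofactor of 5 is the conventional CyTOF scaling, and the file-loading step is omitted:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for a cells-by-markers matrix of ion counts taken from an FCS file.
counts = np.random.default_rng(0).poisson(5.0, size=(2000, 30)).astype(float)

# Conventional CyTOF preprocessing: arcsinh with cofactor 5 compresses
# the long right tail of the count distribution.
expr = np.arcsinh(counts / 5.0)

# Nonlinear dimensionality reduction to visualize candidate cell populations.
embedding = TSNE(n_components=2, perplexity=30.0).fit_transform(expr)
print(embedding.shape)  # (2000, 2)
```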
Applications
CyTOF provides important information at a single cell level about protein expression, immunophenotype, and functional characterization. It is a valuable tool in immunology, where the large number of parameters has helped to elucidate the workings of this complex system. For example, natural killer cells have diverse properties affected by numerous markers in various combinations, which could not be analyzed with ease prior to this technology. Simultaneously measuring many biomarkers makes it possible to identify over 30 distinct immunophenotype subsets within one complex group of cells. This can help to more fully characterize immune function, infectious disease, and cancers, and understand cells response to therapy.
Advantages and disadvantages
The major advantage of CyTOF is the ability to investigate a larger number of parameters per panel than other cytometry methods. This allows a greater understanding of complex and heterogeneous cell populations, without the need for many complex and overlapping panels. Panels can include up to 45 antibodies, as opposed to roughly 10 in conventional flow cytometry, but they require great expertise to design. However, the development of spectral flow cytometry has closed the gap between flow and mass cytometry in terms of the maximum number of antibodies that can be used. More antibodies per panel saves time, allows understanding of a larger picture, and requires fewer cells per experiment, which is particularly advantageous when samples are limited, such as with tumour studies.
The use of the heavy metal isotopes also lowers background when compared to using fluorescent antibodies. Some cell types, such as myeloid cells, have high rates of autofluorescence that create a lot of background noise in flow cytometry. However, the rare heavy metal isotopes used are not present in biological systems, therefore there is very little or no background seen, and overall sensitivity is increased. The detection overlap between the different heavy metals is also very low compared to the overlap seen in fluorescent cytometry, which makes it much simpler to design a panel of many markers.
Fluorescent dyes are subject to photobleaching, requiring the entire process to happen within a few hours after staining. Metal tagged antibodies however are viable for up to two weeks without losing signal, adding more flexibility to experiments. The stained samples can also be cryopreserved, which may be particularly useful for clinical trials when samples are collected over a longer period of time.
Costs of CyTOF are high, as the metal-tagged antibodies and antibody conjugation kits are expensive. A major downside of CyTOF is that the acquisition flow rate is quite slow compared to flow cytometry, by almost an order of magnitude. Because heavy metals are common in laboratory reagents, avoiding contamination during sample preparation is very important.
References
Microorganisms
Mass spectrometry
Biological techniques and tools | CyTOF | [
"Physics",
"Chemistry",
"Biology"
] | 1,879 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Microorganisms",
"Matter"
] |
31,481,256 | https://en.wikipedia.org/wiki/ModBase | ModBase is a database of annotated comparative protein structure models, containing models for more than 3.8 million unique protein sequences. Models are created by the comparative modeling pipeline ModPipe which relies on the MODELLER program.
ModBase is developed in the laboratory of Andrej Sali at UCSF. ModBase models are also accessible through the Protein Model Portal.
See also
Homology modeling
References
External links
http://salilab.org/modbase
Biological databases
Protein methods
Protein structure | ModBase | [
"Chemistry",
"Biology"
] | 101 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Structural biology",
"Protein structure"
] |
31,485,388 | https://en.wikipedia.org/wiki/Genetic%20editing | Genetic editing (French critique génétique; German genetische Kritik; Spanish crítica genética) is an approach to scholarly editing in which an exemplar is seen as derived from a dossier of other manuscripts and events. The derivation can be through physical cut and paste; writing or drawing in a variety of media; quotation, annotation or correction; acts of physical defacement; etc. Genetic editing aims to reconstruct the sequence of actions on the manuscript and exactly which parts of the manuscript were acted upon where multiple manuscripts have been combined (through for example cut and paste or quotation).
Overview
Whereas traditional scholarly editing can be seen as constructing a new document drawing together and comparing many source documents to cast light on a work, genetic editing closely examines a single extant manuscript and traces back each aspect to cast light on the work. Genetic editing is named by analogy with genetics: manuscripts (individuals) are derived from other manuscripts (or previous states of the same manuscript), with the derivation tree being a partially ordered tree.
Genetic editing models
Genetic editing is strong in European, particularly French and German, textual scholarship. The German tradition of genetic editing, which has been associated with synoptic telescoping, has a different method of presentation from the Anglo-American model. The primary model and test case of German editions has been Johann Wolfgang von Goethe. In England and the United States it is William Shakespeare, who did not leave manuscripts of his works. Completed works of genetic editing are known as genetic editions. These documents are similar to documentary editions, but they also include information detailing the different phases of writing and rewriting of the manuscript.
The Text Encoding Initiative's XML format has support for encoding of genetic editions.
Examples
HyperNietzsche https://web.archive.org/web/20080706123702/http://www.hypernietzsche.org/
Ulysses: A Critical and Synoptic Edition (1984; Gabler, Steppe, and Melchior)
Transforming Middlemarch: A Genetic Edition of Andrew Davies' 1994 BBC Adaptation of George Eliot's Novel https://middlemarch.dmu.ac.uk
References
Digital humanities
Editing
Textual criticism
Textual scholarship | Genetic editing | [
"Technology"
] | 455 | [
"Digital humanities",
"Computing and society"
] |
21,374,067 | https://en.wikipedia.org/wiki/Naive%20B%20cell | In immunology, a naive B cell is a B cell that has not been exposed to an antigen. These are located in the tonsils, spleen, and primary lymphoid follicles in lymph nodes.
Once exposed to an antigen, the naive B cell either becomes a memory B cell or a plasma cell that secretes antibodies specific to the antigen that was originally bound. Plasma cells do not last long in the circulation; this is in contrast to memory cells that last for very long periods of time. Memory cells do not secrete antibodies until activated by their specific antigen.
Naive B cells play a key role in predicting humoral responses to COVID-19 mRNA vaccines in immunocompromised patients; specifically, measuring naive B cell levels could help predict and improve vaccination outcomes.
Notes and references
B cells
Lymphocytes
Human cells
Immunology
Immune system | Naive B cell | [
"Biology"
] | 183 | [
"Organ systems",
"Immunology",
"Immune system"
] |
21,375,423 | https://en.wikipedia.org/wiki/List%20of%20superconductors | The table below shows some of the parameters of common superconductors. X:Y means material X doped with element Y, TC is the highest reported transition temperature in kelvins and HC is a critical magnetic field in tesla. "BCS" means whether or not the superconductivity is explained within the BCS theory.
List
Notes
References
External links
A review of 700 potential superconductors
Superconductivity
Superconductors | List of superconductors | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 94 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Superconductors",
"Electrical resistance and conductance"
] |
21,376,719 | https://en.wikipedia.org/wiki/Oxygen%20plant | Oxygen plants are industrial systems designed to generate oxygen. They typically use air as a feedstock and separate it from other components of air using pressure swing adsorption or membrane separation techniques. Such plants are distinct from cryogenic separation plants which separate and capture all the components of air.
Application
Oxygen finds broad application in various technological processes and in almost all industry branches. Its primary applications stem from its capability of sustaining combustion and from its powerful oxidant properties.
Because of this, oxygen has become widely used in metal processing, welding, cutting and brazing. In the chemical and petrochemical industries, as well as in the oil and gas sector, oxygen is used in commercial volumes as an oxidizer in chemical reactions.
Metal gas welding, cutting and brazing - The use of oxygen in gas-flame operations, such as metal welding, cutting and brazing, is one of the most significant and common applications of this gas. Oxygen allows welding torches to generate a high-temperature flame, ensuring high quality and speed of work.
Metal industry - Oxygen is heavily used in the metal industry, where it helps to increase burning temperature in the production of ferrous and non-ferrous metals and significantly improves overall process efficiency.
Chemical and petrochemical industries - In the chemical and petrochemical industries, oxygen is widely used for oxidation of raw chemicals for recovery of nitric acid, ethylene oxide, propylene oxide, vinyl chloride and other important chemical compounds.
Oil and gas industry - In the oil and gas industry, oxygen finds application as a means for viscosity improvement and enhancement of oil-and-gas flow properties. Oxygen is also used for boosting production capacity of oil cracking plants, efficiency of high-octane components processing, as well as for the reduction of sulfuric deposits in refineries.
Fish farming - The use of oxygen in fish farming helps increase survival and fertility ratios and reduce the incubation period. Along with fish culture, oxygen is applied to the rearing of shrimps, crabs and mussels.
Glass manufacture - In glass furnaces oxygen is effectively used for burning temperature increase and burning processes improvement.
Waste management - The use of oxygen in incinerators allows significantly increased flame temperatures and eventually ensures enhanced cost efficiency and incinerator production capacity.
Medicine - Medical oxygen therapy may be administered to patients for various reasons such as low oxygen saturation or to ease respiratory distress.
Adsorption technology
Adsorption principle
Gas separation by adsorption systems is based on differential rates of adsorption of the components of a gas mixture onto a solid adsorbent.
Temperature and pressure influence
Current adsorption-based methods of producing gaseous oxygen from air yield a product stream with a high oxygen fraction. The operation of a modern oxygen adsorption plant is based on the variation in uptake of a particular gas component by the adsorbent as the temperature and partial pressure of the gas are changed.
The gas adsorption and adsorbent regeneration processes may therefore be regulated by varying the pressure and temperature parameters.
Pressure swing adsorption
The oxygen plant flow process is arranged in such a way that highly absorbable gas mixture components are taken in by adsorbent, while low absorbable and non-absorbable components go through the plant. Today, there exist three methods of arranging the adsorption-based air separation process with the use of swing technologies: pressure (PSA), vacuum (VSA) and mixed (VPSA) ones. In the pressure swing adsorption flow processes, oxygen is recovered under above-atmospheric pressure and regeneration is achieved under atmospheric pressure. In vacuum swing adsorption flow processes, oxygen is recovered under atmospheric pressure, and regeneration is achieved under negative pressure. The mixed systems operation combines pressure variations from positive to negative.
Adsorption oxygen plants
The adsorption oxygen plants produce 5 to 5,000 normal cubic meters per hour of oxygen with a purity of 93-95%. These systems, designated for indoor operation, are set to effectively produce gaseous oxygen from atmospheric air.
An unquestionable advantage of adsorption-based oxygen plants is the low cost of oxygen produced in the cases where there are no rigid requirements to the product oxygen purity.
Structurally, the adsorption oxygen plant consists of several adsorbers, the compressor unit, pre-purifier unit, valve system and the plant control system.
A simple adsorber is a column filled with layers of specially selected adsorbents – granular substances preferentially adsorbing highly adsorbable components of a gas mixture.
Where gaseous oxygen purity is required at the level of 90-95% with the capacity of up to 5,000 Nm3 per hour, adsorption oxygen plants are the optimal choice. This oxygen purity may also be obtained through the use of systems based on the cryogenic technology; however, cryogenic plants are more cumbersome and complex in operation.
Membrane technology
Innovative technology available today
Some companies produce high-efficiency systems for oxygen production from atmospheric air with the help of membrane technology.
Membrane operation principle
The basis of gas media separation with the use of membrane systems is the difference in velocity with which various gas mixture components permeate membrane substance. The driving force behind the gas separation process is the difference in partial pressures on different membrane sides.
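To make the driving force concrete, here is a minimal Python sketch using the standard permeance formulation; the permeances, the selectivity of 2, and the pressures are invented round numbers, and the calculation ignores the coupling between permeate composition and permeate-side partial pressures:

```python
def flux(permeance, p_feed, p_permeate):
    """Gas flux per unit membrane area, J = Q * (p_feed - p_permeate);
    units follow whatever units the inputs use."""
    return permeance * (p_feed - p_permeate)

# Hypothetical permeances giving an O2/N2 selectivity of 2.
q_o2, q_n2 = 2.0, 1.0

# Air at 8 bar on the feed side, ~1 bar on the permeate side;
# partial pressures approximated from the feed composition throughout.
j_o2 = flux(q_o2, 0.21 * 8.0, 0.21 * 1.0)
j_n2 = flux(q_n2, 0.79 * 8.0, 0.79 * 1.0)

print(j_o2 / (j_o2 + j_n2))  # O2 fraction of the permeate, ~0.35
```

With these assumed numbers the permeate reaches roughly 35% oxygen, in line with the 30-45% enrichment figures quoted for membrane plants below.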
Membrane cartridge
A modern gas separation membrane used by GRASYS is no longer a flat plate but is formed by hollow fibers. The membrane consists of a porous polymer fiber with the gas separation layer applied to its external surface. Structurally, a hollow-fiber membrane is configured as a cylindrical cartridge representing a spool with specifically reeled polymer fiber.
Compressor and vacuum technologies
Due to the membrane material high permeability for oxygen in contrast to nitrogen, the design of membrane oxygen complexes requires a special approach. Basically, there are two membrane-based oxygen production technologies: compressor and vacuum ones.
In the case of compressor technology, air is supplied into the fiber space under excess pressure, oxygen exits the membrane under slight excess pressure, and where necessary, is pressurized by booster compressor to the required pressure level. By the use of vacuum technology, a vacuum pump is used for the achievement of partial pressures difference.
Membrane oxygen plants
Designed for indoor operation, membrane oxygen plants allow efficient air enrichment with oxygen up to a concentration of 30-45%. The complexes are rated at 5 to 5,000 Nm³/hr of oxygenated air.
In the membrane oxygen plant, gas separation is achieved in the gas separation module composed of hollow-fiber membranes and representing the plant critical and high-technology unit. Apart from the gas separation unit, other important technical components are the booster compressor or vacuum pump, pre-purifier unit, and the plant control system.
The adoption of membrane systems for air enrichment promises substantial savings where an oxygen concentration of 30-45% is sufficient to cover process needs. In addition to customer savings on the cost of product oxygen, there is a collateral economic effect based on extremely low operating costs.
With the incorporation of the membrane technology, oxygen plants have outstanding technical characteristics. Membrane oxygen plants are highly reliable due to the absence of moving parts in the gas separation module.
The systems are very simple in operation – control of all operating parameters is carried out automatically. Because of the plant's high automation degree, staffed oversight is not required during its operation.
Membrane oxygen plants are finding increasingly broad application in various industries all over the world. With moderate requirements to oxygen purity in product - up to 30-45%, membrane systems generally prove more economically sound than adsorption and cryogenic systems. In addition, membrane plants are much simpler in operation and more reliable.
Advantages of adsorption and membrane oxygen plants
Complete automation and simplicity of operation;
Staffed oversight is not required during operation;
Enhanced failure safety and reliability;
Quick start and stop;
Moderate dimensions and light weight;
Low noise level;
Extended operational life;
Low operating costs;
No special workshop requirements;
Easy installation and integration into an existing air system.
Disadvantages
Relatively low oxygen purity - 93-95% for adsorption and 30-45% for membrane plants;
Limited capacity.
High power consumption.
References
Oxygen
Industrial gases
Gas technologies | Oxygen plant | [
"Chemistry"
] | 1,679 | [
"Chemical process engineering",
"Industrial gases"
] |
21,377,293 | https://en.wikipedia.org/wiki/Environmental%20engineering%20science | Environmental engineering science (EES) is a multidisciplinary field of engineering science that combines the biological, chemical and physical sciences with the field of engineering. This major traditionally requires the student to take basic engineering classes in fields such as thermodynamics, advanced math, computer modeling and simulation and technical classes in subjects such as statics, mechanics, hydrology, and fluid dynamics. As the student progresses, the upper division elective classes define a specific field of study for the student with a choice in a range of science, technology and engineering related classes.
Difference with related fields
As a recently created program, environmental engineering science has not yet been incorporated into the terminology found among environmentally focused professionals. In the few engineering colleges that offer this major, the curriculum shares more classes in common with environmental engineering than it does with environmental science. Typically, EES students follow a similar course curriculum with environmental engineers until their fields diverge during the last year of college. The majority of the environmental engineering students must take classes designed to connect their knowledge of the environment to modern building materials and construction methods. This is meant to direct the environmental engineer into a field where they will more than likely assist in building treatment facilities, preparing environmental impact assessments or helping to mitigate air pollution from specific point sources.
Meanwhile, the environmental engineering science student will choose a direction for their career. From the range of electives they have to choose from, these students can move into a fields such as the design of nuclear storage facilities, bacterial bioreactors or environmental policies. These students combine the practical design background of an engineer with the detailed theory found in many of the biological and physical sciences.
Description at universities
Stanford University
The Civil and Environmental Engineering department at Stanford University provides the following description for their program in Environmental Engineering and Science:
The Environmental Engineering and Science (EES) program focuses on the chemical and biological processes involved in water quality engineering, water and air pollution, remediation and hazardous substance control, human exposure to pollutants, environmental biotechnology, and environmental protection.
UC Berkeley
The College of Engineering at UC Berkeley defines Environmental Engineering Science, including the following:
This is a multidisciplinary field requiring an integration of physical, chemical and biological principles with engineering analysis for environmental protection and restoration. The program incorporates courses from many departments on campus to create a discipline that is rigorously based in science and engineering, while addressing a wide variety of environmental issues. Although an environmental engineering option exists within the civil engineering major, the engineering science curriculum provides a more broadly based foundation in the sciences than is possible in civil engineering
Massachusetts Institute of Technology
At MIT, the major is described in their curriculum, including the following:
The Bachelor of Science in Environmental Engineering Science emphasizes the fundamental physical, chemical, and biological processes necessary for understanding the interactions between man and the environment. Issues considered include the provision of clean and reliable water supplies, flood forecasting and protection, development of renewable and nonrenewable energy sources, causes and implications of climate change, and the impact of human activities on natural cycles.
University of Florida
The College of Engineering at UF defines Environmental Engineering Science as follows:
The broad undergraduate environmental engineering curriculum of EES has earned the department a ranking as a leading undergraduate program. The ABET accredited engineering bachelor's degree is comprehensively based on physical, chemical, and biological principles to solve environmental problems affecting air, land, and water resources. An advising scheme including select faculty, led by the undergraduate coordinator, guides each student through the program.
The program educational objectives of the EES program at the University of Florida are to produce engineering practitioners and graduate students who 3-5 years after graduation:
Continue to learn, develop and apply their knowledge and skills to identify, prevent, and solve environmental problems.
Have careers that benefit society as a result of their educational experiences in science, engineering analysis and design, as well as in their social and cultural studies.
Communicate and work effectively in all work settings including those that are multidisciplinary.
Lower division coursework
Lower division coursework in this field requires the student to take several laboratory-based classes in calculus-based physics, chemistry, biology, programming and analysis. This is intended to give the student background information in order to introduce them to the engineering fields and to prepare them for more technical information in their upper division coursework.
Upper division coursework
The upper division classes in Environmental Engineering Science prepares the student for work in the fields of engineering and science with coursework in subjects including the following:
Fluid mechanics
Mechanics of materials
Thermodynamics
Environmental engineering
Advanced math and statistics
Geology
Physical, organic and atmospheric chemistry
Biochemistry
Microbiology
Ecology
Electives
Process engineering
On this track, students are introduced to the fundamental reaction mechanisms in the field of chemical and biochemical engineering.
Resource engineering
For this track, students take classes introducing them to ways to conserve natural resources. This can include classes in water chemistry, sanitation, combustion, air pollution and radioactive waste management.
Geoengineering
This examines geoengineering in detail.
Ecology
This prepares students to use their engineering and scientific knowledge to address problems involving the interactions between plants, animals, and the biosphere.
Biology
This includes further education in microbial, molecular and cell biology. Classes can include cell biology, virology, microbial biology and plant biology.
Policy
This covers in more detail ways the environment can be protected through political means. This is done by introducing students to qualitative and quantitative tools in classes such as economics, sociology, political science and energy and resources.
Post graduation work
The multidisciplinary approach in Environmental Engineering Science gives the student expertise in technical fields related to their own personal interest. While some graduates use this major to go on to graduate school, students who choose to work often go into the fields of civil and environmental engineering, biotechnology, and research. Their background in math, programming and writing also gives them opportunities to pursue IT work and technical writing.
See also
Civil engineering
Environmental engineering
Environmental science
Sustainability
Green building
Sustainable engineering
Notes
References
"MIT Course Catalog: Department of Civil and Environmental Engineering." Massachusetts Institute of Technology. <http://web.mit.edu/catalogue/degre.engin.civil.shtml>.
2008-2009 Announcement. Brochure. Berkeley, 2008. Engineering Announcement 2008-2009. University of California, Berkeley. <https://web.archive.org/web/20081203005457/http://coe.berkeley.edu/students/EngAnn08.pdf>.
External links
Environmental Engineering and Science program at Stanford University
What people go on to do in Engineering Science at UC Berkeley
Curriculum at University of Florida
Curriculum at MIT
Curriculum at University of Illinois
Engineering disciplines
Civil engineering
Environmental science | Environmental engineering science | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,362 | [
"Chemical engineering",
"Construction",
"Civil engineering",
"nan",
"Environmental engineering"
] |
21,378,595 | https://en.wikipedia.org/wiki/SeHCAT | SeHCAT (23-seleno-25-homotaurocholic acid, selenium homocholic acid taurine, or tauroselcholic acid) is a drug used in a clinical test to diagnose bile acid malabsorption.
Development
SeHCAT is a taurine-conjugated bile acid analog which was synthesized for use as a radiopharmaceutical to investigate in vivo the enterohepatic circulation of bile salts. By incorporating the gamma-emitter 75Se into the SeHCAT molecule, the retention in the body or the loss of this compound into the feces could be studied easily using a standard gamma camera, available in most clinical nuclear medicine departments.
SeHCAT has been shown to be absorbed from the gut and excreted into the bile at the same rate as cholic acid, one of the major natural bile acids in humans. It undergoes secretion into the biliary tree, gallbladder and intestine in response to food, and is reabsorbed efficiently in the ileum, with kinetics similar to natural bile acids. It was soon shown to be the most convenient and accurate method available to assess and measure bile acid turnover in the intestine. SeHCAT testing was commercially developed by Amersham International Ltd (Amersham plc is now part of GE Healthcare Medical Diagnostics division) for clinical use to investigate malabsorption in patients with diarrhea. This test has replaced 14C-labeled glycocholic acid (or taurocholic acid) breath tests and fecal bile acid measurements, which now have no place in the routine clinical investigation of malabsorption.
Procedure
A capsule containing radiolabelled 75SeHCAT (with 370 kBq of Selenium-75 and less than 0.1 mg SeHCAT) is taken orally with water to ensure passage of the capsule into the gastrointestinal tract. The physical half-life of 75Se is approximately 118 days; activity is adjusted to a standard reference date.
Patients may be given instructions to fast prior to capsule administration; there is significant variation in clinical practice in this regard. The effective dose of radiation for an adult given 370 kBq of SeHCAT is 0.26 mSv. (For comparison, the radiation exposure from an abdominal CT scan is quoted at 5.3 mSv and annual background exposure in the UK 1-3 mSv.) Measurements were originally performed with a whole-body counter but are usually performed now with an uncollimated gamma camera. The patient is scanned supine or prone with anterior and posterior acquisition from head to thigh 1 to 3 hours after taking the capsule. Scanning is repeated after 7 days. Background values are subtracted and care must be taken to avoid external sources of radiation in a nuclear medicine department.
From these measurements, the percent retention of SeHCAT at 7 days is calculated. A 7-day SeHCAT retention value greater than 15% is considered to be normal, with values less than 15% signifying excessive bile acid loss, as found in bile acid malabsorption.
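As an illustration of the arithmetic only, the sketch below computes the 7-day retention from background-subtracted counts; the function name, the single-count workflow (clinics typically combine anterior and posterior acquisitions), and the decay-correction step are assumptions for this sketch, not a vendor protocol. The day-0 reference is decay-corrected over 7 days using the 118-day physical half-life of 75Se quoted above, so that only biological loss affects the result:

```python
import math

SE75_HALF_LIFE_DAYS = 118.0  # physical half-life of Se-75

def sehcat_retention_percent(day0_counts, day7_counts, background=0.0):
    """Illustrative 7-day SeHCAT retention (percent).

    Background is subtracted from both measurements, and the day-0
    reference is decay-corrected to day 7 so physical decay of Se-75
    is not mistaken for biological loss of the tracer.
    """
    decay_factor = math.exp(-math.log(2) * 7.0 / SE75_HALF_LIFE_DAYS)
    reference = (day0_counts - background) * decay_factor
    retained = day7_counts - background
    return 100.0 * retained / reference

# A result below the 15% threshold suggests bile acid malabsorption:
print(f"{sehcat_retention_percent(52000, 4300, background=300):.1f} %")  # ~8.1 %
```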
With more frequent measurements, it is possible to calculate SeHCAT retention whole-body half-life; this is not routinely measured in a clinical setting. A half-life of greater than 2.8 days has been quoted as normal.
Clinical use
The SeHCAT test is used to investigate patients with suspected bile acid malabsorption, who usually experience chronic diarrhea, often passing watery feces 5 to 10 times each day. When the ileum has been removed following surgery, or is inflamed in Crohn's disease, the 7-day SeHCAT retention usually is abnormal, and most of these patients will benefit from treatment with bile acid sequestrants. The enterohepatic circulation of bile acids is reduced in these patients with ileal abnormalities and, as the normal bile acid retention exceeds 95%, only a small degree of change is needed. Bile acid malabsorption can also be secondary to cholecystectomy, vagotomy and other disorders affecting intestinal motility or digestion such as radiation enteritis, celiac disease, and small intestinal bacterial overgrowth.
A similar picture of chronic diarrhea, an abnormal SeHCAT retention and a response to bile acid sequestrants, in the absence of other disorders of the intestine, is characteristic of idiopathic bile acid malabsorption – also called primary bile acid diarrhea. These patients are frequently misdiagnosed as having the irritable bowel syndrome, as clinicians fail to recognize the condition, do not think of performing a SeHCAT test, or do not have it available.
There have been at least 18 studies of the use of SeHCAT testing in diarrhea-predominant irritable bowel syndrome patients. When these data were combined, 32% of 1223 patients had a SeHCAT 7-day retention of less than 10%, and 80% of these reported a response to cholestyramine, a bile acid sequestrant.
References
External links
GE Healthcare SeHCAT site
Diagnostic gastroenterology
Radiopharmaceuticals
Gastroenterology
Bile acids
Cholanes
Organoselenium compounds
Selenium(−II) compounds
Selenoethers | SeHCAT | [
"Chemistry"
] | 1,102 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
22,826,719 | https://en.wikipedia.org/wiki/Biocell%20Center | Biocell Center is an international company specializing in the cryopreservation and private banking of amniotic fluid stem cells. The company is headquartered in Italy with several international locations and is involved with numerous partnerships and research studies of amniotic fluid stem cells.
In 2008, Biocell Center opened the first amniotic fluid stem cell bank in the world, and in 2009 it opened the first amniotic fluid stem cell bank in the United States for private storage of stem cells obtained during genetic amniocentesis.
Biocell Center's Italian headquarters in Busto Arsizio and Milan are managed by company president Marco Reguzzoni and scientific director Giuseppe Simoni; additional subsidiaries are located in Lugano (Switzerland) and in Natick, Massachusetts (USA), in the Boston area.
Biocell Center is currently collaborating with Harvard University and the Caritas Christi Health Care hospital network on amniotic stem cell research.
References
MassDevice
PRNewswire
WLA
Euro Stem Cell
Swiss news
Smart Brief
Associated press
Stem cell research
Biotechnology companies of Italy
Italian brands | Biocell Center | [
"Chemistry",
"Biology"
] | 225 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
22,830,139 | https://en.wikipedia.org/wiki/Light%20non-aqueous%20phase%20liquid | A light non-aqueous phase liquid (LNAPL) is a groundwater contaminant that is not soluble in water and has a lower density than water, in contrast to a DNAPL which has a higher density than water. Once a LNAPL pollution infiltrates the ground, it will stop at the depth of the water table because of its positive buoyancy. Efforts to locate and remove LNAPLs are relatively less expensive and easier than for DNAPLs because LNAPLs float on top of the water table.
Examples of LNAPLs are benzene, toluene, xylene, and other hydrocarbons.
See also
DNAPL
LNAPL transmissivity
External links
LNAPL Definition from the USGS
Water pollution
Water chemistry
Hydrogeology | Light non-aqueous phase liquid | [
"Chemistry",
"Environmental_science"
] | 165 | [
"Hydrology",
"Hydrogeology",
"nan",
"Water pollution"
] |
554,087 | https://en.wikipedia.org/wiki/Causal%20system | In control theory, a causal system (also known as a physical or nonanticipative system) is a system where the output depends on past and
current inputs but not future inputs—i.e., the output $y(t_0)$ depends only on the input $x(t)$ for values of $t \le t_0$.
The idea that the output of a function at any time depends only on past and present values of input is defined by the property commonly referred to as causality. A system that has some dependence on input values from the future (in addition to possible dependence on past or current input values) is termed a non-causal or acausal system, and a system that depends solely on future input values is an anticausal system. Note that some authors have defined an anticausal system as one that depends solely on future and present input values or, more simply, as a system that does not depend on past input values.
Classically, nature or physical reality has been considered to be a causal system. Physics involving special relativity or general relativity require more careful definitions of causality, as described elaborately in Causality (physics).
The causality of systems also plays an important role in digital signal processing, where filters are constructed so that they are causal, sometimes by altering a non-causal formulation to remove the lack of causality so that it is realizable. For more information, see causal filter.
For a causal system, the impulse response of the system must use only the present and past values of the input to determine the output. This requirement is a necessary and sufficient condition for a system to be causal, regardless of linearity. Note that similar rules apply to either discrete or continuous cases. By this definition of requiring no future input values, systems must be causal to process signals in real time.
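A minimal discrete-time sketch (plain Python; the function names are illustrative, not from any particular library) makes the distinction concrete: a trailing moving average uses only the current and past samples, so it is causal and can run sample by sample in real time, while a central moving average needs future samples and can only be computed offline:

```python
def trailing_moving_average(x, n=3):
    """Causal: y[k] uses only x[k], x[k-1], ..., x[k-n+1]."""
    out = []
    for k in range(len(x)):
        window = x[max(0, k - n + 1):k + 1]   # past and present only
        out.append(sum(window) / len(window))
    return out

def central_moving_average(x, n=3):
    """Non-causal: y[k] also uses future samples x[k+1], ..., x[k+n//2]."""
    h = n // 2
    out = []
    for k in range(len(x)):
        window = x[max(0, k - h):k + h + 1]   # reaches into the future
        out.append(sum(window) / len(window))
    return out

x = [0.0, 1.0, 4.0, 9.0, 16.0]
print(trailing_moving_average(x))  # realizable in real time
print(central_moving_average(x))   # needs x[k+1]: offline only
```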
Mathematical definitions
Definition 1: A system mapping $x$ to $y$ is causal if and only if, for any pair of input signals $x_1(t)$, $x_2(t)$ and any choice of $t_0$, such that
$x_1(t) = x_2(t) \quad \forall\ t \le t_0,$
the corresponding outputs satisfy
$y_1(t) = y_2(t) \quad \forall\ t \le t_0.$
Definition 2: Suppose $h(t)$ is the impulse response of any system described by a linear constant coefficient differential equation. The system is causal if and only if
$h(t) = 0 \quad \forall\ t < 0;$
otherwise it is non-causal.
Examples
The following examples are for systems with an input $x$ and output $y$.
Examples of causal systems
Memoryless system
Memory-enabled system
Autoregressive filter
Examples of non-causal (acausal) systems
Central moving average
Examples of anti-causal systems
Look-ahead
Additional Examples of Causal Systems
Linear Time-Invariant (LTI) System
Moving Average Filter
Additional Examples of Non-Causal (Acausal) Systems
Smoothing Filter
Ideal Low-Pass Filter
Additional Examples of Anti-Causal Systems
Future Input Dependence
See also
Causal filter
Causal model
References
Classical control theory
Digital signal processing
Systems theory
Dynamical systems | Causal system | [
"Physics",
"Mathematics"
] | 568 | [
"Mechanics",
"Dynamical systems"
] |
554,248 | https://en.wikipedia.org/wiki/Nonholonomic%20system | A nonholonomic system in physics and mathematics is a physical system whose state depends on the path taken in order to achieve it. Such a system is described by a set of parameters subject to differential constraints and non-linear constraints, such that when the system evolves along a path in its parameter space (the parameters varying continuously in values) but finally returns to the original set of parameter values at the start of the path, the system itself may not have returned to its original state. Nonholonomic mechanics is an autonomous division of Newtonian mechanics.
Details
More precisely, a nonholonomic system, also called an anholonomic system, is one in which there is a continuous closed circuit of the governing parameters, by which the system may be transformed from any given state to any other state. Because the final state of the system depends on the intermediate values of its trajectory through parameter space, the system cannot be represented by a conservative potential function as can, for example, the inverse square law of the gravitational force. This latter is an example of a holonomic system: path integrals in the system depend only upon the initial and final states of the system (positions in the potential), completely independent of the trajectory of transition between those states. The system is therefore said to be integrable, while the nonholonomic system is said to be nonintegrable. When a path integral is computed in a nonholonomic system, the value represents a deviation within some range of admissible values and this deviation is said to be an anholonomy produced by the specific path under consideration. This term was introduced by Heinrich Hertz in 1894.
The general character of anholonomic systems is that of implicitly dependent parameters. If the implicit dependency can be removed, for example by raising the dimension of the space, thereby adding at least one additional parameter, the system is not truly nonholonomic, but is simply incompletely modeled by the lower-dimensional space. In contrast, if the system intrinsically cannot be represented by independent coordinates (parameters), then it is truly an anholonomic system. Some authors make much of this by creating a distinction between so-called internal and external states of the system, but in truth, all parameters are necessary to characterize the system, be they representative of "internal" or "external" processes, so the distinction is in fact artificial. However, there is a very real and irreconcilable difference between physical systems that obey conservation principles and those that do not. In the case of parallel transport on a sphere, the distinction is clear: a Riemannian manifold has a metric fundamentally distinct from that of a Euclidean space. For parallel transport on a sphere, the implicit dependence is intrinsic to the non-euclidean metric. The surface of a sphere is a two-dimensional space. By raising the dimension, we can more clearly see the nature of the metric, but it is still fundamentally a two-dimensional space with parameters irretrievably entwined in dependency by the Riemannian metric.
By contrast, one can consider an X-Y plotter as an example of a holonomic system where the state of the system's mechanical components will have a single fixed configuration for any given position of the plotter pen. If the pen relocates between positions 0,0 and 3,3, the mechanism's gears will have the same final positions regardless of whether the relocation happens by the mechanism first incrementing 3 units on the x-axis and then 3 units on the y-axis, incrementing the Y-axis position first, or operating any other sequence of position-changes that result in a final position of 3,3. Since the final state of the machine is the same regardless of the path taken by the plotter-pen to get to its new position, the end result can be said not to be path-dependent. If we substitute a turtle plotter, the process of moving the pen from 0,0 to 3,3 can result in the gears of the robot's mechanism finishing in different positions depending on the path taken to move between the two positions. See this very similar gantry crane example for a mathematical explanation of why such a system is holonomic.
History
N. M. Ferrers first suggested extending the equations of motion with nonholonomic constraints in 1871.
He introduced the expressions for Cartesian velocities in terms of generalized velocities.
In 1877, E. Routh wrote the equations with the Lagrange multipliers. In the third edition of his book for linear non-holonomic constraints of rigid bodies, he introduced the form with multipliers, which is now called the Lagrange equations of the second kind with multipliers. The terms the holonomic and nonholonomic systems were introduced by Heinrich Hertz in 1894.
In 1897, S. A. Chaplygin first suggested forming the equations of motion without Lagrange multipliers.
Under certain linear constraints, he introduced on the left-hand side of the equations of motion a group of extra terms of the Lagrange-operator type.
The remaining extra terms characterise the nonholonomicity of the system, and they become zero when the given constraints are integrable.
In 1901, P. V. Voronets generalised Chaplygin's work to the cases of noncyclic holonomic coordinates and of nonstationary constraints.
Constraints
Consider a system of $N$ particles with positions $\mathbf{r}_i$ for $i \in \{1, \ldots, N\}$ with respect to a given reference frame. In classical mechanics, any constraint that is not expressible as
$f(\mathbf{r}_1, \mathbf{r}_2, \ldots, t) = 0$
is a non-holonomic constraint. In other words, a nonholonomic constraint is nonintegrable and in Pfaffian form:
$\sum_{s=1}^{n} a_{s,\alpha}\, dq_s + a_\alpha\, dt = 0 \qquad (\alpha = 1, 2, \ldots, k)$
where:
$n$ is the number of coordinates.
$k$ is the number of constraint equations.
$q_s$ are coordinates.
$a_{s,\alpha}$, $a_\alpha$ are coefficients.
In order for the above form to be nonholonomic, it is also required that the left hand side neither be a total differential nor be able to be converted into one, perhaps via an integrating factor.
For virtual displacements only, the differential form of the constraint is
$\sum_{s=1}^{n} a_{s,\alpha}\, \delta q_s = 0.$
It is not necessary for all non-holonomic constraints to take this form; in fact, they may involve higher derivatives or inequalities. A classical example of an inequality constraint is that of a particle placed on the surface of a sphere, yet allowed to fall off it:
$r \ge R$
where:
$r$ is the distance of the particle from the centre of the sphere.
$R$ is the radius of the sphere.
Examples
Rolling wheel
A wheel (sometimes visualized as a unicycle or a rolling coin) is a nonholonomic system.
Layperson's explanation
Consider the wheel of a bicycle that is parked in a certain place (on the ground). Initially the inflation valve is at a certain position on the wheel. If the bicycle is ridden around, and then parked in exactly the same place, the valve will almost certainly not be in the same position as before. Its new position depends on the path taken. If the wheel were holonomic, then the valve stem would always end up in the same position as long as the wheel were always rolled back to the same location on the Earth. Clearly, however, this is not the case, so the system is nonholonomic.
Mathematical explanation
It is possible to model the wheel mathematically with a system of constraint equations, and then prove that that system is nonholonomic.
First, we define the configuration space. The wheel can change its state in three ways: having a different rotation about its axle, having a different steering angle, and being at a different location. We may say that $\theta$ is the rotation about the axle, $\varphi$ is the steering angle relative to the $x$-axis, and $x$ and $y$ define the spatial position. Thus, the configuration space is:
$\mathbf{q} = \begin{bmatrix} x & y & \theta & \varphi \end{bmatrix}^\mathrm{T}$
We must now relate these variables to each other. We notice that as the wheel changes its rotation, it changes its position. The change in rotation and position implying velocities must be present, we attempt to relate angular velocity and steering angle to linear velocities by taking simple time-derivatives of the appropriate terms:
$\dot{x} = r \dot{\theta} \cos\varphi, \qquad \dot{y} = r \dot{\theta} \sin\varphi$
The velocity in the $x$ direction is equal to the angular velocity times the radius times the cosine of the steering angle, and the $y$ velocity is similar. Now we do some algebraic manipulation to transform the equation to Pfaffian form so it is possible to test whether it is holonomic, starting with:
$\dot{x} - r \dot{\theta} \cos\varphi = 0$
Then, let's separate the variables from their coefficients (left side of equation, derived from above). We also realize that we can multiply all terms by $dt$ so we end up with only the differentials (right side of equation):
$dx - r \cos\varphi \, d\theta = 0$
The right side of the equation is now in Pfaffian form:
$\sum_{s=1}^{n} A_s \, du_s = 0; \qquad u_1 = x,\ u_2 = \theta,\ u_3 = \varphi$
We now use the universal test for holonomic constraints. If this system were holonomic, we might have to do up to eight tests. However, we can use mathematical intuition to try our best to prove that the system is nonholonomic on the first test. Considering the test equation is:
$A_1 \left( \frac{\partial A_3}{\partial u_2} - \frac{\partial A_2}{\partial u_3} \right) + A_2 \left( \frac{\partial A_1}{\partial u_3} - \frac{\partial A_3}{\partial u_1} \right) + A_3 \left( \frac{\partial A_2}{\partial u_1} - \frac{\partial A_1}{\partial u_2} \right) = 0$
we can see that if any of the terms $A_1$, $A_2$, or $A_3$ were zero, then that part of the test equation would be trivial to solve and would be equal to zero. Therefore, it is often best practice to have the first test equation have as many non-zero terms as possible to maximize the chance of the sum of them not equaling zero. Therefore, we choose:
$A_1 = 1, \qquad A_2 = -r \cos\varphi, \qquad A_3 = 0$
We substitute into our test equation:
$1 \cdot \left( \frac{\partial}{\partial \theta}(0) - \frac{\partial}{\partial \varphi}(-r \cos\varphi) \right) + (-r \cos\varphi) \left( \frac{\partial}{\partial \varphi}(1) - \frac{\partial}{\partial x}(0) \right) + 0 \cdot \left( \frac{\partial}{\partial x}(-r \cos\varphi) - \frac{\partial}{\partial \theta}(1) \right) = 0$
and simplify:
$-r \sin\varphi = 0$
We can easily see that this system, as described, is nonholonomic, because $\sin\varphi$ is not always equal to zero.
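The same test can be checked symbolically. The sketch below is an illustrative SymPy computation (the helper name and variable choices are mine, not from the article's sources) of $\mathbf{A} \cdot (\nabla \times \mathbf{A})$ for each constraint, treating each one in the three variables it involves; a nonzero result means the constraint is not integrable:

```python
import sympy as sp

x, y, theta, phi, r = sp.symbols('x y theta phi r')

def integrability_test(A, u):
    """A . (curl A) for a Pfaffian constraint A1*du1 + A2*du2 + A3*du3 = 0.

    The constraint can be holonomic (integrable) only if this vanishes.
    """
    (A1, A2, A3), (u1, u2, u3) = A, u
    return sp.simplify(
        A1 * (sp.diff(A3, u2) - sp.diff(A2, u3))
        + A2 * (sp.diff(A1, u3) - sp.diff(A3, u1))
        + A3 * (sp.diff(A2, u1) - sp.diff(A1, u2)))

# dx - r*cos(phi) dtheta = 0, in the variables (x, theta, phi):
print(integrability_test((1, -r * sp.cos(phi), 0), (x, theta, phi)))  # -r*sin(phi)

# dy - r*sin(phi) dtheta = 0, in the variables (y, theta, phi):
print(integrability_test((1, -r * sp.sin(phi), 0), (y, theta, phi)))  # r*cos(phi)
```

Neither expression vanishes identically, in agreement with the conclusion above.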
Additional conclusions
We have completed our proof that the system is nonholonomic, but our test equation gave us some insights about whether the system, if further constrained, could be holonomic. Many times test equations will return a result like $-1 = 0$, implying the system could never be constrained to be holonomic without radically altering the system, but in our result we can see that $-r \sin\varphi$ can be equal to zero, in two different ways:
$r$, the radius of the wheel, can be zero. This is not helpful as the system in practice would lose all of its degrees of freedom.
$\sin\varphi$ can be zero by setting $\varphi$ equal to zero. This implies that if the wheel were not allowed to turn and had to move only in a straight line at all times, it would be a holonomic system.
There is one thing that we have not yet considered however, that to find all such modifications for a system, one must perform all eight test equations (four from each constraint equation) and collect all the failures to gather all requirements to make the system holonomic, if possible. In this system, out of the seven additional test equations, an additional case presents itself:
$r \cos\varphi = 0$
This does not pose much difficulty, however, as adding the equations and dividing by $r$ results in:
$\cos\varphi - \sin\varphi = 0$
which with some simple algebraic manipulation becomes:
$\tan\varphi = 1$
which has the solution $\varphi = \frac{\pi}{4} + n\pi,\ n \in \mathbb{Z}$ (from $\tan\varphi = 1$).
Refer back to the layman's explanation above where it is said, "[The valve stem's] new position depends on the path taken. If the wheel were holonomic, then the valve stem would always end up in the same position as long as the wheel were always rolled back to the same location on the Earth. Clearly, however, this is not the case, so the system is nonholonomic." However it is easy to visualize that if the wheel were only allowed to roll in a perfectly straight line and back, the valve stem would end up in the same position! In fact, moving parallel to the given angle of $\frac{\pi}{4}$ is not actually necessary in the real world as the orientation of the coordinate system itself is arbitrary. The system can become holonomic if the wheel moves only in a straight line at any fixed angle relative to a given reference. Thus, we have not only proved that the original system is nonholonomic, but we also were able to find a restriction that can be added to the system to make it holonomic.
However, there is something mathematically special about the restriction of $\varphi$ to a constant for the system to make it holonomic, as in a Cartesian grid. Combining the two equations and eliminating $d\theta$, we indeed see that $dy = \tan\varphi\, dx$ and therefore one of those two coordinates is completely redundant. We already know that the steering angle $\varphi$ is a constant, so that means the holonomic system here needs to only have a configuration space of $\mathbf{q} = \begin{bmatrix} x & \theta \end{bmatrix}^\mathrm{T}$. As discussed here, a system that is modellable by a Pfaffian constraint must be holonomic if the configuration space consists of two or fewer variables. By modifying our original system to restrict it to have only two degrees of freedom and thus requiring only two variables to be described, and assuming it can be described in Pfaffian form (which in this example we already know is true), we are assured that it is holonomic.
Rolling sphere
This example is an extension of the 'rolling wheel' problem considered above.
Consider a three-dimensional orthogonal Cartesian coordinate frame, for example, a level table top with a point marked on it for the origin, and the x and y axes laid out with pencil lines. Take a sphere of unit radius, for example, a ping-pong ball, and mark one point B in blue. Corresponding to this point is a diameter of the sphere, and the plane orthogonal to this diameter positioned at the center C of the sphere defines a great circle called the equator associated with point B. On this equator, select another point R and mark it in red. Position the sphere on the z = 0 plane such that the point B is coincident with the origin, C is located at x = 0, y = 0, z = 1, and R is located at x = 1, y = 0, and z = 1, i.e. R extends in the direction of the positive x axis. This is the initial or reference orientation of the sphere.
The sphere may now be rolled along any continuous closed path in the z = 0 plane, not necessarily a simply connected path, in such a way that it neither slips nor twists, so that C returns to x = 0, y = 0, z = 1. In general, point B is no longer coincident with the origin, and point R no longer extends along the positive x axis. In fact, by selection of a suitable path, the sphere may be re-oriented from the initial orientation to any possible orientation of the sphere with C located at x = 0, y = 0, z = 1. The system is therefore nonholonomic. The anholonomy may be represented by the doubly unique quaternion (q and −q) which, when applied to the points that represent the sphere, carries points B and R to their new positions.
Foucault pendulum
An additional example of a nonholonomic system is the Foucault pendulum. In the local coordinate frame the pendulum is swinging in a vertical plane with a particular orientation with respect to geographic north at the outset of the path. The implicit trajectory of the system is the line of latitude on the Earth where the pendulum is located. Even though the pendulum is stationary in the Earth frame, it is moving in a frame referred to the Sun and rotating in synchrony with the Earth's rate of revolution, so that the only apparent motion of the pendulum plane is that caused by the rotation of the Earth. This latter frame is considered to be an inertial reference frame, although it too is non-inertial in more subtle ways. The Earth frame is well known to be non-inertial, a fact made perceivable by the apparent presence of centrifugal forces and Coriolis forces.
Motion along the line of latitude is parameterized by the passage of time, and the Foucault pendulum's plane of oscillation appears to rotate about the local vertical axis as time passes. The angle of rotation of this plane at a time t with respect to the initial orientation is the anholonomy of the system. The anholonomy induced by a complete circuit of latitude is proportional to the solid angle subtended by that circle of latitude. The path need not be constrained to latitude circles. For example, the pendulum might be mounted in an airplane. The anholonomy is still proportional to the solid angle subtended by the path, which may now be quite irregular. The Foucault pendulum is a physical example of parallel transport.
Linear polarized light in an optical fiber
Take a length of optical fiber, say three meters, and lay it out in an absolutely straight line. When a vertically polarized beam is introduced at one end, it emerges from the other end, still polarized in the vertical direction. Mark the top of the fiber with a stripe, corresponding with the orientation of the vertical polarization.
Now, coil the fiber tightly around a cylinder ten centimeters in diameter. The path of the fiber now describes a helix which, like the circle, has constant curvature. The helix also has the interesting property of having constant torsion. As such the result is a gradual rotation of the fiber about the fiber's axis as the fiber's centerline progresses along the helix. Correspondingly, the stripe also twists about the axis of the helix.
When linearly polarized light is again introduced at one end, with the orientation of the polarization aligned with the stripe, it will, in general, emerge as linear polarized light aligned not with the stripe, but at some fixed angle to the stripe, dependent upon the length of the fiber, and the pitch and radius of the helix. This system is also nonholonomic, for we can easily coil the fiber down in a second helix and align the ends, returning the light to its point of origin. The anholonomy is therefore represented by the deviation of the angle of polarization with each circuit of the fiber. By suitable adjustment of the parameters, it is clear that any possible angular state can be produced.
Robotics
In robotics, nonholonomic systems have been particularly studied in the scope of motion planning and feedback linearization for mobile robots.
See also
Holonomic constraint
Bicycle and motorcycle dynamics
Falling cat problem
Goryachev–Chaplygin top
Parallel parking problem
Pfaffian constraint
Udwadia–Kalaba equation
Lie group integrator
References
Algebraic topology
Differential geometry
Differential topology
Dynamical systems | Nonholonomic system | [
"Physics",
"Mathematics"
] | 3,755 | [
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Mechanics",
"Differential topology",
"Dynamical systems"
] |
555,119 | https://en.wikipedia.org/wiki/Displacement%20current | In electromagnetism, displacement current density is the quantity appearing in Maxwell's equations that is defined in terms of the rate of change of , the electric displacement field. Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field. In physical materials (as opposed to vacuum), there is also a contribution from the slight motion of charges bound in atoms, called dielectric polarization.
The idea was conceived by James Clerk Maxwell in his 1861 paper On Physical Lines of Force, Part III in connection with the displacement of electric particles in a dielectric medium. Maxwell added displacement current to the electric current term in Ampère's circuital law. In his 1865 paper A Dynamical Theory of the Electromagnetic Field Maxwell used this amended version of Ampère's circuital law to derive the electromagnetic wave equation. This derivation is now generally accepted as a historical landmark in physics by virtue of uniting electricity, magnetism and optics into one single unified theory. The displacement current term is now seen as a crucial addition that completed Maxwell's equations and is necessary to explain many phenomena, most particularly the existence of electromagnetic waves.
Explanation
The electric displacement field is defined as:
$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}$
where:
$\varepsilon_0$ is the permittivity of free space;
$\mathbf{E}$ is the electric field intensity; and
$\mathbf{P}$ is the polarization of the medium.
Differentiating this equation with respect to time defines the displacement current density, which therefore has two components in a dielectric (see also the "displacement current" section of the article "current density"):
$\mathbf{J}_\mathrm{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}$
The first term on the right hand side is present in material media and in free space. It doesn't necessarily come from any actual movement of charge, but it does have an associated magnetic field, just as a current does due to charge motion. Some authors apply the name displacement current to the first term by itself.
The second term on the right hand side, called polarization current density, comes from the change in polarization of the individual molecules of the dielectric material. Polarization results when, under the influence of an applied electric field, the charges in molecules have moved from a position of exact cancellation. The positive and negative charges in molecules separate, causing an increase in the state of polarization $\mathbf{P}$. A changing state of polarization corresponds to charge movement and so is equivalent to a current, hence the term "polarization current". Thus,
$\mathbf{J}_\mathrm{P} = \frac{\partial \mathbf{P}}{\partial t}$
This polarization is the displacement current as it was originally conceived by Maxwell. Maxwell made no special treatment of the vacuum, treating it as a material medium. For Maxwell, the effect of $\mathbf{P}$ was simply to change the relative permittivity $\varepsilon_r$ in the relation $\mathbf{D} = \varepsilon_r \varepsilon_0 \mathbf{E}$.
The modern justification of displacement current is explained below.
Isotropic dielectric case
In the case of a very simple dielectric material the constitutive relation holds:
$\mathbf{D} = \varepsilon \mathbf{E}$
where the permittivity $\varepsilon$ is the product of:
$\varepsilon_0$, the permittivity of free space, or the electric constant; and
$\varepsilon_r$, the relative permittivity of the dielectric.
In the equation above, the use of $\varepsilon$ accounts for the polarization (if any) of the dielectric material.
The scalar value of displacement current may also be expressed in terms of electric flux:
$I_\mathrm{D} = \varepsilon \frac{\partial \Phi_E}{\partial t}$
The forms in terms of scalar $\varepsilon$ are correct only for linear isotropic materials. For linear non-isotropic materials, $\varepsilon$ becomes a matrix; even more generally, $\varepsilon$ may be replaced by a tensor, which may depend upon the electric field itself, or may exhibit frequency dependence (hence dispersion).
For a linear isotropic dielectric, the polarization is given by:
$\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E}$
where $\chi_e$ is known as the susceptibility of the dielectric to electric fields. Note that
$\varepsilon = \varepsilon_r \varepsilon_0 = (1 + \chi_e)\, \varepsilon_0.$
Necessity
Some implications of the displacement current follow, which agree with experimental observation, and with the requirements of logical consistency for the theory of electromagnetism.
Generalizing Ampère's circuital law
Current in capacitors
An example illustrating the need for the displacement current arises in connection with capacitors with no medium between the plates. Consider the charging capacitor in the figure. The capacitor is in a circuit that causes equal and opposite charges to appear on the left plate and the right plate, charging the capacitor and increasing the electric field between its plates. No actual charge is transported through the vacuum between its plates. Nonetheless, a magnetic field exists between the plates as though a current were present there as well. One explanation is that a displacement current "flows" in the vacuum, and this current produces the magnetic field in the region between the plates according to Ampère's law:
$\oint_C \mathbf{B} \cdot d\boldsymbol{\ell} = \mu_0 I_\mathrm{D}$
where
$\oint_C$ is the closed line integral around some closed curve $C$;
$\mathbf{B}$ is the magnetic field measured in teslas;
$\cdot$ is the vector dot product;
$d\boldsymbol{\ell}$ is an infinitesimal vector line element along the curve $C$, that is, a vector with magnitude equal to the length element of $C$, and direction given by the tangent to the curve $C$;
$\mu_0$ is the magnetic constant, also called the permeability of free space; and
$I_\mathrm{D}$ is the net displacement current that passes through a small surface bounded by the curve $C$.
The magnetic field between the plates is the same as that outside the plates, so the displacement current must be the same as the conduction current in the wires, that is,
$I_\mathrm{D} = I,$
which extends the notion of current beyond a mere transport of charge.
Next, this displacement current is related to the charging of the capacitor. Consider the current in the imaginary cylindrical surface shown surrounding the left plate. A current, say $I$, passes outward through the left surface $L$ of the cylinder, but no conduction current (no transport of real charges) crosses the right surface $R$. Notice that the electric field $E$ between the plates increases as the capacitor charges. That is, in a manner described by Gauss's law, assuming no dielectric between the plates:
$Q(t) = \varepsilon_0 \oint_S \mathbf{E}(t) \cdot d\mathbf{S}$
where $S$ refers to the imaginary cylindrical surface. Assuming a parallel plate capacitor with uniform electric field, and neglecting fringing effects around the edges of the plates, according to the charge conservation equation
$-I = \frac{dQ}{dt} = \varepsilon_0 \oint_S \frac{\partial \mathbf{E}}{\partial t} \cdot d\mathbf{S} = -\varepsilon_0 \frac{\partial E}{\partial t} A$
where the first term has a negative sign because charge leaves through surface $L$ (the charge is decreasing), the last term has a negative sign because the unit vector of surface $R$ points from left to right while the direction of the electric field is from right to left, and $A$ is the area of the surface $R$. The electric field at surface $L$ is zero because surface $L$ is outside the capacitor. Under the assumption of a uniform electric field distribution inside the capacitor, the displacement current density $J_\mathrm{D}$ is found by dividing by the area of the surface:
$J_\mathrm{D} = \frac{I_\mathrm{D}}{A} = \frac{I}{A} = \varepsilon_0 \frac{\partial E}{\partial t}$
where $I$ is the current leaving the cylindrical surface (which must equal $I_\mathrm{D}$) and $J_\mathrm{D}$ is the flow of charge per unit area into the cylindrical surface through the face $R$.
Combining these results, the magnetic field is found using the integral form of Ampère's law with an arbitrary choice of contour provided the displacement current density term is added to the conduction current density (the Ampère-Maxwell equation):
$\oint_{\partial S} \mathbf{B} \cdot d\boldsymbol{\ell} = \mu_0 \int_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot d\mathbf{S}$
This equation says that the integral of the magnetic field around the edge $\partial S$ of a surface is equal to the integrated current through any surface with the same edge, plus the displacement current term through whichever surface.
As depicted in the figure to the right, the current crossing surface $S_1$ is entirely conduction current. Applying the Ampère-Maxwell equation to surface $S_1$ yields:
$B = \frac{\mu_0 I}{2 \pi r}$
However, the current crossing surface $S_2$ is entirely displacement current. Applying this law to surface $S_2$, which is bounded by exactly the same curve $\partial S$, but lies between the plates, produces:
$B = \frac{\mu_0 I_\mathrm{D}}{2 \pi r}$
Any surface that intersects the wire has current passing through it so Ampère's law gives the correct magnetic field. However a second surface bounded by the same edge could be drawn passing between the capacitor plates, therefore having no current passing through it. Without the displacement current term Ampere's law would give zero magnetic field for this surface. Therefore, without the displacement current term Ampere's law gives inconsistent results, the magnetic field would depend on the surface chosen for integration. Thus the displacement current term is necessary as a second source term which gives the correct magnetic field when the surface of integration passes between the capacitor plates. Because the current is increasing the charge on the capacitor's plates, the electric field between the plates is increasing, and the rate of change of electric field gives the correct value for the field found above.
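A short numeric sketch (the values are chosen arbitrarily for illustration) confirms the equality: for a vacuum-gap parallel-plate capacitor, the displacement current $\varepsilon_0 A\, \partial E / \partial t$ through the gap equals the conduction current $C\, dV/dt$ charging the plates:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

# Illustrative parallel-plate capacitor (uniform field, fringing neglected)
A = 1e-2       # plate area, m^2
d = 1e-3       # plate separation, m
dV_dt = 1e3    # rate of change of plate voltage, V/s

C = EPS0 * A / d                    # gap capacitance
I_conduction = C * dV_dt            # current charging the plates

dE_dt = dV_dt / d                   # uniform field E = V/d between the plates
I_displacement = EPS0 * dE_dt * A   # I_D = eps0 * (dE/dt) * A

print(I_conduction, I_displacement)  # both ~8.85e-08 A: the currents match
```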
Mathematical formulation
In a more mathematical vein, the same results can be obtained from the underlying differential equations. Consider for simplicity a non-magnetic medium where the relative magnetic permeability is unity, and the complication of magnetization current (bound current) is absent, so that $\mathbf{B} = \mu_0 \mathbf{H}$ and $\mathbf{J} = \mathbf{J}_\mathrm{f}$.
The current leaving a volume must equal the rate of decrease of charge in a volume. In differential form this continuity equation becomes:
$\nabla \cdot \mathbf{J}_\mathrm{f} = -\frac{\partial \rho_\mathrm{f}}{\partial t}$
where the left side is the divergence of the free current density and the right side is the rate of decrease of the free charge density. However, Ampère's law in its original form states:
$\nabla \times \mathbf{H} = \mathbf{J}_\mathrm{f}$
which implies that the divergence of the current term vanishes, contradicting the continuity equation. (Vanishing of the divergence is a result of the mathematical identity that states the divergence of a curl is always zero.) This conflict is removed by addition of the displacement current, as then:
$\nabla \times \mathbf{H} = \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t}$
and
$\nabla \cdot (\nabla \times \mathbf{H}) = 0 = \nabla \cdot \mathbf{J}_\mathrm{f} + \frac{\partial}{\partial t} \nabla \cdot \mathbf{D}$
which is in agreement with the continuity equation because of Gauss's law:
$\nabla \cdot \mathbf{D} = \rho_\mathrm{f}$
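The identity invoked here, that the divergence of a curl always vanishes, can be verified symbolically; the following is a small illustrative SymPy check (the component names are arbitrary):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx, By, Bz = [sp.Function(name)(x, y, z) for name in ('Bx', 'By', 'Bz')]

# curl B in Cartesian components
curl = (sp.diff(Bz, y) - sp.diff(By, z),
        sp.diff(Bx, z) - sp.diff(Bz, x),
        sp.diff(By, x) - sp.diff(Bx, y))

# div(curl B): the mixed partial derivatives cancel pairwise
div_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)
print(sp.simplify(div_curl))  # 0, for any smooth field
```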
Wave propagation
The added displacement current also leads to wave propagation by taking the curl of the equation for the magnetic field. In vacuum the displacement current density takes the form $\mathbf{J}_\mathrm{D} = \varepsilon_0\, \partial \mathbf{E} / \partial t$.
Substituting this form for $\mathbf{J}_\mathrm{D}$ into Ampère's law, and assuming there is no bound or free current density contributing to $\mathbf{J}$:
$\nabla \times \mathbf{B} = \mu_0 \mathbf{J}_\mathrm{D} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$
with the result:
$\nabla \times (\nabla \times \mathbf{B}) = \mu_0 \varepsilon_0 \frac{\partial}{\partial t} (\nabla \times \mathbf{E})$
However,
$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}$
leading to the wave equation:
$\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2}$
where use is made of the vector identity that holds for any vector field $\mathbf{V}(\mathbf{r}, t)$:
$\nabla \times (\nabla \times \mathbf{V}) = \nabla (\nabla \cdot \mathbf{V}) - \nabla^2 \mathbf{V}$
and the fact that the divergence of the magnetic field is zero. An identical wave equation can be found for the electric field by taking the curl:
$\nabla \times (\nabla \times \mathbf{E}) = -\frac{\partial}{\partial t} \nabla \times \mathbf{B} = -\mu_0 \frac{\partial}{\partial t} \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)$
If $\mathbf{J}$, $\mathbf{P}$, and $\rho$ are zero, the result is:
$\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}$
The electric field can be expressed in the general form:
$\mathbf{E} = -\nabla \varphi - \frac{\partial \mathbf{A}}{\partial t}$
where $\varphi$ is the electric potential (which can be chosen to satisfy Poisson's equation) and $\mathbf{A}$ is a vector potential (i.e. magnetic vector potential, not to be confused with surface area, as $A$ is denoted elsewhere). The $\nabla \varphi$ component on the right hand side is the Gauss's law component, and this is the component that is relevant to the conservation of charge argument above. The second term on the right-hand side, $\partial \mathbf{A} / \partial t$, is the one relevant to the electromagnetic wave equation, because it is the term that contributes to the curl of $\mathbf{E}$. Because of the vector identity that says the curl of a gradient is zero, $\nabla \varphi$ does not contribute to $\nabla \times \mathbf{E}$.
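The wave equations above imply a propagation speed of $1/\sqrt{\mu_0 \varepsilon_0}$. A tiny numeric sketch (the constants are the standard SI values) reproduces the speed of light, the coincidence that led Maxwell to his unification:

```python
import math

MU0 = 1.25663706212e-6   # permeability of free space, H/m
EPS0 = 8.8541878128e-12  # permittivity of free space, F/m

c = 1.0 / math.sqrt(MU0 * EPS0)
print(f"{c:.6e} m/s")  # ~2.998e8 m/s: electromagnetic waves travel at light speed
```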
History and interpretation
Maxwell's displacement current was postulated in part III of his 1861 paper 'On Physical Lines of Force'. Few topics in modern physics have caused as much confusion and misunderstanding as that of displacement current. This is in part due to the fact that Maxwell used a sea of molecular vortices in his derivation, while modern textbooks operate on the basis that displacement current can exist in free space. Maxwell's derivation is unrelated to the modern day derivation for displacement current in the vacuum, which is based on consistency between Ampère's circuital law for the magnetic field and the continuity equation for electric charge.
Maxwell's purpose is stated by him at (Part I, p. 161):
He is careful to point out the treatment is one of analogy:
In part III, in relation to displacement current, he says
Clearly Maxwell was driving at magnetization even though the same introduction clearly talks about dielectric polarization.
Maxwell compared the speed of electricity measured by Wilhelm Eduard Weber and Rudolf Kohlrausch (193,088 miles/second) and the speed of light determined by the Fizeau experiment (195,647 miles/second). Based on their same speed, he concluded that "light consists of transverse undulations in the same medium that is the cause of electric and magnetic phenomena."
But although the above quotations point towards a magnetic explanation for displacement current, for example, based upon the divergence of the above curl equation, Maxwell's explanation ultimately stressed linear polarization of dielectrics:
With some change of symbols (and units), and combined with the results deduced in the section on the current in capacitors, these equations take the familiar form:
$\mathbf{J}_\mathrm{D} = \frac{\partial \mathbf{D}}{\partial t}$
When it came to deriving the electromagnetic wave equation from displacement current in his 1865 paper 'A Dynamical Theory of the Electromagnetic Field', he got around the problem of the non-zero divergence associated with Gauss's law and dielectric displacement by eliminating the Gauss term and deriving the wave equation exclusively for the solenoidal magnetic field vector.
Maxwell's emphasis on polarization diverted attention towards the electric capacitor circuit, and led to the common belief that Maxwell conceived of displacement current so as to maintain conservation of charge in an electric capacitor circuit. There are a variety of debatable notions about Maxwell's thinking, ranging from his supposed desire to perfect the symmetry of the field equations to the desire to achieve compatibility with the continuity equation.
See also
Electromagnetic wave equation
Ampère's circuital law
Capacitance
References
Maxwell's papers
On Faraday's Lines of Force Maxwell's paper of 1855
Maxwell's paper of 1861
Maxwell's paper of 1864
Further reading
AM Bork Maxwell, Displacement Current, and Symmetry (1963)
AM Bork Maxwell and the Electromagnetic Wave Equation (1967)
External links
Electric current
Electricity concepts
Electrodynamics
Electromagnetism | Displacement current | [
"Physics",
"Mathematics"
] | 2,761 | [
"Electromagnetism",
"Physical phenomena",
"Physical quantities",
"Fundamental interactions",
"Electrodynamics",
"Electric current",
"Wikipedia categories named after physical quantities",
"Dynamical systems"
] |
555,750 | https://en.wikipedia.org/wiki/Permeable%20paving | Permeable paving surfaces are made of either a porous material that enables stormwater to flow through it or nonporous blocks spaced so that water can flow between the gaps. Permeable paving can also include a variety of surfacing techniques for roads, parking lots, and pedestrian walkways. Permeable pavement surfaces may be composed of; pervious concrete, porous asphalt, paving stones, or interlocking pavers. Unlike traditional impervious paving materials such as concrete and asphalt, permeable paving systems allow stormwater to percolate and infiltrate through the pavement and into the aggregate layers and/or soil below. In addition to reducing surface runoff, permeable paving systems can trap suspended solids, thereby filtering pollutants from stormwater.
Permeable pavement is commonly used on roads, paths and parking lots subject to light vehicular traffic, such as cycle-paths, service or emergency access lanes, road and airport shoulders, and residential sidewalks and driveways.
Description and applications
Permeable solutions can be based on porous asphalt and concrete surfaces, concrete pavers (permeable interlocking concrete paving systems – PICP), or polymer-based grass pavers, grids and geocells. Porous pavements such as pervious concrete and pervious asphalt are better suited for urbanized areas that see more frequent vehicular traffic, while concrete pavers, grids, and geocells are better suited for light vehicular traffic, pedestrian and cycling pathways, and overflow parking lots. Pervious concrete pavers allow water to percolate and infiltrate through the pavers and into the aggregate layers and/or soil below. Impervious concrete pavers installed with ample void space between each paver function in the same way as pervious concrete pavers as they enable stormwater to drain into the voids between each paver, either filled with coarse aggregate or vegetation, to a stone and/or soil base layer for on-site infiltration and filtering. Polymer based grass grid or cellular paver systems provide load bearing reinforcement for unpaved surfaces of gravel or turf.
Grass pavers, plastic turf reinforcing grids (PTRG), and geocells (cellular confinement systems) are honeycombed 3D grid-cellular systems, made of thin-walled HDPE plastic or other polymer alloys. These provide grass reinforcement, ground stabilization and gravel retention. The 3D structure reinforces infill and transfers vertical loads from the surface, distributing them over a wider area. Selection of the type of cellular grid depends to an extent on the surface material, traffic and loads. The cellular grids are installed on a prepared base layer of open-graded stone (higher void spacing) or engineered stone (stronger). The surface layer may be compacted gravel or topsoil seeded with grass and fertilizer. In addition to load support, the cellular grid reduces compaction of the soil to maintain permeability, while the roots improve permeability due to their root channels.
In new suburban growth, porous pavements protect watersheds by delaying and filtering the surge flow. In existing built-up areas and towns, redevelopment and reconstruction are opportunities to implement stormwater water management practices. Permeable paving is an important component in Low Impact Development (LID), a process for land development in the United States that attempts to minimize impacts on water quality and the similar concept of sustainable drainage systems (SuDS) in the United Kingdom.
The infiltration capacity of the native soil is a key design consideration for determining the depth of base rock for stormwater storage or for whether an underdrain system is needed.
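As a back-of-envelope illustration (the sizing relation, default void ratio, and storm values below are assumptions for this sketch, not a design standard), the required depth of open-graded base rock can be estimated from the design storm depth, the run-on from contributing impervious area, the void space of the aggregate, and how much the subgrade infiltrates during the event:

```python
def base_rock_depth_mm(storm_depth_mm, run_on_ratio=0.0, void_ratio=0.4,
                       drained_during_storm_mm=0.0):
    """Estimate base-rock depth so its voids hold the design storm.

    storm_depth_mm: rain falling on the pavement itself
    run_on_ratio:   contributing impervious area / pavement area
    void_ratio:     storage fraction of the open-graded aggregate
    drained_during_storm_mm: water infiltrated by the subgrade meanwhile
    """
    water_to_store = storm_depth_mm * (1.0 + run_on_ratio) - drained_during_storm_mm
    return max(water_to_store, 0.0) / void_ratio

# 50 mm design storm, run-on from an equal impervious roof area,
# slow clay subgrade infiltrating only 5 mm during the event:
print(f"{base_rock_depth_mm(50, run_on_ratio=1.0, drained_during_storm_mm=5):.0f} mm")
# -> 238 mm of open-graded base rock
```

Deeper base layers are the usual remedy for slow-draining subgrades, which is the design trade-off the paragraph above describes; where even that is impractical, an underdrain carries the excess away.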
Advantages
Managing runoff
Permeable paving surfaces have been demonstrated as effective in managing runoff from paved surfaces and recharging groundwater aquifers. Large volumes of urban runoff causes serious erosion and siltation in surface water bodies. Permeable pavers provide a solid ground surface, strong enough to take heavy loads, like large vehicles, while at the same time they allow water to filter through the surface and reach the underlying soils, mimicking natural ground absorption. They can reduce downstream flooding and stream bank erosion, and maintain base flows in rivers to keep ecosystems self-sustaining. Permeable pavers also combat erosion that occurs when grass is dry or dead, by replacing grassed areas in suburban and residential environments. The goal is to control stormwater at the source, reduce runoff and improve water quality by filtering pollutants in the subsurface layers.
Controlling pollutants
To control pollutants found in surface runoff, permeable paving surfaces capture the stormwater in the soil or aggregate base below the road or pathway, and subsequently treat the runoff via percolation, which allows water to infiltrate and support groundwater recharge, or contain the stormwater for release back into municipal stormwater management systems after a storm. Permeable paving systems have been shown to be effective in reducing suspended solids, Biochemical Oxygen Demand (BOD), chemical oxygen demand, and ammonium concentrations within groundwater. In areas where infiltration is not possible due to unsuitable soil conditions, permeable pavements are used in the attenuation mode, where water is retained in the pavement and slowly released to surface water systems between storm events.
Trees
Permeable pavements may give urban trees the rooting space they need to grow to full size. A "structural-soil" pavement base combines structural aggregate with soil; a porous surface admits vital air and water to the rooting zone. This integrates healthy ecology and thriving cities, with the living tree canopy above, the city's traffic on the ground, and living tree roots below. The benefits of permeable pavements on urban tree growth have not been conclusively demonstrated, and many researchers have observed that tree growth is not increased if construction practices compact materials before permeable pavements are installed.
Reducing heat island effect
Research findings indicate that employing high albedo (reflective) and permeable pavement has the potential to alleviate near-surface heat island effects and enhance air quality, while also potentially improving human thermal comfort. In comparison to impermeable pavement, permeable pavement exhibits minimal thermal impact on the near-surface air due to its capacity for heat exchange.
Disadvantages
Runoff volumes
Permeable pavements are designed to replace Effective Impervious Areas (EIAs), but can be used, in some cases, to manage stormwater from other impervious surfaces on site. Use of this technique must be part of an overall on site management system for stormwater, and is not a replacement for other techniques.
During large storm events, the water table below the porous pavement can rise to a higher level, preventing the precipitation from being absorbed into the ground. Some additional water is stored in the open graded or crushed drain rock base, and remains until the subgrade can absorb the water. For clay-based soils, or other low to 'non'-draining soils, it is important to increase the depth of the crushed drain rock base to allow additional capacity for the water as it waits to be infiltrated.
Pollutant load
Runoff across some land uses may become contaminated, where pollutant concentrations exceed those typically found in stormwater. These "hot spots" include commercial plant nurseries, recycling facilities, fueling stations, industrial storage, marinas, some outdoor loading facilities, public works yards, hazardous materials generators (if containers are exposed to rainfall), vehicle service, washing, and maintenance areas, and steam cleaning facilities. Since porous pavement is an infiltration practice, it should not be applied at stormwater hot spots due to the potential for groundwater contamination. All contaminated runoff should be prevented from entering municipal storm drain systems by using best management practices (BMPs) for the specific industry or activity.
Weight and traffic volumes
Reference sources differ on whether low or medium traffic volumes and weights are appropriate for porous pavements due to the variety of physical properties of each system. For example, around truck loading docks and areas of high commercial traffic, porous pavement is sometimes cited as being inappropriate. However, given the variability of products available, the growing number of existing installations in North America and targeted research by both manufacturers and user agencies, the range of accepted applications seems to be expanding. Some concrete paver companies have developed products specifically for industrial applications. Working examples exist at fire halls, busy retail complex parking lots, and on public and private roads, including intersections in parts of North America with quite severe winter conditions.
Siting
Permeable pavements may not be appropriate when land surrounding or draining into the pavement exceeds a 20 percent slope, where pavement is down slope from buildings or where foundations have piped drainage at their footers. The key is to ensure that drainage from other parts of a site is intercepted and dealt with separately rather than being directed onto permeable surfaces.
Climate
Cold climates may present special challenges. Road salt contains chlorides that could migrate through the porous pavement into groundwater. Snow plow blades could catch the edges of concrete pavers or other block installations, damaging surfaces and creating potholes. Sand cannot be used for snow and ice control on porous surfaces because it will plug the pores and reduce permeability. Although there are design modifications to reduce the risks, infiltrating runoff may freeze below the pavement, causing frost heave. Another issue is spalling damage, which occurs exclusively on porous concrete pavement as a result of salt application during the winter season. Porous paving is therefore often suggested for warmer climates. However, other materials have proven effective, even lowering winter maintenance costs by preserving salt in the pavement itself, which also reduces the amount of stormwater runoff contaminated with salt chlorides. Pervious concrete and asphalt designed to reduce frost heave and spalling damage have been used successfully in Norway and New Hampshire. Furthermore, experience suggests that rapid drainage below porous surfaces should be provided in order to increase the rate of snow melt above ground.
Cost
It can be difficult to compare cost impacts between conventional impervious surfaces and permeable surfaces given variables such as lifespan, geographic location, type of permeable paving system and site-specific factors. Some estimates put the cost of permeable paving at about one third more than that of conventional impervious paving. Using permeable paving, however, can reduce the cost of providing larger or additional stormwater BMPs on site, and these savings should be factored into any cost analysis. In addition, the off-site environmental impact costs of not reducing on-site stormwater volumes and pollution have historically been ignored or assigned to other groups (local government parks, public works and environmental restoration budgets, fisheries losses, etc.). Permeable paving systems, specifically pervious concrete pavers, have shown significant cost benefits in life cycle assessment, as the total weight of material needed for each unit is reduced by nature of the porous design.
Longevity and maintenance
Permeable paving systems, especially those with porous surfaces, require maintenance to keep the pores clear of fine aggregates so as not to hinder the system's ability to infiltrate stormwater. The frequency of cleaning is again dependent on many site-specific factors, such as runoff volume, neighboring sites and climate. Often, cleaning of permeable paving systems is done by suction excavators, which are alternatively used for excavation in sensitive areas and are therefore becoming increasingly common. If maintenance is not carried out on a regular basis, porous pavements can begin to function more like impervious surfaces. With more advanced paving systems the level of maintenance needed can be greatly decreased; elastomerically bound glass pavement requires less maintenance than regular concrete paving because it has 50% more void space.
Plastic grid systems, if selected and installed correctly, are becoming more and more popular with local government maintenance personnel owing to the reduction in maintenance efforts: reduced gravel migration and weed suppression in public park settings.
Some permeable paving products are prone to damage from misuse, such as drivers who tear up patches of plastic & gravel grid systems by "joy riding" on remote parking lots at night. The damage is not difficult to repair but can look unsightly in the meantime. Grass pavers require supplemental watering in the first year to establish the vegetation, otherwise they may need to be re-seeded. Regional climate also means that most grass applications will go dormant during the dry season. While brown vegetation is only a matter of aesthetics, it can influence public support for this type of permeable paving.
Traditional permeable concrete paving bricks tend to lose their color in relatively short time which can be costly to replace or clean and is mainly due to the problem of efflorescence.
Types of permeable pavement
Installation of porous pavements is no more difficult than that of dense pavements, but has different specifications and procedures which must be strictly adhered to. Nine different families of porous paving materials present distinctive advantages and disadvantages for specific applications. Here are examples:
Pervious concrete
Pervious concrete is widely available, can bear frequent traffic, and is universally accessible. Pervious concrete quality depends on the installer's knowledge and experience.
Plastic grids
Plastic grids allow for a 100% porous system using structural grid systems for containing and stabilizing either gravel or turf. These grids come in a variety of shapes and sizes depending on use; from pathways to commercial parking lots. These systems have been used readily in Europe for over a decade, but are gaining popularity in North America due to requirements by government for many projects to meet LEED environmental building standards. Plastic grid systems are also popular with homeowners due to their lower cost to install, ease of installation, and versatility. The ideal design for this type of grid system is a closed cell system, which prevents gravel/sand/turf from migrating laterally.
Porous asphalt
Porous asphalt is produced and placed using the same methods as conventional asphalt concrete; it differs in that fine (small) aggregates are omitted from the asphalt mixture. The remaining large, single-sized aggregate particles leave open voids that give the material its porosity and permeability. To ensure pavement strength, fiber may be added to the mix or a polymer-modified asphalt binder may be used. Generally, porous asphalt pavements are designed with a subsurface reservoir that holds water that passes through the pavement, allowing it to evaporate and/or percolate slowly into the surrounding soils.
Open-graded friction courses (OGFC) are a porous asphalt surface course used on highways to improve driving safety by removing water from the surface. These use an open-graded mix design for the top layer of asphalt. Unlike a full-depth porous asphalt pavement, OGFCs do not drain water to the base of a pavement. Instead, they allow water to infiltrate the top 3/4 to 1.5 inch of the pavement and then drain out to the side of the roadway. This can improve the friction characteristics of the road and reduce road spray.
Single-sized aggregate
Single-sized aggregate without any binder, e.g. loose gravel, stone-chippings, is another alternative. Although it can only be safely used in walkways and very low-speed, low-traffic settings, e.g. car-parks and drives, its potential cumulative area is great.
Porous turf
Porous turf, if properly constructed, can be used for occasional parking like that at churches and stadia. Plastic turf reinforcing grids can be used to support the increased load. Living turf transpires water, actively counteracting the "heat island" with what appears to be a green open lawn.
Permeable interlocking concrete pavements
Permeable interlocking concrete pavements are concrete units with open, permeable spaces between the units. More recently manufacturers have introduced styles with smaller joints, allowing for better ADA compliance while still capturing a significant amount of stormwater. They give an architectural appearance and can bear both light and heavy traffic, particularly interlocking concrete pavers, excepting high-volume or high-speed roads. Some products are polymer-coated and have an entirely porous face.
Permeable clay brick pavements
Permeable clay brick pavements are fired clay brick units with open, permeable spaces between the units. Clay pavers provide a durable surface that allows stormwater runoff to permeate through the joints.
Resin-bound paving
Resin bound paving is a mixture of resin binder and aggregate. Clear resin is used to fully coat each aggregate particle before laying. Enough resin is used to allow each aggregate particle to adhere to one another and to the base yet leave voids for water to permeate through. Resin bound paving provides a strong and durable surface that is suitable for pedestrian and vehicular traffic in applications such as pathways, driveways, car parks and access roads.
Stabilized decomposed granite
Stabilized decomposed granite is a mixture of a non-resin binder and aggregate (decomposed granite). The binder, which may include color, is mixed with the decomposed granite and the mixture is moistened either before or after it is put in place. Stabilized decomposed granite provides a strong and durable surface that is suitable for pedestrian and vehicular traffic in applications such as pathways, driveways, car parks and access roads. The surface is ADA compliant and can be painted on.
Bound recycled glass porous pavement
Elastomerically bound recycled glass porous pavement consists of processed post-consumer glass bonded with a mixture of resins, pigments, granite and binding agents. Approximately 75 percent of glass in the U.S. is disposed of in landfills.
Wood permeable pavement
Wood permeable pavement is a natural and sustainable building material. Architects and landscape designers turning towards permeable pavers will find that some highly durable hardwoods (e.g. Black Locust) are an effective permeable paver material. Wood paver blocks made of Black Locust provide a highly permeable, durable surface that will last for decades because of the characteristics of the wood. Black Locust wood pavers exceed 10,180 psi (pounds per square inch) in compressive strength and have a Janka hardness of 1,700 lbf. They are suitable for pedestrian and vehicular traffic in the form of pathways and driveways and are placed upon permeable foundations.
See also
Stormwater management practices related to roadways:
Bioretention
Bioswale
Hoggin
Other related pages
Pavement engineering
Blue roof
Notes
References
National Conference on Sustainable Drainage (UK)
NOVATECH – International Conference On Sustainable Techniques And Strategies In Urban Water Management
U.S. Federal Highway Administration. Turner-Fairbank Highway Research Center. McLean, VA. "Waste Glass." Recycled Materials in the Highway Environment. Accessed 2010-07-05.
External links
Sustainable Drainage: A Review of Published Material on the Performance of Various SUDS Components – Construction Industry Research & Information Assn. (UK)
Permeable Paving & SuDS - Interpave, The Precast Concrete Paving and Kerb Association (UK)
Technical Note 14D – Permeable Clay Brick Pavements – Brick Industry Association (US)
Sustainable Technologies Evaluation Program Low Impact Development Planning and Design Guide (Ontario, Canada)
Permeable Stabilised Gravel Surfaces. SuDS Compliant. - Nidagravel UK | Gravel Stabilisers UK Ltd (UK)
Pavements
Building materials
Environmental engineering
Hydrology and urban planning
Water conservation
Sustainable products
Paving
Paving | Permeable paving | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,992 | [
"Hydrology",
"Building engineering",
"Chemical engineering",
"Architecture",
"Construction",
"Materials",
"Civil engineering",
"Hydrology and urban planning",
"Environmental engineering",
"Matter",
"Building materials"
] |
555,768 | https://en.wikipedia.org/wiki/Mean-field%20theory | In physics and probability theory, Mean-field theory (MFT) or Self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other.
The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost.
MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium.
Origins
The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Curie-Weiss law for magnetic susceptibility, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original problem solvable and open to calculation, and in some cases MFT may give very accurate approximations.
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means that an MFT system has no fluctuations, which coincides with the idea that one is replacing all interactions with a "mean field".
Quite often, MFT provides a convenient launch point for studying higher-order fluctuations. For example, when computing the partition function, studying the combinatorics of the interaction terms in the Hamiltonian can sometimes at best produce perturbation results or Feynman diagrams that correct the mean-field approximation.
Validity
In general, dimensionality plays an active role in determining whether a mean-field approach will work for any particular problem. There is sometimes a critical dimension above which MFT is valid and below which it is not.
Heuristically, many interactions are replaced in MFT by one effective interaction. So if the field or particle exhibits many random interactions in the original system, they tend to cancel each other out, so the mean effective interaction and MFT will be more accurate. This is true in cases of high dimensionality, when the Hamiltonian includes long-range forces, or when the particles are extended (e.g. polymers). The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, often depending upon the number of spatial dimensions in the system of interest.
Formal approach (Hamiltonian)
The formal basis for mean-field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

$$\mathcal{H} = \mathcal{H}_0 + \Delta\mathcal{H}$$

has the following upper bound:

$$F \leq F_0 \equiv \langle \mathcal{H} \rangle_0 - T S_0,$$

where $S_0$ is the entropy, and $F$ and $F_0$ are Helmholtz free energies. The average is taken over the equilibrium ensemble of the reference system with Hamiltonian $\mathcal{H}_0$. In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as

$$\mathcal{H}_0 = \sum_{i=1}^N h_i(\xi_i),$$

where $\xi_i$ are the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth), one can consider sharpening the upper bound by minimising the right side of the inequality. The minimising reference system is then the "best" approximation to the true system using non-correlated degrees of freedom and is known as the mean field approximation.

For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,

$$\mathcal{H} = \sum_{(i,j)\in\mathcal{P}} V_{i,j}(\xi_i, \xi_j),$$

where $\mathcal{P}$ is the set of pairs that interact, the minimising procedure can be carried out formally. Define $\operatorname{Tr}_i f(\xi_i)$ as the generalized sum of the observable $f$ over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by

$$F_0 = \operatorname{Tr}_{1,2,\ldots,N} \mathcal{H}(\xi_1,\ldots,\xi_N)\, P^{(N)}_0(\xi_1,\ldots,\xi_N) + kT \operatorname{Tr}_{1,2,\ldots,N} P^{(N)}_0(\xi_1,\ldots,\xi_N) \log P^{(N)}_0(\xi_1,\ldots,\xi_N),$$

where $P^{(N)}_0$ is the probability to find the reference system in the state specified by the variables $(\xi_1,\ldots,\xi_N)$. This probability is given by the normalized Boltzmann factor

$$P^{(N)}_0(\xi_1,\ldots,\xi_N) = \frac{1}{Z^{(N)}_0} e^{-\beta \mathcal{H}_0} = \prod_{i=1}^N \frac{1}{Z_0} e^{-\beta h_i(\xi_i)} \equiv \prod_{i=1}^N P^{(i)}_0(\xi_i),$$

where $Z_0$ is the partition function. Thus

$$F_0 = \sum_{(i,j)\in\mathcal{P}} \operatorname{Tr}_{i,j} V_{i,j}(\xi_i,\xi_j)\, P^{(i)}_0(\xi_i)\, P^{(j)}_0(\xi_j) + kT \sum_{i=1}^N \operatorname{Tr}_i P^{(i)}_0(\xi_i) \log P^{(i)}_0(\xi_i).$$

In order to minimise, we take the derivative with respect to the single-degree-of-freedom probabilities $P^{(i)}_0$ using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations

$$P^{(i)}_0(\xi_i) = \frac{1}{Z_0} e^{-\beta h_i^{\mathrm{MF}}(\xi_i)}, \qquad i = 1, 2, \ldots, N,$$

where the mean field is given by

$$h_i^{\mathrm{MF}}(\xi_i) = \sum_{\{j \,\mid\, (i,j)\in\mathcal{P}\}} \operatorname{Tr}_j V_{i,j}(\xi_i, \xi_j)\, P^{(j)}_0(\xi_j).$$
Applications
Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.
Ising model
Formal derivation
The Bogoliubov inequality, shown above, can be used to find the dynamics of a mean field model of the two-dimensional Ising lattice. A magnetisation function can be calculated from the resultant approximate free energy. The first step is choosing a more tractable approximation of the true Hamiltonian. Using a non-interacting or effective-field Hamiltonian

$$\mathcal{H}_0 = -m \sum_i s_i,$$

the variational free energy is

$$F_V = F_0 + \Big\langle \Big( -J \sum_{\langle i,j\rangle} s_i s_j - h \sum_i s_i \Big) - \Big( -m \sum_i s_i \Big) \Big\rangle_0.$$

By the Bogoliubov inequality, simplifying this quantity and calculating the magnetisation function that minimises the variational free energy yields the best approximation to the actual magnetisation. The minimiser is

$$m = J z \langle s_i \rangle_0 + h,$$

i.e. the effective field equals the external field plus the coupling-weighted ensemble average of spin. Since $\langle s_i \rangle_0 = \tanh(\beta m)$, this simplifies to the self-consistency condition

$$m = h + J z \tanh(\beta m).$$
Equating the effective field felt by all spins to a mean spin value relates the variational approach to the suppression of fluctuations. The physical interpretation of the magnetisation function is then a field of mean values for individual spins.
Non-interacting spins approximation
Consider the Ising model on a $d$-dimensional lattice. The Hamiltonian is given by

$$H = -J \sum_{\langle i,j\rangle} s_i s_j - h \sum_i s_i,$$

where $\sum_{\langle i,j\rangle}$ indicates summation over the pair of nearest neighbors $\langle i,j\rangle$, and $s_i, s_j = \pm 1$ are neighboring Ising spins.

Let us transform our spin variable by introducing the fluctuation from its mean value $m_i \equiv \langle s_i \rangle$. We may rewrite the Hamiltonian as

$$H = -J \sum_{\langle i,j\rangle} (m_i + \delta s_i)(m_j + \delta s_j) - h \sum_i s_i,$$

where we define $\delta s_i \equiv s_i - m_i$; this is the fluctuation of the spin.
If we expand the right side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
The mean field approximation consists of neglecting this second-order fluctuation term:

$$H \approx H^{\mathrm{MF}} \equiv -J \sum_{\langle i,j\rangle} \left( m_i m_j + m_i\,\delta s_j + m_j\,\delta s_i \right) - h \sum_i s_i.$$

These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.
Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent ($m_i = m$), since the Ising chain is translationally invariant. This yields

$$H \approx -J \sum_{\langle i,j\rangle} \left( m^2 + m\,(\delta s_i + \delta s_j) \right) - h \sum_i s_i.$$

The summation over neighboring spins can be rewritten as $\sum_{\langle i,j\rangle} = \frac{1}{2}\sum_i \sum_{j \in nn(i)}$, where $nn(i)$ means "nearest neighbor of $i$", and the prefactor $\frac{1}{2}$ avoids double counting, since each bond participates in two spins. Simplifying leads to the final expression

$$H^{\mathrm{MF}} = \frac{J m^2 N z}{2} - (h + m J z) \sum_i s_i,$$

where $z$ is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field $h^{\mathrm{eff}} = h + J z m$, which is the sum of the external field $h$ and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension $d$, $z = 2d$).
Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain

$$Z = e^{-\beta J m^2 N z / 2}\,\left[ 2 \cosh\!\big( \beta\,(h + m J z) \big) \right]^N,$$

where $N$ is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization $m$ as a function of $h^{\mathrm{eff}}$:

$$m = \tanh\!\big( \beta\,(h + m J z) \big).$$

We thus have two equations relating $m$ and $h^{\mathrm{eff}}$, allowing us to determine $m$ as a function of temperature. This leads to the following observation:

For temperatures greater than a certain value $T_c$, the only solution is $m = 0$. The system is paramagnetic.

For $T < T_c$, there are two non-zero solutions $m = \pm m_0$ (in zero external field). The system is ferromagnetic.

$T_c$ is given by the relation $k_B T_c = J z$.
This shows that MFT can account for the ferromagnetic phase transition.
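To make the self-consistency condition concrete, here is a minimal numerical sketch (not from the source article): it iterates $m = \tanh(\beta (J z m + h))$ with units chosen so $k_B = 1$, hence $T_c = J z$ as stated above.

```python
import numpy as np

# Fixed-point iteration for the mean-field self-consistency equation
# m = tanh(beta * (J*z*m + h)); units chosen so k_B = 1, hence T_c = J*z.

def magnetization(T, J=1.0, z=4, h=0.0, m0=0.9, tol=1e-12, max_iter=10_000):
    beta, m = 1.0 / T, m0
    for _ in range(max_iter):
        m_new = np.tanh(beta * (J * z * m + h))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# Below T_c = 4 the spontaneous magnetization is non-zero; above, it vanishes.
for T in (2.0, 3.9, 4.1, 6.0):
    print(f"T = {T:>3}: m = {magnetization(T):.4f}")
```

With $z = 4$ (a 2D square lattice) the iteration converges to $m \approx 0.96$ at $T = 2$ and to $m = 0$ for any $T > 4$, reproducing the mean-field transition.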
Application to other systems
Similarly, MFT can be applied to other types of Hamiltonian as in the following cases:
To study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap $\Delta$.
The molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
To determine the optimal amino acid side chain packing given a fixed protein backbone in protein structure prediction (see Self-consistent mean field (biology)).
To determine the elastic properties of a composite material.
Variational minimisation like mean field theory can also be used in statistical inference.
Extension to time-dependent mean fields
In mean field theory, the mean field appearing in the single-site problem is a time-independent scalar or vector quantity. However, this isn't always the case: in a variant of mean field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal–Mott-insulator transition.
See also
Dynamical mean field theory
Mean field game theory
References
Statistical mechanics
Concepts in physics
Electronic structure methods | Mean-field theory | [
"Physics",
"Chemistry"
] | 2,035 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry",
"nan",
"Statistical mechanics"
] |
556,263 | https://en.wikipedia.org/wiki/Copper%28II%29%20nitrate | Copper(II) nitrate describes any member of the family of inorganic compounds with the formula Cu(NO3)2(H2O)x. The hydrates are hygroscopic blue solids. Anhydrous copper nitrate forms blue-green crystals and sublimes in a vacuum at 150-200 °C. Common hydrates are the hemipentahydrate and trihydrate.
Synthesis and reactions
Hydrated copper(II) nitrate
Hydrated copper nitrate is prepared by treating copper metal or its oxide with nitric acid:

Cu + 4 HNO3 → Cu(NO3)2 + 2 NO2 + 2 H2O
CuO + 2 HNO3 → Cu(NO3)2 + H2O

The same salts can be prepared by treating copper metal with an aqueous solution of silver nitrate. That reaction illustrates the ability of copper metal to reduce silver ions.
In aqueous solution, the hydrates exist as the aqua complex [Cu(H2O)6]2+. Such complexes are highly labile and subject to rapid ligand exchange due to the d9 electronic configuration of copper(II).
Attempted dehydration of any of the hydrated copper(II) nitrates by heating affords the oxides, not anhydrous Cu(NO3)2. At 80 °C the hydrates convert to "basic copper nitrate", Cu2(NO3)(OH)3, which converts to CuO at 180 °C. Exploiting this reactivity, copper nitrate can be used to generate nitric acid by heating it until decomposition and passing the fumes directly into water. This method is similar to the last step in the Ostwald process. The equations are as follows:

2 Cu(NO3)2 → 2 CuO + 4 NO2 + O2
3 NO2 + H2O → 2 HNO3 + NO
Treatment of copper(II) nitrate solutions with triphenylphosphine, triphenylarsine, and triphenylstibine gives the corresponding copper(I) complexes of the EPh3 ligands (E = P, As, Sb; Ph = C6H5). The group V ligand is oxidized to the oxide.
Anhydrous copper(II) nitrate
Anhydrous Cu(NO3)2 is one of the few anhydrous transition metal nitrates. It cannot be prepared by reactions containing or producing water. Instead, anhydrous Cu(NO3)2 forms when copper metal is treated with dinitrogen tetroxide:

Cu + 2 N2O4 → Cu(NO3)2 + 2 NO
Structure
Anhydrous copper(II) nitrate
Two polymorphs of anhydrous copper(II) nitrate, α and β, are known. Both polymorphs are three-dimensional coordination polymer networks with infinite chains of copper(II) centers and nitrate groups. The α form has only one Cu environment, with [4+1] coordination, but the β form has two different copper centers, one with [4+1] and one that is square planar.
The nitromethane solvate also features "[4+1] coordination", with four short Cu-O bonds of approximately 200 pm and one longer bond at 240 pm.
Heating solid anhydrous copper(II) nitrate under a vacuum to 150-200 °C leads to sublimation and "cracking" to give a vapour of monomeric copper(II) nitrate molecules. In the vapour phase, the molecule features two bidentate nitrate ligands.
Hydrated copper(II) nitrate
Five hydrates have been reported: the monohydrate (Cu(NO3)2·H2O), the sesquihydrate (Cu(NO3)2·1.5H2O), the hemipentahydrate (Cu(NO3)2·2.5H2O), a trihydrate (Cu(NO3)2·3H2O), and a hexahydrate ([Cu(H2O)6](NO3)2). The crystal structure of the hexahydrate appeared to show six almost equal Cu–O distances, not revealing the usual effect of a Jahn–Teller distortion that is otherwise characteristic of octahedral Cu(II) complexes. This non-effect was attributed to the strong hydrogen bonding that limits the elasticity of the Cu–O bonds, but it is probably due to nickel being misidentified as copper in the refinement.
Applications
Copper(II) nitrate finds a variety of applications, the main one being its conversion to copper(II) oxide, which is used as catalyst for a variety of processes in organic chemistry. Its solutions are used in textiles and polishing agents for other metals. Copper nitrates are found in some pyrotechnics. It is often used in school laboratories to demonstrate chemical voltaic cell reactions. It is a component in some ceramic glazes and metal patinas.
Organic synthesis
Copper nitrate, in combination with acetic anhydride, is an effective reagent for nitration of aromatic compounds, known as the Menke nitration.
Hydrated copper nitrate adsorbed onto clay affords a reagent called "Claycop". The resulting blue-colored clay is used as a slurry, for example for the oxidation of thiols to disulfides. Claycop is also used to convert dithioacetals to carbonyls. A related reagent based on montmorillonite has proven useful for the nitration of aromatic compounds.
Electrowinning
Copper(II) nitrate may also be used for small-scale copper electrowinning, with ammonia (NH3) as a byproduct.
Naturally occurring copper nitrates
No mineral of the ideal formula Cu(NO3)2 or its hydrates is known. Likasite and buttgenbachite are related minerals.

Natural basic copper nitrates include the rare minerals gerhardtite and rouaite, both polymorphs of Cu2(NO3)(OH)3. A much more complex, basic, hydrated and chloride-bearing natural salt is buttgenbachite.
References
External links
National Pollutant Inventory – Copper and compounds fact sheet
ICSC Copper and compounds fact sheet
Copper(II) compounds
Nitrates
Pyrotechnic oxidizers
Pyrotechnic colorants
Oxidizing agents | Copper(II) nitrate | [
"Chemistry"
] | 1,137 | [
"Nitrates",
"Redox",
"Oxidizing agents",
"Salts"
] |
556,970 | https://en.wikipedia.org/wiki/Irradiance | In radiometry, irradiance is the radiant flux received by a surface per unit area. The SI unit of irradiance is the watt per square metre (symbol W⋅m−2 or W/m2). The CGS unit erg per square centimetre per second (erg⋅cm−2⋅s−1) is often used in astronomy. Irradiance is often called intensity, but this term is avoided in radiometry where such usage leads to confusion with radiant intensity. In astrophysics, irradiance is called radiant flux.
Spectral irradiance is the irradiance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The two forms have different dimensions and units: spectral irradiance of a frequency spectrum is measured in watts per square metre per hertz (W⋅m−2⋅Hz−1), while spectral irradiance of a wavelength spectrum is measured in watts per square metre per metre (W⋅m−3), or more commonly watts per square metre per nanometre (W⋅m−2⋅nm−1).
Mathematical definitions
Irradiance
Irradiance of a surface, denoted $E_\mathrm{e}$ ("e" for "energetic", to avoid confusion with photometric quantities), is defined as

$$E_\mathrm{e} = \frac{\partial \Phi_\mathrm{e}}{\partial A},$$
where
∂ is the partial derivative symbol;
Φe is the radiant flux received;
A is the area.
The radiant flux emitted by a surface is called radiant exitance.
Spectral irradiance
Spectral irradiance in frequency of a surface, denoted $E_{\mathrm{e},\nu}$, is defined as

$$E_{\mathrm{e},\nu} = \frac{\partial E_\mathrm{e}}{\partial \nu},$$

where $\nu$ is the frequency.

Spectral irradiance in wavelength of a surface, denoted $E_{\mathrm{e},\lambda}$, is defined as

$$E_{\mathrm{e},\lambda} = \frac{\partial E_\mathrm{e}}{\partial \lambda},$$

where $\lambda$ is the wavelength.
Property
Irradiance of a surface is also, according to the definition of radiant flux, equal to the time-average of the component of the Poynting vector perpendicular to the surface:

$$E_\mathrm{e} = \langle |\mathbf{S}| \rangle \cos\alpha,$$

where

$\langle\,\cdot\,\rangle$ is the time-average;
$\mathbf{S}$ is the Poynting vector;
$\alpha$ is the angle between a unit vector normal to the surface and $\mathbf{S}$.
For a propagating sinusoidal linearly polarized electromagnetic plane wave, the Poynting vector always points to the direction of propagation while oscillating in magnitude. The irradiance of a surface is then given by

$$E_\mathrm{e} = \frac{n}{2 \mu_0 c} E_\mathrm{m}^2 = \frac{n \varepsilon_0 c}{2} E_\mathrm{m}^2 = \frac{n}{2 Z_0} E_\mathrm{m}^2,$$

where

$E_\mathrm{m}$ is the amplitude of the wave's electric field;
$n$ is the refractive index of the medium of propagation;
$c$ is the speed of light in vacuum;
$\mu_0$ is the vacuum permeability;
$\varepsilon_0$ is the vacuum permittivity;
$Z_0 = \mu_0 c = 1/(\varepsilon_0 c) \approx 376.73\ \Omega$ is the impedance of free space.
This formula assumes that the magnetic susceptibility is negligible; i.e. that μr ≈ 1 (μ ≈ μ0) where μr is the relative magnetic permeability of the propagation medium. This assumption is typically valid in transparent media in the optical frequency range.
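To make the plane-wave formula concrete, here is a small sketch (not from the source) that inverts $E_\mathrm{e} = \tfrac{1}{2} n \varepsilon_0 c E_\mathrm{m}^2$ to estimate the field amplitude of roughly 1000 W/m² sunlight; the function names are ours.

```python
# Irradiance of a linearly polarized plane wave, E_e = (1/2) n eps0 c E_m^2,
# inverted here to estimate the field amplitude of ~1000 W/m^2 sunlight.
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C    = 2.99792458e8       # speed of light in vacuum, m/s

def irradiance_from_field(E_m, n=1.0):
    return 0.5 * n * EPS0 * C * E_m**2

def field_from_irradiance(E_e, n=1.0):
    return math.sqrt(2 * E_e / (n * EPS0 * C))

E_m = field_from_irradiance(1000.0)        # clear-day solar irradiance
print(f"E_m ~ {E_m:.0f} V/m")              # ~868 V/m
print(f"check: {irradiance_from_field(E_m):.0f} W/m^2")
```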
Point source
A point source of light produces spherical wavefronts. The irradiance in this case varies inversely with the square of the distance from the source:

$$E_\mathrm{e} = \frac{\Phi_\mathrm{e}}{A} = \frac{\Phi_\mathrm{e}}{4\pi r^2},$$

where

$r$ is the distance;
$\Phi_\mathrm{e}$ is the radiant flux;
$4\pi r^2$ is the surface area of a sphere of radius $r$.
For quick approximations, this equation indicates that doubling the distance reduces irradiation to one quarter; or similarly, to double irradiation, reduce the distance to 71%.
In astronomy, stars are routinely treated as point sources even though they are much larger than the Earth. This is a good approximation because the distance from even a nearby star to the Earth is much larger than the star's diameter. For instance, the irradiance of Alpha Centauri A (radiant flux: 1.5 L☉, distance: 4.34 ly) is about 2.7 × 10−8 W/m2 on Earth.
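As a quick numerical check of the inverse-square relation (a sketch; the constants are standard values and the function name is ours), the Alpha Centauri A figure quoted above can be reproduced directly:

```python
# Inverse-square irradiance of a point source, checked against the
# Alpha Centauri A figure quoted above (1.5 L_sun at 4.34 ly).
import math

L_SUN = 3.828e26          # nominal solar luminosity, W
LY    = 9.4607e15         # light year, m

def irradiance(radiant_flux_w, distance_m):
    return radiant_flux_w / (4 * math.pi * distance_m**2)

E = irradiance(1.5 * L_SUN, 4.34 * LY)
print(f"{E:.2e} W/m^2")   # ~2.7e-08 W/m^2
```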
Solar irradiance
The global irradiance on a horizontal surface on Earth consists of the direct irradiance $E_{\mathrm{e,dir}}$ and diffuse irradiance $E_{\mathrm{e,diff}}$. On a tilted plane, there is another irradiance component, $E_{\mathrm{e,refl}}$, which is the component that is reflected from the ground. The average ground reflection is about 20% of the global irradiance. Hence, the irradiance $E_\mathrm{e}$ on a tilted plane consists of three components:

$$E_\mathrm{e} = E_{\mathrm{e,dir}} + E_{\mathrm{e,diff}} + E_{\mathrm{e,refl}}.$$
The integral of solar irradiance over a time period is called "solar exposure" or "insolation".
Average solar irradiance at the top of the Earth's atmosphere is roughly 1361 W/m2, but at surface irradiance is approximately 1000 W/m2 on a clear day.
SI radiometry units
See also
Albedo
Fluence
Illuminance
Insolation
Light diffusion
PI curve (photosynthesis-irradiance curve)
Solar azimuth angle
Solar irradiance
Solar noon
Spectral flux density
Stefan–Boltzmann law
References
Physical quantities
Radiometry | Irradiance | [
"Physics",
"Mathematics",
"Engineering"
] | 987 | [
"Physical phenomena",
"Telecommunications engineering",
"Physical quantities",
"Quantity",
"Physical properties",
"Radiometry"
] |
35,658,273 | https://en.wikipedia.org/wiki/Weber%20electrodynamics | Weber electrodynamics is a theory of electromagnetism that preceded Maxwell electrodynamics and was replaced by it by the end of the 19th century. Weber electrodynamics is mainly based on the contributions of André-Marie Ampère, Carl Friedrich Gauss and Wilhelm Eduard Weber. In this theory, Coulomb's law becomes velocity and acceleration dependent. Weber electrodynamics is only applicable for electrostatics, magnetostatics and for the quasistatic approximation. Weber electrodynamics is not suitable for describing electromagnetic waves and for calculating the forces between electrically charged particles that move very rapidly or that are accelerated more than insignificantly.
The outstanding feature of Weber electrodynamics is that it makes it possible to describe magnetic forces between direct currents, low-frequency alternating currents, and permanent magnets without a magnetic field.
History
Around 1820, André-Marie Ampère carried out numerous systematic experiments with direct currents. Eventually in 1823 he developed the force law

$$\mathrm{d}^2\mathbf{F} = -\frac{\mu_0}{4\pi}\, \frac{I_1 I_2}{r^2}\, \hat{\mathbf{r}}\, \big( 2\, (\mathrm{d}\boldsymbol{\ell}_1 \cdot \mathrm{d}\boldsymbol{\ell}_2) - 3\, (\hat{\mathbf{r}} \cdot \mathrm{d}\boldsymbol{\ell}_1)(\hat{\mathbf{r}} \cdot \mathrm{d}\boldsymbol{\ell}_2) \big),$$

which can be used to calculate the force that a current element $I_1\,\mathrm{d}\boldsymbol{\ell}_1$ exerts on another current element $I_2\,\mathrm{d}\boldsymbol{\ell}_2$. Here, $\hat{\mathbf{r}}$ is the unit vector that points from the first current element to the second. A current element should be interpreted as a very short segment of the length of a conductor with a direct current flowing in the direction of $\mathrm{d}\boldsymbol{\ell}$.

In 1835, Carl Friedrich Gauss realized that Ampère's force law can be interpreted by a minor generalization of Coulomb's law. He postulated that the electric force exerted by a point charge $q_1$ on another point charge $q_2$ depends not only on the distance $r$, but also on the relative velocity $\mathbf{v}$:

$$\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0}\, \frac{\hat{\mathbf{r}}}{r^2} \left( 1 + \frac{\mathbf{v}\cdot\mathbf{v}}{c^2} - \frac{3}{2}\, \frac{(\hat{\mathbf{r}}\cdot\mathbf{v})^2}{c^2} \right).$$
Importantly, Gauss's force law is a significant generalization of Ampere's force law, since moving point charges do not represent direct currents. In fact, today Ampere's force law is no longer presented in its original form, as there are equivalent representations for direct currents such as the Biot-Savart law in combination with the Lorentz force. This is the point at which Weber electrodynamics and Maxwell electrodynamics take different paths, because James Clerk Maxwell decided to base his theory on the Biot-Savart law, which was originally also only valid for closed conductor loops.
Wilhelm Eduard Weber's contribution to Weber electrodynamics was that he extended Gauss's force formula in such a way that it was possible to provide a formula for the potential energy. He presented his formula in 1848, which reads

$$U = \frac{q_1 q_2}{4\pi\varepsilon_0}\, \frac{1}{r} \left( 1 - \frac{\dot r^2}{2 c^2} \right),$$

with $\dot r$ being the radial velocity. Weber also carried out numerous experiments and documented the state of knowledge at this time in his substantial work.
Weber electrodynamics and Gauss's hypothesis fell gradually into oblivion after the introduction of the displacement current around 1870, since the full set of Maxwell equations made it possible to describe electromagnetic waves for the first time.
From around 1880, experiments such as the Michelson-Morley experiment showed that electromagnetic waves propagate at the speed of light regardless of the state of motion of the transmitter or receiver in a vacuum, which is not consistent with the predictions of Maxwell's equations, since these describe wave propagation in a medium. To overcome this problem, the Lorentz transformation was developed. As a result, Gauss's hypothesis that the electric force depends on the relative velocity was added back in a modified form.
Mathematical description
Weber force
In Weber electrodynamics, the electromagnetic force that a point charge $q_1$ with trajectory $\mathbf{r}_1(t)$ exerts on another point charge $q_2$ with trajectory $\mathbf{r}_2(t)$ at time $t$ is given by the equation

$$\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0}\, \frac{\hat{\mathbf{r}}}{r^2} \left( 1 - \frac{\dot r^2}{2 c^2} + \frac{r \ddot r}{c^2} \right).$$

Here, $\mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1$ is the displacement of $q_2$ relative to $q_1$ and $r = |\mathbf{r}|$ is the distance. Note that

$$\dot r = \hat{\mathbf{r}} \cdot \mathbf{v}$$

is the radial velocity and

$$\ddot r = \frac{\mathbf{v}\cdot\mathbf{v} - \dot r^2 + \mathbf{r}\cdot\mathbf{a}}{r}$$

is the radial acceleration, with $\mathbf{v}$ and $\mathbf{a}$ the relative velocity and relative acceleration. If one substitutes this into the Weber force, one obtains the alternative representation

$$\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon_0}\, \frac{\hat{\mathbf{r}}}{r^2} \left( 1 + \frac{\mathbf{v}\cdot\mathbf{v}}{c^2} - \frac{3}{2}\, \frac{(\hat{\mathbf{r}}\cdot\mathbf{v})^2}{c^2} + \frac{\mathbf{r}\cdot\mathbf{a}}{c^2} \right).$$

For $\mathbf{a} = \mathbf{0}$, one obtains the force law postulated by Gauss in 1835.
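A direct numerical transcription of the Weber force (a sketch; the function and test values are ours, not from the source) shows how the velocity- and acceleration-dependent factors modify the Coulomb force:

```python
# Magnitude of the central Weber force between two charges, using the form
# F = (q1*q2 / (4*pi*eps0*r^2)) * (1 - rdot^2/(2 c^2) + r*rddot/c^2),
# where rdot and rddot are the radial velocity and radial acceleration.
import math

EPS0, C = 8.8541878128e-12, 2.99792458e8

def weber_force(q1, q2, r, rdot=0.0, rddot=0.0):
    coulomb = q1 * q2 / (4 * math.pi * EPS0 * r**2)
    return coulomb * (1 - rdot**2 / (2 * C**2) + r * rddot / C**2)

# For rdot = rddot = 0 the ordinary Coulomb force is recovered:
e = 1.602176634e-19
print(weber_force(e, -e, 1e-10))                 # static case
print(weber_force(e, -e, 1e-10, rdot=0.01 * C))  # small radial velocity
```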
Link between potential energy and force
That Weber's potential energy is compatible with the force formula can be shown by means of the expression for the radial acceleration given above and the equation

$$\frac{\mathrm{d}T}{\mathrm{d}t} = F\,\dot r,$$

which follows directly from Newton's laws of motion.
Conservation of energy, momentum and angular momentum
In Weber electrodynamics, energy, momentum and angular momentum are conserved quantities. The conservation of momentum results from the property of the Weber force to comply with Newton's third law: If one exchanges source and receiver of the force, only the sign of the force is altered. The conservation of angular momentum is a consequence of the fact that the Weber force is a central force.
The conservation of energy in an isolated system consisting of only two particles is easy to demonstrate. The relation $\frac{\mathrm{d}T}{\mathrm{d}t} = F\,\dot r$ for the total kinetic energy $T$ follows from Newton's laws of motion, as noted above. The derivative of the Weber potential with respect to time is

$$\frac{\mathrm{d}U}{\mathrm{d}t} = \frac{\partial U}{\partial r}\,\dot r + \frac{\partial U}{\partial \dot r}\,\ddot r = -\frac{q_1 q_2}{4\pi\varepsilon_0}\,\frac{\dot r}{r^2}\left(1 - \frac{\dot r^2}{2 c^2} + \frac{r \ddot r}{c^2}\right) = -F\,\dot r.$$

A comparison of the two equations shows that $\mathrm{d}U/\mathrm{d}t$ equals $-\mathrm{d}T/\mathrm{d}t$: except for the sign, the right-hand side corresponds to the time derivative of the kinetic energy. This means that every change of the potential energy is compensated exactly by a change of the kinetic energy. Consequently, the total energy, i.e. the sum of potential energy and kinetic energy, must be a conserved quantity.
Comparison with Maxwell electrodynamics
Lorentz force
Maxwell electrodynamics and Weber electrodynamics are equivalent for direct currents and non-relativistic speeds, since direct currents can only flow in closed conductor loops. As Maxwell already demonstrated around 150 years ago, under these conditions the Ampere force law can be represented in several variations.
Maxwell's electrodynamics follows a two-stage approach, firstly by assigning a magnetic field $\mathbf{B}$ to each current element and secondly by defining that the force on a test charge $q$ moving with velocity $\mathbf{v}$ can be calculated using the expression $\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}$. In Maxwell's time, the velocity $\mathbf{v}$ was interpreted as the velocity of the test charge relative to the medium in which the magnetic field propagates. In Maxwell's electrodynamics, the Lorentz force is a physical law that cannot be traced back to a cause or mechanism.
Weber electrodynamics, on the other hand, does not define a magnetic field or a Lorentz force, but interprets the force of a current on a test charge by postulating that a current-carrying conductor contains negative and positive point charges that move at slightly different relative velocities with respect to the test charge. This in turn produces slight deformations of the force so that, depending on the speed of the test charge, residual forces remain. In sum, these correspond exactly to the Lorentz force.
This means that Weber electrodynamics explains the Lorentz force by means of the principle of relativity, albeit only for relative velocities that are much smaller than the speed of light. Gauss's hypothesis of 1835 therefore already represents an early interpretation of magnetism as a relativistic effect. This interpretation is not included in Maxwell's electrodynamics.
Electromagnetic waves
For alternating currents and point charges, the different representations of Ampere's force law are not equivalent. Maxwell was familiar with Weber's electrodynamics and mentioned it positively. Nevertheless, he decided to build his theory on the Biot-Savart law by generalizing it to cases where the conductor loops contain discontinuities. The significance of the displacement current becomes clear by studying the field of the electromagnetic force that an accelerated electron would generate on a resting test charge. The figures show the field of an electron that is accelerated to 75 percent of the speed of light within 3 nanoseconds.
In the case of the Weber force, it can be recognized that the initially radial field becomes flattened in the direction of motion. This represents an effect that is presently associated with the Lorentz contraction. Something similar can also be seen in the field calculated by means of Maxwell's equations. In addition, however, a wave front can be recognized here. It is also noticeable that in the region of the wave front the force is no longer a central force. This effect is known as bremsstrahlung.
Electromagnetic wave phenomena are therefore not included in Weber electrodynamics. For this reason, Weber's electrodynamics is only applicable in applications in which all involved charges move slowly and uniformly.
Newton's third law in Maxwell and Weber electrodynamics
In Maxwell electrodynamics, Newton's third law does not hold for particles. Instead, particles exert forces on electromagnetic fields, and fields exert forces on particles, but particles do not directly exert forces on other particles. Therefore, two nearby particles do not always experience equal and opposite forces. Related to this, Maxwell electrodynamics predicts that the laws of conservation of momentum and conservation of angular momentum are valid only if the momentum of particles and the momentum of surrounding electromagnetic fields are taken into account. The total momentum of all particles is not necessarily conserved, because the particles may transfer some of their momentum to electromagnetic fields or vice versa. The well-known phenomenon of radiation pressure proves that electromagnetic waves are indeed able to "push" on matter. See Maxwell stress tensor and Poynting vector for further details.
The Weber force law is quite different: All particles, regardless of size and mass, will exactly follow Newton's third law. Therefore, Weber electrodynamics, unlike Maxwell electrodynamics, has conservation of particle momentum and conservation of particle angular momentum.
Potential energy for point charges in Maxwell electrodynamics
In Maxwell's equations the force on a charge from nearby charges can be calculated by combining Jefimenko's equations with the Lorentz force law. The corresponding potential energy is approximately:

$$U \approx \frac{q_1 q_2}{4\pi\varepsilon_0 r} \left( 1 - \frac{\mathbf{v}_1\cdot\mathbf{v}_2 + (\hat{\mathbf{r}}\cdot\mathbf{v}_1)(\hat{\mathbf{r}}\cdot\mathbf{v}_2)}{2 c^2} \right),$$

where $\mathbf{v}_1$ and $\mathbf{v}_2$ are the velocities of $q_1$ and $q_2$, respectively, and where relativistic and retardation effects are omitted for simplicity; see Darwin Lagrangian.
Using these expressions, the regular form of Ampère's law and Faraday's law can be derived. Importantly, Weber electrodynamics does not predict an expression like the Biot–Savart law and testing differences between Ampere's law and the Biot–Savart law is one way to test Weber electrodynamics.
Experimental tests
Limitations
According to present knowledge, Weber electrodynamics is an incomplete theory. The expression of the potential energy () suggests that it is a first part of a Taylor series, i.e. an approximation that is only sufficiently correct for small velocities and very low accelerations. Problematic, however, is that Weber electrodynamics and Maxwell's electrodynamics are not equivalent even under these circumstances.
Since Weber electrodynamics is an approximation that is only valid for low velocities and accelerations, an experimental comparison with Maxwell's electrodynamics is only reasonable if these conditions and requirements are satisfied. In many experiments that disprove Weber electrodynamics, these conditions are not met. Interestingly, experiments that respect the limitations of Weber electrodynamics often show a better agreement of Weber electrodynamics with the measurement results than Maxwell's electrodynamics.
Experiments that do not support Weber electrodynamics
Velocity-dependent tests
The velocity-dependent term in the Weber force could cause a gas escaping from a container to become electrically charged. However, because the electrons used to set these limits are Coulomb bound, renormalization effects may cancel the velocity-dependent corrections. Other searches have spun current-carrying solenoids, observed metals as they cooled, and used superconductors to obtain a large drift velocity. None of these searches have observed any discrepancy from Coulomb's law. Observing the charge of particle beams provides weaker bounds, but tests the velocity-dependent corrections to Maxwell's equations for particles with higher velocities.
Acceleration-dependent tests
Hermann von Helmholtz observed that Weber's electrodynamics predicts that charges in certain configurations can behave as if they had negative inertial mass. Some scientists have, however, disputed Helmholtz's argument. This prediction can be tested by measuring the oscillation frequency of a neon lamp inside a spherical conductor biased to a high voltage. No significant deviations from Maxwell's theory have been observed.
Relation to quantum electrodynamics
Quantum electrodynamics (QED) is perhaps the most stringently tested theory in physics, with highly nontrivial predictions verified to an accuracy better than 10 parts per billion: See precision tests of QED. Since Maxwell's equations can be derived as the classical limit of the equations of QED, it follows that if QED is correct (as is widely believed by mainstream physicists), then Maxwell's equations and the Lorentz force law are correct too.
References
Further reading
André Koch Torres Assis: Weber's electrodynamics. Kluwer Acad. Publ., Dordrecht 1994, .
Electrodynamics | Weber electrodynamics | [
"Mathematics"
] | 2,591 | [
"Electrodynamics",
"Dynamical systems"
] |
35,658,939 | https://en.wikipedia.org/wiki/Verification%20and%20validation%20of%20computer%20simulation%20models | Verification and validation of computer simulation models is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model. "Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are "correct". This concern is addressed through verification and validation of the simulation model.
Simulation models are approximate imitations of real-world systems and they never exactly imitate the real-world system. Due to that, a model should be verified and validated to the degree needed for the model's intended purpose or application.
The verification and validation of a simulation model starts after functional specifications have been documented and initial model development has been completed. Verification and validation is an iterative process that takes place throughout the development of a model.
Verification
In the context of computer simulation, verification of a model is the process of confirming that it is correctly implemented with respect to the conceptual model (it matches specifications and assumptions deemed acceptable for the given purpose of application).
During verification the model is tested to find and fix errors in the implementation of the model.
Various processes and techniques are used to assure the model matches specifications and assumptions with respect to the model concept.
The objective of model verification is to ensure that the implementation of the model is correct.
There are many techniques that can be utilized to verify a model.
These include, but are not limited to, having the model checked by an expert, making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.
Many software engineering techniques used for software verification are applicable to simulation model verification.
Validation
Validation checks the accuracy of the model's representation of the real system. Model validation is defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model". A model should be built for a specific purpose or set of objectives and its validity determined for that purpose.
There are many approaches that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests. One approach that is commonly used is to have the model builders determine validity of the model through a series of tests.
Naylor and Finger [1967] formulated a three-step approach to model validation that has been widely followed:
Step 1. Build a model that has high face validity.
Step 2. Validate model assumptions.
Step 3. Compare the model input-output transformations to corresponding input-output transformations for the real system.
Face validity
A model that has face validity appears to be a reasonable imitation of a real-world system to people who are knowledgeable of the real world system. Face validity is tested by having users and people knowledgeable with the system examine model output for reasonableness and in the process identify deficiencies. An added advantage of having the users involved in validation is that the model's credibility to the users and the user's confidence in the model increases. Sensitivity to model inputs can also be used to judge face validity. For example, if a simulation of a fast food restaurant drive through was run twice with customer arrival rates of 20 per hour and 40 per hour then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.
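The drive-through example can be turned into a concrete face-validity check. The sketch below is illustrative only (a single-server queue with exponential arrivals and service, rates per hour made up); it confirms that the mean wait rises with the arrival rate, as a knowledgeable user would expect.

```python
import random

# Face-validity sketch for a single-server drive-through: the mean wait
# should rise with the arrival rate. Exponential arrivals and service
# times (rates per hour) are assumptions made for this illustration.
def mean_wait_minutes(arrivals_per_hr, services_per_hr=60,
                      n_customers=50_000, seed=1):
    rng = random.Random(seed)
    t_arrive = server_free = total_wait = 0.0
    for _ in range(n_customers):
        t_arrive += rng.expovariate(arrivals_per_hr)   # next arrival (hours)
        start = max(t_arrive, server_free)             # wait if server busy
        total_wait += start - t_arrive
        server_free = start + rng.expovariate(services_per_hr)
    return 60.0 * total_wait / n_customers

for rate in (20, 40):
    print(f"{rate}/hr arrivals: {mean_wait_minutes(rate):.2f} min mean wait")
```

If doubling the arrival rate failed to increase the simulated wait, face validity would be in doubt and the model logic would warrant inspection.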
Validation of model assumptions
Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions. We can also consider simplification assumptions, which are those we use to simplify reality.
Structural assumptions
Assumptions made about how the system operates and how it is physically arranged are structural assumptions. For example, how many servers are there in a fast food drive-through lane, and if there is more than one, how are they utilized? Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while the other prepares and serves the order? Many structural problems in the model come from poor or incorrect assumptions. If possible, the workings of the actual system should be closely observed to understand how it operates. The system's structure and operation should also be verified with users of the actual system.
Data assumptions
There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model. Lack of appropriate data is often the reason attempts to validate a model fail. Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data. The assumed statistical model should be tested using goodness of fit tests and other techniques. Examples of goodness of fit tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data should be checked.
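As an illustration of such a goodness-of-fit check (a sketch with synthetic data; the rates and sample size are made up):

```python
import numpy as np
from scipy import stats

# Sketch of a data-assumption check: does a sample of recorded inter-arrival
# times look exponential, as the model assumes? The data below are synthetic
# stand-ins for field observations.
rng = np.random.default_rng(0)
interarrivals = rng.exponential(scale=3.0, size=200)   # minutes (synthetic)

# Kolmogorov-Smirnov test against an exponential fitted to the sample.
# (Strictly, fitting and testing on the same data makes the p-value
# approximate; a Lilliefors-corrected test would be more rigorous.)
loc, scale = stats.expon.fit(interarrivals, floc=0)
stat, p = stats.kstest(interarrivals, "expon", args=(loc, scale))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A small p (e.g. < 0.05) would argue against the assumed distribution.
```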
Simplification assumptions
These are assumptions that we know are not true, but are needed to simplify the problem we want to solve. The use of such assumptions must be restricted to ensure that the model remains correct enough to serve as an answer to the problem we want to solve.
Validating input-output transformations
The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions. Data recorded while observing the system must be available in order to perform this test. The model output that is of primary interest should be used as the measure of performance. For example, if system under consideration is a fast food drive through where input to model is customer arrival time and the output measure of performance is average customer time in line, then the actual arrival time and time spent in line for customers at the drive through would be recorded. The model would be run with the actual arrival times and the model average time in line would be compared with the actual average time spent in line using one or more tests.
Hypothesis testing
Statistical hypothesis testing using the t-test can be used as a basis to accept the model as valid or reject it as invalid.
The hypothesis to be tested is
H0 the model measure of performance = the system measure of performance
versus
H1 the model measure of performance ≠ the system measure of performance.
The test is conducted for a given sample size n and level of significance α. To perform the test, a number n of statistically independent runs of the model are conducted and an average or expected value, E(Y), for the variable of interest is produced. Then the test statistic

$$t_0 = \frac{E(Y) - \mu_0}{S/\sqrt{n}}$$

is computed for the given α, n, E(Y) and the observed value for the system μ0, with S the sample standard deviation, and the critical value for α and n−1 degrees of freedom,

$$t_{\alpha/2,\,n-1},$$

is calculated. If

$$|t_0| > t_{\alpha/2,\,n-1},$$

reject H0; the model needs adjustment.
There are two types of error that can occur using hypothesis testing: rejecting a valid model, called type I error or "model builder's risk", and accepting an invalid model, called type II error, β, or "model user's risk". The level of significance α is equal to the probability of type I error. If α is small then rejecting the null hypothesis is a strong conclusion. For example, if α = 0.05 and the null hypothesis is rejected, there is only a 0.05 probability of rejecting a model that is valid. Decreasing the probability of a type II error is very important. The probability of correctly detecting an invalid model is 1 − β. The probability of a type II error depends on the sample size and the actual difference between the sample value and the observed value. Increasing the sample size decreases the risk of a type II error.
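A minimal sketch of this validation t-test, using illustrative numbers (the model outputs and system mean below are made up):

```python
import numpy as np
from scipy import stats

# Sketch of the validation t-test: n independent model runs versus the
# observed system mean mu0. All values are illustrative.
model_runs = np.array([4.2, 3.9, 4.5, 4.1, 4.4, 4.0, 4.3, 4.6])  # e.g. minutes
mu0, alpha = 4.9, 0.05                 # system mean, level of significance

t0, p = stats.ttest_1samp(model_runs, mu0)   # two-sided one-sample t-test
t_crit = stats.t.ppf(1 - alpha / 2, df=len(model_runs) - 1)

print(f"t0 = {t0:.2f}, critical value = {t_crit:.2f}, p = {p:.3f}")
if abs(t0) > t_crit:
    print("Reject H0: the model needs adjustment.")
else:
    print("Fail to reject H0: no detected difference at this sample size.")
```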
Model accuracy as a range
A statistical technique where the amount of model accuracy is specified as a range has recently been developed. The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy. A requirement is that both the system data and model data be approximately Normally Independent and Identically Distributed (NIID). The t-test statistic is used in this technique. If the mean of the model is μm and the mean of the system is μs, then the difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L = the lower limit for accuracy and U = the upper limit for accuracy. Then

H0: L ≤ D ≤ U

versus

H1: D < L or D > U

is to be tested.
The operating characteristic (OC) curve is the probability that the null hypothesis is accepted when it is true. The OC curve characterizes the probabilities of both type I and II errors. Risk curves for model builder's risk and model user's can be developed from the OC curves. Comparing curves with fixed sample size tradeoffs between model builder's risk and model user's risk can be seen easily in the risk curves. If model builder's risk, model user's risk, and the upper and lower limits for the range of accuracy are all specified then the sample size needed can be calculated.
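As a sketch of the sample-size calculation implied here (the risk levels and smallest important difference are illustrative assumptions, and the one-sample t-test power routine stands in for the OC-curve reading):

```python
from statsmodels.stats.power import TTestPower

# Sketch of the sample-size trade-off between the two risks: choose the
# model builder's risk (alpha), the model user's risk (beta) for the
# smallest difference worth detecting, and solve for the runs required.
alpha, beta = 0.05, 0.10       # type I and type II risks
effect_size = 0.5              # smallest important difference, in SD units

n = TTestPower().solve_power(effect_size=effect_size,
                             alpha=alpha, power=1 - beta)
print(f"about {n:.0f} independent model runs required")
```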
Confidence intervals
Confidence intervals can be used to evaluate if a model is "close enough" to a system for some variable of interest. The difference between the known model value, μ0, and the system value, μ, is checked to see if it is less than a value small enough that the model is valid with respect to that variable of interest. The value is denoted by the symbol ε. To perform the test, a number, n, of statistically independent runs of the model are conducted and a mean or expected value, E(Y) or μ, for the simulation output variable of interest Y, with a standard deviation S, is produced. A confidence level is selected, 100(1−α)%. An interval, [a, b], is constructed by

$$[a, b] = \left[ E(Y) - t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}},\; E(Y) + t_{\alpha/2,\,n-1}\,\frac{S}{\sqrt{n}} \right],$$

where $t_{\alpha/2,\,n-1}$ is the critical value from the t-distribution for the given level of significance and n−1 degrees of freedom.
If |a-μ0| > ε and |b-μ0| > ε then the model needs to be calibrated since in both cases the difference is larger than acceptable.
If |a-μ0| < ε and |b-μ0| < ε then the model is acceptable as in both cases the error is close enough.
If |a-μ0| < ε and |b-μ0| > ε or vice versa then additional runs of the model are needed to shrink the interval.
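The three decision rules can be transcribed directly (a sketch; the outputs, tolerance ε and reference value μ0 below are made up):

```python
import numpy as np
from scipy import stats

# Sketch of the interval-based check: build a 100(1-alpha)% confidence
# interval [a, b] for the model mean and compare both endpoints with the
# reference value mu0 against the tolerance eps. Values are illustrative.
y = np.array([10.1, 9.8, 10.4, 10.0, 10.3, 9.9, 10.2])  # model outputs
mu0, eps, alpha = 10.0, 0.5, 0.05

mean, s, n = y.mean(), y.std(ddof=1), len(y)
half = stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)
a, b = mean - half, mean + half

if abs(a - mu0) > eps and abs(b - mu0) > eps:
    print("calibrate the model")    # both endpoints too far from mu0
elif abs(a - mu0) < eps and abs(b - mu0) < eps:
    print("model acceptable")       # whole interval within tolerance
else:
    print("run more replications")  # interval too wide to decide
```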
Graphical comparisons
If statistical assumptions cannot be satisfied or there is insufficient data for the system, a graphical comparison of model outputs to system outputs can be used to make subjective decisions; however, other objective tests are preferable.
ASME Standards
Documents and standards involving verification and validation of computational modeling and simulation are developed by the American Society of Mechanical Engineers (ASME) Verification and Validation (V&V) Committee. ASME V&V 10 provides guidance in assessing and increasing the credibility of computational solid mechanics models through the processes of verification, validation, and uncertainty quantification. ASME V&V 10.1 provides a detailed example to illustrate the concepts described in ASME V&V 10. ASME V&V 20 provides a detailed methodology for validating computational simulations as applied to fluid dynamics and heat transfer. ASME V&V 40 provides a framework for establishing model credibility requirements for computational modeling, and presents examples specific in the medical device industry.
See also
Verification and validation
Software verification and validation
References
Formal methods | Verification and validation of computer simulation models | [
"Engineering"
] | 2,371 | [
"Software engineering",
"Formal methods"
] |
35,662,074 | https://en.wikipedia.org/wiki/Downstream%20promoter%20element | In molecular biology, a downstream promoter element (DPE) is a core promoter element. Like all core promoters, the DPE plays an important role in the initiation of gene transcription by RNA polymerase II. The DPE was first described by T. W. Burke and James T. Kadonaga in Drosophila melanogaster at the University of California, San Diego in 1996. It is also present in other species including humans, but not Saccharomyces cerevisiae.
Together with the initiator motif (Inr), another core promoter element, the DPE is recognized by the transcription factor II D (TFIID) subunits TAF6 and TAF9. It has been shown that DPE-dependent basal transcription depends highly on the Inr (and vice versa) and on correct spacing between the two elements.
The DPE consensus sequence was originally thought to be RGWCGTG; however, more recent studies have suggested it to be the similar but more general sequence RGWYV(T). It is located about 28–33 nucleotides downstream of the transcription start site.
Occurrence
It has been shown that the DPE is about as widely used as the TATA box in D. melanogaster. While a DPE was found in many promoters that do not contain a TATA box, there are also promoters that contain both a TATA box and a DPE.
The promoters of nearly all Hox genes of D. melanogaster, with the exception of the evolutionarily most recent genes, Ubx and Abd-A, contain a DPE motif and lack a TATA box. Drosophila promoters containing the DPE sequence include Abd-B, Antp P2, bride of sevenless, brown, caudal, E74, E75, engrailed, Gsα, labial, nmMHC, ras2, singed, stellate, and white. In organisms other than D. melanogaster, the promoter of the human and mouse IRF1 gene has been found to contain a DPE consensus sequence at the appropriate distance from the transcription start site. This promoter, too, does not contain a TATA box.
The DPE has also been reported to play a role in the primitive eukaryote Entamoeba histolytica.
Notes
References
DNA
Regulatory sequences | Downstream promoter element | [
"Chemistry"
] | 487 | [
"Gene expression",
"Regulatory sequences"
] |
35,666,184 | https://en.wikipedia.org/wiki/Muramyl%20ligase | The bacterial cell wall provides strength and rigidity to counteract internal osmotic pressure, and protection against the environment. The peptidoglycan layer gives the cell wall its strength, and helps maintain the overall shape of the cell. The basic peptidoglycan structure of both Gram-positive and Gram-negative bacteria comprises a sheet of glycan chains connected by short cross-linking polypeptides. Biosynthesis of peptidoglycan is a multi-step (11-12 steps) process comprising three main stages:
formation of UDP-N-acetylmuramic acid (UDPMurNAc) from N-acetylglucosamine (GlcNAc).
addition of a short polypeptide chain to the UDPMurNAc.
addition of a second GlcNAc to the disaccharide-pentapeptide building block and transport of this unit through the cytoplasmic membrane and incorporation into the growing peptidoglycan layer.
Stage two involves four key Mur ligase enzymes: MurC, MurD, MurE (EC 6.3.2.13) and MurF (EC 6.3.2.10). These four Mur ligases are responsible for the successive additions of L-alanine, D-glutamate, meso-diaminopimelate or L-lysine, and D-alanyl-D-alanine to UDP-N-acetylmuramic acid. All four Mur ligases are topologically similar to one another, even though they display low sequence identity. They are each composed of three domains: an N-terminal Rossmann-fold domain responsible for binding the UDPMurNAc substrate; a central domain (similar to ATP-binding domains of several ATPases and GTPases); and a C-terminal domain (similar to the dihydrofolate reductase fold) that appears to be associated with binding the incoming amino acid. The conserved sequence motifs found in the four Mur enzymes also map to other members of the Mur ligase family, including folylpolyglutamate synthetase, cyanophycin synthetase and the capB enzyme from Bacillales.
This family includes UDP-N-acetylmuramate-L-alanine ligase (MurC), UDP-N-acetylmuramoylalanyl-D-glutamate-2,6-diaminopimelate ligase (MurE), and UDP-N-acetylmuramoyl-tripeptide-D-alanyl-D-alanine ligase (MurF). This entry also includes folylpolyglutamate synthase that transfers glutamate to folylpolyglutamate and cyanophycin synthetase that catalyses the biosynthesis of the cyanobacterial reserve material multi-L-arginyl-poly-L-aspartate (cyanophycin).
References
Protein domains
EC 6.3.2
Enzymes of known structure | Muramyl ligase | [
"Biology"
] | 644 | [
"Protein domains",
"Protein classification"
] |
35,666,519 | https://en.wikipedia.org/wiki/Clathrin%20adaptor%20protein | Clathrin adaptor proteins, also known as adaptins, are vesicular transport adaptor proteins associated with clathrin. The association between adaptins and clathrin is important for vesicular cargo selection and transport. Clathrin coats contain both clathrin (which acts as a scaffold) and adaptor complexes that link clathrin to receptors in coated vesicles. Clathrin-associated protein complexes are believed to interact with the cytoplasmic tails of membrane proteins, leading to their selection and concentration. Adaptor proteins are therefore responsible for the recruitment of cargo molecules into growing clathrin-coated pits. The two major types of clathrin adaptor complexes are the heterotetrameric vesicular transport adaptor proteins (AP1-5), and the monomeric GGA (Golgi-localising, Gamma-adaptin ear homology, ARF-binding proteins) adaptors. Adaptins are distantly related to the other main type of vesicular transport proteins, the coatomer subunits, sharing between 16% and 26% of their amino acid sequence.
Adaptor protein (AP) complexes are found in coated vesicles and clathrin-coated pits. AP complexes connect cargo proteins and lipids to clathrin at vesicle budding sites, as well as binding accessory proteins that regulate coat assembly and disassembly (such as AP180, epsins and auxilin). There are different AP complexes in mammals. AP1 is responsible for the transport of lysosomal hydrolases between the trans-Golgi network and endosomes. The AP2 adaptor complex associates with the plasma membrane and is responsible for endocytosis. AP3 is responsible for protein trafficking to lysosomes and other related organelles. AP4 is less well characterised. AP complexes are heterotetramers composed of two large subunits (adaptins), a medium subunit (mu) and a small subunit (sigma). For example, in AP1 these subunits are gamma-1-adaptin, beta-1-adaptin, mu-1 and sigma-1, while in AP2 they are alpha-adaptin, beta-2-adaptin, mu-2 and sigma-2. Each subunit has a specific function. Adaptins recognise and bind to clathrin through their hinge region (clathrin box), and recruit accessory proteins that modulate AP function through their C-terminal ear (appendage) domains. Mu recognises tyrosine-based sorting signals within the cytoplasmic domains of transmembrane cargo proteins. One function of clathrin and AP2 complex-mediated endocytosis is to regulate the number of GABAA receptors available at the cell surface.
See also
List of adaptins
References
External links
Peripheral membrane proteins
Protein families | Clathrin adaptor protein | [
"Biology"
] | 601 | [
"Protein families",
"Protein classification"
] |
35,669,023 | https://en.wikipedia.org/wiki/Tensors%20in%20curvilinear%20coordinates | Curvilinear coordinates can be formulated in tensor calculus, with important applications in physics and engineering, particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mechanics.
Vector and tensor algebra in three-dimensional curvilinear coordinates
Elementary vector and tensor algebra in curvilinear coordinates is used in some of the older scientific literature in mechanics and physics and can be indispensable to understanding work from the early and mid 1900s, for example the text by Green and Zerna. Some useful relations in the algebra of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Naghdi, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet.
Coordinate transformations
Consider two coordinate systems with coordinate variables and , which we shall represent in short as just and respectively, and always assume our index runs from 1 through 3. We shall assume that these coordinate systems are embedded in three-dimensional Euclidean space. Coordinates and may be used to explain each other, because as we move along the coordinate line in one coordinate system we can use the other to describe our position. In this way, coordinates and are functions of each other
for
which can be written as
for
These three equations together are also called a coordinate transformation from to . Let us denote this transformation by . We will therefore represent the transformation from the coordinate system with coordinate variables to the coordinate system with coordinates as:
Similarly we can represent as a function of as follows:
for
and we can write the three equations more compactly as
for
These three equations together are also called a coordinate transformation from to . Let us denote this transformation by . We will represent the transformation from the coordinate system with coordinate variables to the coordinate system with coordinates as:
If the transformation is bijective then we call the image of the transformation, namely , a set of admissible coordinates for . If is linear the coordinate system will be called an affine coordinate system, otherwise is called a curvilinear coordinate system.
The Jacobian
As we now see that the Coordinates and are functions of each other, we can take the derivative of the coordinate variable with respect to the coordinate variable .
Consider
for , these derivatives can be arranged in a matrix, say , in which is the element in the -th row and -th column
The resultant matrix is called the Jacobian matrix.
Vectors in curvilinear coordinates
Let (b1, b2, b3) be an arbitrary basis for three-dimensional Euclidean space. In general, the basis vectors are neither unit vectors nor mutually orthogonal. However, they are required to be linearly independent. Then a vector v can be expressed as
The components vk are the contravariant components of the vector v.
The reciprocal basis (b1, b2, b3) is defined by the relation
where δi j is the Kronecker delta.
The vector v can also be expressed in terms of the reciprocal basis:
The components vk are the covariant components of the vector .
Second-order tensors in curvilinear coordinates
A second-order tensor can be expressed as
The components Sij are called the contravariant components, Si j the mixed right-covariant components, Si j the mixed left-covariant components, and Sij the covariant components of the second-order tensor.
Metric tensor and relations between components
The quantities gij, gij are defined as
From the above equations we have
The components of a vector are related by
Also,
The components of the second-order tensor are related by
The alternating tensor
In an orthonormal right-handed basis, the third-order alternating tensor is defined as
In a general curvilinear basis the same tensor may be expressed as
It can be shown that
Now,
Hence,
Similarly, we can show that
Vector operations
Identity map
The identity map I defined by can be shown to be:
Scalar (dot) product
The scalar product of two vectors in curvilinear coordinates is
Vector (cross) product
The cross product of two vectors is given by:
where εijk is the permutation symbol and ei is a Cartesian basis vector. In curvilinear coordinates, the equivalent expression is:
where is the third-order alternating tensor. The cross product of two vectors is given by:
where εijk is the permutation symbol and is a Cartesian basis vector. Therefore,
and
Hence,
Returning to the vector product and using the relations:
gives us:
Tensor operations
Identity map
The identity map defined by can be shown to be
Action of a second-order tensor on a vector
The action can be expressed in curvilinear coordinates as
Inner product of two second-order tensors
The inner product of two second-order tensors can be expressed in curvilinear coordinates as
Alternatively,
Determinant of a second-order tensor
If is a second-order tensor, then the determinant is defined by the relation
where are arbitrary vectors and
Relations between curvilinear and Cartesian basis vectors
Let (e1, e2, e3) be the usual Cartesian basis vectors for the Euclidean space of interest and let
where Fi is a second-order transformation tensor that maps ei to bi. Then,
From this relation we can show that
Let be the Jacobian of the transformation. Then, from the definition of the determinant,
Since
we have
A number of interesting results can be derived using the above relations.
First, consider
Then
Similarly, we can show that
Therefore, using the fact that ,
Another interesting relation is derived below. Recall that
where A is an as-yet-undetermined constant. Then
This observation leads to the relations
In index notation,
where is the usual permutation symbol.
We have not identified an explicit expression for the transformation tensor F because an alternative form of the mapping between curvilinear and Cartesian bases is more useful. Assuming a sufficient degree of smoothness in the mapping (and a bit of abuse of notation), we have
Similarly,
From these results we have
and
Vector and tensor calculus in three-dimensional curvilinear coordinates
Simmonds, in his book on tensor analysis, quotes Albert Einstein saying
Vector and tensor calculus in general curvilinear coordinates is used in tensor analysis on four-dimensional curvilinear manifolds in general relativity, in the mechanics of curved shells, in examining the invariance properties of Maxwell's equations which has been of interest in metamaterials and in many other fields.
Some useful relations in the calculus of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, Simmonds, Green and Zerna, Basar and Weichert, and Ciarlet.
Basic definitions
Let the position of a point in space be characterized by three coordinate variables .
The coordinate curve q1 represents a curve on which q2, q3 are constant. Let x be the position vector of the point relative to some origin. Then, assuming that such a mapping and its inverse exist and are continuous, we can write
The fields ψi(x) are called the curvilinear coordinate functions of the curvilinear coordinate system ψ(x) = φ−1(x).
The qi coordinate curves are defined by the one-parameter family of functions given by
with qj, qk fixed.
Tangent vector to coordinate curves
The tangent vector to the curve xi at the point xi(α) (or to the coordinate curve qi at the point x) is
Gradient
Scalar field
Let f(x) be a scalar field in space. Then
The gradient of the field f is defined by
where c is an arbitrary constant vector. If we define the components ci of c are such that
then
If we set , then since , we have
which provides a means of extracting the contravariant component of a vector c.
If bi is the covariant (or natural) basis at a point, and if bi is the contravariant (or reciprocal) basis at that point, then
A brief rationale for this choice of basis is given in the next section.
Vector field
A similar process can be used to arrive at the gradient of a vector field f(x). The gradient is given by
If we consider the gradient of the position vector field r(x) = x, then we can show that
The vector field bi is tangent to the qi coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis, or contravariant curvilinear basis, bi. All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point x.
Since c is arbitrary, we can write
Note that the contravariant basis vector bi is perpendicular to the surface of constant ψi and is given by
Christoffel symbols of the first kind
The Christoffel symbols of the first kind are defined as
To express Γijk in terms of gij we note that
Since bi,j = bj,i we have Γijk = Γjik. Using these to rearrange the above relations gives
Christoffel symbols of the second kind
The Christoffel symbols of the second kind are defined as
in which
This implies that
Other relations that follow are
Another particularly useful relation, which shows that the Christoffel symbol depends only on the metric tensor and its derivatives, is
Γ^k_ij = (1/2) g^kl ( ∂g_jl/∂q^i + ∂g_il/∂q^j − ∂g_ij/∂q^l ).
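This relation can be checked symbolically; the following SymPy sketch (the script, the variable names, and the choice of the cylindrical metric as a test case are illustrative) computes the Christoffel symbols of the second kind directly from a metric:

    # Sketch: Christoffel symbols from the metric via the formula above,
    # for the cylindrical metric g = diag(1, r^2, 1).
    import sympy as sp

    r, th, z = sp.symbols('r theta z', positive=True)
    q = [r, th, z]
    g = sp.diag(1, r**2, 1)
    ginv = g.inv()

    def gamma(k, i, j):
        return sp.simplify(sum(ginv[k, l] * (sp.diff(g[j, l], q[i])
                                             + sp.diff(g[i, l], q[j])
                                             - sp.diff(g[i, j], q[l])) / 2
                               for l in range(3)))

    print(gamma(0, 1, 1))   # Gamma^r_{theta theta} = -r
    print(gamma(1, 0, 1))   # Gamma^theta_{r theta} = 1/r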
Explicit expression for the gradient of a vector field
The following expressions for the gradient of a vector field in curvilinear coordinates are quite useful.
Representing a physical vector field
The vector field v can be represented as
where are the covariant components of the field, are the physical components, and (no summation)
is the normalized contravariant basis vector.
Second-order tensor field
The gradient of a second order tensor field can similarly be expressed as
Explicit expressions for the gradient
If we consider the expression for the tensor in terms of a contravariant basis, then
We may also write
Representing a physical second-order tensor field
The physical components of a second-order tensor field can be obtained by using a normalized contravariant basis, i.e.,
where the hatted basis vectors have been normalized. This implies that (again no summation)
Divergence
Vector field
The divergence of a vector field is defined as
In terms of components with respect to a curvilinear basis
An alternative equation for the divergence of a vector field is frequently used. To derive this relation recall that
Now,
Noting that, due to the symmetry of ,
we have
Recall that if [gij] is the matrix whose components are gij, then the inverse of the matrix is . The inverse of the matrix is given by
where Aij is the cofactor of the component gij. From matrix algebra we have
Hence,
Plugging this relation into the expression for the divergence gives
A little manipulation leads to the more compact form
Second-order tensor field
The divergence of a second-order tensor field is defined using
where a is an arbitrary constant vector.
In curvilinear coordinates,
Laplacian
Scalar field
The Laplacian of a scalar field φ(x) is defined as
Using the alternative expression for the divergence of a vector field gives us
Now
Therefore,
Curl of a vector field
The curl of a vector field v in covariant curvilinear coordinates can be written as
where
Orthogonal curvilinear coordinates
Assume, for the purposes of this section, that the curvilinear coordinate system is orthogonal, i.e.,
or equivalently,
where . As before, are covariant basis vectors and bi, bj are contravariant basis vectors. Also, let (e1, e2, e3) be a background, fixed, Cartesian basis. A list of orthogonal curvilinear coordinates is given below.
Metric tensor in orthogonal curvilinear coordinates
Let r(x) be the position vector of the point x with respect to the origin of the coordinate system. The notation can be simplified by noting that x = r(x). At each point we can construct a small line element dx. The square of the length of the line element is the scalar product dx • dx and is called the metric of the space. Recall that the space of interest is assumed to be Euclidean when we talk of curvilinear coordinates. Let us express the position vector in terms of the background, fixed, Cartesian basis, i.e.,
Using the chain rule, we can then express dx in terms of three-dimensional orthogonal curvilinear coordinates (q1, q2, q3) as
Therefore, the metric is given by
The symmetric quantity
is called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates.
Note also that
where hij are the Lamé coefficients.
If we define the scale factors, hi, using
we get a relation between the fundamental tensor and the Lamé coefficients.
Example: Polar coordinates
If we consider polar coordinates for R2, note that
(r, θ) are the curvilinear coordinates, and the Jacobian determinant of the transformation (r,θ) → (r cos θ, r sin θ) is r.
The orthogonal basis vectors are br = (cos θ, sin θ), bθ = (−r sin θ, r cos θ). The normalized basis vectors are er = (cos θ, sin θ), eθ = (−sin θ, cos θ) and the scale factors are hr = 1 and hθ= r. The fundamental tensor is g11 =1, g22 =r2, g12 = g21 =0.
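The quantities quoted in this example can be verified symbolically; the following SymPy sketch (script and names are illustrative) reproduces the Jacobian determinant, the metric components and the scale factors:

    # Sketch: polar coordinates (r, theta) -> (r cos theta, r sin theta).
    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])

    J = x.jacobian([r, th])          # columns are the covariant basis vectors
    g = sp.simplify(J.T * J)         # metric tensor g_ij = b_i . b_j

    print(sp.simplify(J.det()))      # r  (Jacobian determinant)
    print(g)                         # Matrix([[1, 0], [0, r**2]])
    print(sp.sqrt(g[0, 0]), sp.sqrt(g[1, 1]))   # scale factors h_r = 1, h_theta = r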
Line and surface integrals
If we wish to use curvilinear coordinates for vector calculus calculations, adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, we again restrict the discussion to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for n-dimensional problems, though there are some additional terms in the expressions when the coordinate system is not orthogonal.
Line integrals
Normally in the calculation of line integrals we are interested in calculating
where x(t) parametrizes C in Cartesian coordinates.
In curvilinear coordinates, the term
by the chain rule. And from the definition of the Lamé coefficients,
and thus
Now, since when , we have
and we can proceed normally.
Surface integrals
Likewise, if we are interested in a surface integral, the relevant calculation, with the parameterization of the surface in Cartesian coordinates is:
Again, in curvilinear coordinates, we have
and we make use of the definition of curvilinear coordinates again to yield
Therefore,
where is the permutation symbol.
In determinant form, the cross product in terms of curvilinear coordinates will be:
Grad, curl, div, Laplacian
In orthogonal curvilinear coordinates of 3 dimensions, where
one can express the gradient of a scalar or vector field as
For an orthogonal basis
The divergence of a vector field can then be written as
Also,
Therefore,
We can get an expression for the Laplacian in a similar manner by noting that
Then we have
The expressions for the gradient, divergence, and Laplacian can be directly extended to n-dimensions.
The curl of a vector field is given by
where εijk is the Levi-Civita symbol.
Example: Cylindrical polar coordinates
For cylindrical coordinates we have
and
where
Then the covariant and contravariant basis vectors are
where are the unit vectors in the directions.
Note that the components of the metric tensor are such that
which shows that the basis is orthogonal.
The non-zero components of the Christoffel symbol of the second kind are
Representing a physical vector field
The normalized contravariant basis vectors in cylindrical polar coordinates are
and the physical components of a vector v are
Gradient of a scalar field
The gradient of a scalar field, f(x), in cylindrical coordinates can now be computed from the general expression in curvilinear coordinates and has the form
Gradient of a vector field
Similarly, the gradient of a vector field, v(x), in cylindrical coordinates can be shown to be
Divergence of a vector field
Using the equation for the divergence of a vector field in curvilinear coordinates, the divergence in cylindrical coordinates can be shown to be
Laplacian of a scalar field
The Laplacian is more easily computed by noting that . In cylindrical polar coordinates
Hence,
Representing a physical second-order tensor field
The physical components of a second-order tensor field are those obtained when the tensor is expressed in terms of a normalized contravariant basis. In cylindrical polar coordinates these components are:
Gradient of a second-order tensor field
Using the above definitions we can show that the gradient of a second-order tensor field in cylindrical polar coordinates can be expressed as
Divergence of a second-order tensor field
The divergence of a second-order tensor field in cylindrical polar coordinates can be obtained from the expression for the gradient by collecting terms where the scalar product of the two outer vectors in the dyadic products is nonzero. Therefore,
See also
Covariance and contravariance
Basic introduction to the mathematics of curved spacetime
Orthogonal coordinates
Frenet–Serret formulas
Covariant derivative
Tensor derivative (continuum mechanics)
Curvilinear perspective
Del in cylindrical and spherical coordinates
References
Notes
Further reading
External links
Derivation of Unit Vectors in Curvilinear Coordinates
MathWorld's page on Curvilinear Coordinates
Prof. R. Brannon's E-Book on Curvilinear Coordinates
Coordinate systems
3 | Tensors in curvilinear coordinates | [
"Mathematics",
"Engineering"
] | 3,740 | [
"Tensors",
"Metric tensors",
"Coordinate systems"
] |
5,149,747 | https://en.wikipedia.org/wiki/Waste%20minimisation | Waste minimisation is a set of processes and practices intended to reduce the amount of waste produced. By reducing or eliminating the generation of harmful and persistent wastes, waste minimisation supports efforts to promote a more sustainable society. Waste minimisation involves redesigning products and processes and/or changing societal patterns of consumption and production.
The most environmentally resourceful, economically efficient, and cost effective way to manage waste often is to not have to address the problem in the first place. Managers see waste minimisation as a primary focus for most waste management strategies. Proper waste treatment and disposal can require a significant amount of time and resources; therefore, the benefits of waste minimisation can be considerable if carried out in an effective, safe and sustainable manner.
Traditional waste management focuses on processing waste after it is created, concentrating on re-use, recycling, and waste-to-energy conversion. Waste minimisation involves efforts to avoid creating the waste during manufacturing. To effectively implement waste minimisation the manager requires knowledge of the production process, cradle-to-grave analysis (the tracking of materials from their extraction to their return to earth) and details of the composition of the waste.
The main sources of waste vary from country to country. In the UK, most waste comes from the construction and demolition of buildings, followed by mining and quarrying, industry and commerce. Household waste constitutes a relatively small proportion of all waste. Industrial waste is often tied to requirements in the supply chain. For example, a company handling a product may insist that it should be shipped using particular packing because it fits downstream needs.
Proponents of waste minimisation state that manufactured products at the end of their useful life should be utilised as a resource for recycling and reuse rather than treated as waste.
Benefits
Waste minimisation can protect the environment and often turns out to have positive economic benefits. Waste minimisation can improve:
Efficient production practices – waste minimisation can achieve more output of product per unit of input of raw materials.
Economic returns – more efficient use of products means reduced costs of purchasing new materials, improving the financial performance of a company.
Public image – the environmental profile of a company is an important part of its overall reputation and waste minimisation reflects a proactive movement towards environmental protection.
Quality of products produced – innovations and technological practices can reduce waste generation and improve the quality of the inputs in the production phase.
Environmental responsibility – minimising or eliminating waste generation makes it easier to meet targets of environmental regulations, policies, and standards; the environmental impact of waste will be reduced.
Industries
In industry, using more efficient manufacturing processes and better materials generally reduces the production of waste. The application of waste minimisation techniques has led to the development of innovative and commercially successful replacement products.
Waste minimisation efforts often require investment, which is usually compensated by the savings. However, waste reduction in one part of the production process may create waste production in another part.
Overpackaging is excess packaging. Eliminating it can result in source reduction, reducing waste before it is generated by proper package design and practice. Use of minimised packaging is key to working toward sustainable packaging.
Processes
Reuse of scrap material
Scraps can be immediately re-incorporated at the beginning of the manufacturing line so that they do not become a waste product. Many industries routinely do this; for example, paper mills return any damaged rolls to the beginning of the production line, and in the manufacture of plastic items, off-cuts and scrap are re-incorporated into new products.
Improved quality control and process monitoring
Steps can be taken to ensure that the number of reject batches is kept to a minimum. This is achieved by increasing the frequency of inspection and the number of points of inspection. For example, installing automated continuous monitoring equipment can help to identify production problems at an early stage.
Waste exchanges
This is where the waste product of one process becomes the raw material for a second process. Waste exchanges represent another way of reducing waste disposal volumes for waste that cannot be eliminated.
Ship to point of use
This involves making deliveries of incoming raw materials or components direct to the point where they are assembled or used in the manufacturing process, to minimise handling and the use of protective wrappings or enclosures.
Zero waste
This is a whole systems approach that aims to eliminate waste at the source and at all points down the supply chain, with the intention of producing no waste. It is a design philosophy which emphasizes waste prevention as opposed to end of pipe waste management. Since, globally speaking, waste as such, however minimal, can never be prevented (there will always be an end-of-life even for recycled products and materials), a related goal is prevention of pollution.
Minimalism
Minimalism often refers to concepts in art and music, but a minimalist lifestyle can also make a large impact on waste management and on producing zero waste, reducing the waste that causes landfill and environmental pollution. When endless consumption is reduced to only necessary consumption, careless production in response to that demand is reduced as well. By reducing waste, a minimalist lifestyle can also contribute to climate justice. Joshua Fields Millburn and Ryan Nicodemus directed and produced a movie called Minimalism: A Documentary that showcased the idea of minimal living in the modern world.
Product design
Universal connectors
Utilizing a charging port that can be used by any phone, such as USB-C, reduces the number of excess cables that end up in waste, where they can give off toxic chemicals that harm the planet.
Reusable shopping bags
Reusable bags are a visible form of re-use, and some stores offer a "bag credit" for re-usable shopping bags, although at least one chain reversed its policy, claiming "it was just a temporary bonus". In contrast, one study suggests that a bag tax is a more effective incentive than a similar discount. (Of note, the before/after study compared a circumstance in which some stores offered a discount with one in which all stores applied the tax.) While there is a minor inconvenience involved, this may remedy itself, as reusable bags are generally more convenient for carrying groceries.
Households
This section details some waste minimisation techniques for householders.
Appropriate amounts and sizes can be chosen when purchasing goods; buying large containers of paint for a small decorating job or buying larger amounts of food than can be consumed create unnecessary waste. Also, if a pack or can is to be thrown away, any remaining contents must be removed before the container can be recycled.
Home composting, the practice of turning kitchen and garden waste into compost can be considered waste minimisation.
The resources that households use can be reduced considerably by using electricity thoughtfully (e.g. turning off lights and equipment when it is not needed) and by reducing the number of car journeys made. Individuals can reduce the amount of waste they create by buying fewer products and by buying products which last longer. Mending broken or worn items of clothing or equipment also contributes to minimising household waste. Individuals can minimise their water usage, and walk or cycle to their destination rather than using their car to save fuel and cut down emissions.
In a domestic situation, the potential for minimisation is often dictated by lifestyle. Some people may view it as wasteful to purchase new products solely to follow fashion trends when the older products are still usable. Adults working full-time have little free time, and so may have to purchase more convenient foods that require little preparation, or prefer disposable nappies if there is a baby in the family.
The amount of waste an individual produces is a small portion of all waste produced by society, and personal waste reduction can only make a small impact on overall waste volumes. Yet, influence on policy can be exerted in other areas. Increased consumer awareness of the impact and power of certain purchasing decisions allows industry and individuals to change the total resource consumption. Consumers can influence manufacturers and distributors by avoiding buying products that do not have eco-labelling, which is currently not mandatory, or choosing products that minimise the use of packaging. In the UK, PullApart combines both environmental and consumer packaging surveys, in a curbside packaging recycling classification system to minimise waste. Where reuse schemes are available, consumers can be proactive and use them.
Healthcare facilities
Healthcare establishments are massive producers of waste. The major sources of healthcare waste are: hospitals, laboratories and research centres, mortuary and autopsy centres, animal research and testing laboratories, blood banks and collection services, and nursing homes for the elderly.
Waste minimisation can offer many opportunities to these establishments to use fewer resources, be less wasteful and generate less hazardous waste. Good management and control practices among health-care facilities can have a significant effect on the reduction of waste generated each day.
Practices
There are many examples of more efficient practices that can encourage waste minimisation in healthcare establishments and research facilities.
Source reduction
Purchasing reductions which ensures the selection of supplies that are less wasteful or less hazardous.
The use of physical rather than chemical cleaning methods such as steam disinfection instead of chemical disinfection.
Preventing the unnecessary wastage of products in nursing and cleaning activities.
Management and control measures at hospital level
Centralized purchasing of hazardous chemicals.
Monitoring the flow of chemicals within the health care facility from receipt as a raw material to disposal as a hazardous waste.
The careful separation of waste matter to help minimise the quantities of hazardous waste and disposal.
Stock management of chemical and pharmaceutical products
Frequent ordering of relatively small quantities rather than large quantities at one time.
Using the oldest batch of a product first to avoid expiration dates and unnecessary waste.
Using all the contents of a container containing hazardous waste.
Checking the expiry date of all products at the time of delivery.
Culture of packaging reuse
In some countries, such as Germany, people have established a deeper culture of packaging waste reduction than in other countries. The Mach Mehrweg Pool (“Make Reuse Pool”) is an effort initially conceived by milk producers to harmonize and share reusable milk containers, which in more recent years was expanded to include reusable packaging for additional types of food, such as coffee. People have devised ways to bring back to stores containers for yoghurt, cooking oil and marmalade and for many other types of food and to refill them there.
Legal mandates
The European Union (EU) has set packaging reduction targets that require member states to reduce packaging, especially plastic packaging. Some types of single use plastic packaging, including packaging for unprocessed fresh fruits and vegetables; for foods and beverages filled and consumed in cafés and restaurants; for individual portions (for example, sugar, condiments, sauces); and for miniature packaging for toiletries; as well as shrink-wrap for suitcases in airports, would be banned effective January 1, 2030. More generally, the EU has set mandatory reduction targets for plastic packaging as follows: 5% by 2030, 10% by 2035 and 15% by 2040.
See also
Bottle cutting
Cleaner production
Durable good
Eco-action
Ecological economics
Ecological sanitation
European Week for Waste Reduction
Extended producer responsibility
Food waste
Green consumption
Green economy
Household hazardous waste
Life-cycle assessment
List of waste management acronyms
Litter
Miniwaste
Pallet crafts
Planned obsolescence
Recycling
Reusable launch vehicle
Reusable packaging
Reuse of bottles
Reuse of human excreta
Reuse
Right to repair
Source reduction
Sustainable sanitation
Upcycling
Waste hierarchy
Waste management
Zero waste
References
External links
The EU Pre-waste project website, homepage.
Waste management concepts
Industrial ecology | Waste minimisation | [
"Chemistry",
"Engineering"
] | 2,353 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
5,150,414 | https://en.wikipedia.org/wiki/Controlled%20waste | Controlled waste is waste that is subject to legislative control in either its handling or its disposal. As a legal term, controlled waste applies exclusively to the UK, but the concept is enshrined in the laws of many other countries. The types of waste covered include domestic, commercial and industrial waste. They are regulated because of their toxicity, their hazardous nature or their capability to do harm to human health or the environment either now or at some time in the future. A prime concern is the effects of biodegradation or biochemical degradation and the by-products produced.
References
See also
List of waste types
Waste management
Waste | Controlled waste | [
"Physics"
] | 125 | [
"Materials",
"Waste",
"Matter"
] |
5,155,983 | https://en.wikipedia.org/wiki/Kinetic%20inductance | Kinetic inductance is the manifestation of the inertial mass of mobile charge carriers in alternating electric fields as an equivalent series inductance. Kinetic inductance is observed in high carrier mobility conductors (e.g. superconductors) and at very high frequencies.
Explanation
A change in electromotive force (emf) will be opposed by the inertia of the charge carriers since, like all objects with mass, they prefer to be traveling at constant velocity and therefore it takes a finite time to accelerate the particle. This is similar to how a change in emf is opposed by the finite rate of change of magnetic flux in an inductor. The resulting phase lag in voltage is identical for both energy storage mechanisms, making them indistinguishable in a normal circuit.
Kinetic inductance arises naturally in the Drude model of electrical conduction when one considers not only the DC conductivity but also the finite relaxation time (collision time) τ of the mobile charge carriers, when it is not tiny compared to the wave period 1/f. This model defines a complex conductance at radian frequency ω=2πf given by σ(ω) = σ1 − iσ2. The imaginary part, -σ2, represents the kinetic inductance. The Drude complex conductivity can be expanded into its real and imaginary components:
σ(ω) = ne²τ/(m(1 + iωτ)) = ne²τ/(m(1 + ω²τ²)) − i ne²ωτ²/(m(1 + ω²τ²)),
where m is the mass of the charge carrier (i.e. the effective electron mass in metallic conductors) and n is the carrier number density. In normal metals the collision time is typically τ ≈ 10⁻¹⁴ s, so for frequencies < 100 GHz the product ωτ is very small and can be ignored; then this equation reduces to the DC conductance σ0 = ne²τ/m. Kinetic inductance is therefore only significant at optical frequencies, and in superconductors, whose τ is effectively infinite.
For a superconducting wire of cross-sectional area A, the kinetic inductance of a segment of length l can be calculated by equating the total kinetic energy of the Cooper pairs in that region with an equivalent inductive energy due to the wire's current I:
(1/2)(2m)v²(n_cp l A) = (1/2) L_k I²,
where m is the electron mass (2m is the mass of a Cooper pair), v is the average Cooper pair velocity, n_cp is the density of Cooper pairs, l is the length of the wire, A is the wire cross-sectional area, and I is the current. Using the fact that the current I = 2evn_cp A, where e is the electron charge, this yields:
L_k = (m/(2 n_cp e²)) (l/A).
The same procedure can be used to calculate the kinetic inductance of a normal (i.e. non-superconducting) wire, except with 2m replaced by m, 2e replaced by e, and n_cp replaced by the normal carrier density n. This yields:
L_k = (m/(n e²)) (l/A).
The kinetic inductance increases as the carrier density decreases. Physically, this is because a smaller number of carriers must have a proportionally greater velocity than a larger number of carriers in order to produce the same current, whereas their energy increases according to the square of velocity. The resistivity also increases as the carrier density decreases, thereby maintaining a constant ratio (and thus phase angle) between the (kinetic) inductive and resistive components of a wire's impedance for a given frequency. That ratio, ωτ, is tiny in normal metals up to terahertz frequencies.
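As a rough numerical illustration of the superconducting formula above (all parameter values below are assumed, order-of-magnitude numbers, not data from the sources):

    # Sketch: kinetic inductance L_k = m l / (2 n_cp e^2 A) of a thin
    # superconducting strip; parameters are illustrative assumptions.
    m_e = 9.109e-31          # electron mass, kg
    e = 1.602e-19            # elementary charge, C
    n_cp = 2e28              # assumed Cooper-pair density, m^-3
    l = 1e-3                 # strip length, m
    A = 100e-9 * 10e-9       # 100 nm x 10 nm cross-section, m^2

    L_k = m_e * l / (2 * n_cp * e**2 * A)
    print(L_k)               # ~9e-10 H, i.e. of order a nanohenry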
Applications
Kinetic inductance is the principle of operation of the highly sensitive photodetectors known as kinetic inductance detectors (KIDs). The change in the Cooper pair density brought about by the absorption of a single photon in a strip of superconducting material produces a measurable change in its kinetic inductance.
Kinetic inductance is also used in a design parameter for superconducting flux qubits: is the ratio of the kinetic inductance of the Josephson junctions in the qubit to the geometrical inductance of the flux qubit. A design with a low beta behaves more like a simple inductive loop, while a design with a high beta is dominated by the Josephson junctions and has more hysteretic behavior.
See also
Drude model
Electrical conduction
Electron mobility
Superconductivity
References
External links
YouTube video on kinetic inductance from MIT
Electrodynamics
Superconductivity | Kinetic inductance | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 831 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrodynamics",
"Electrical resistance and conductance",
"Dynamical systems"
] |
5,157,393 | https://en.wikipedia.org/wiki/Alkali-metal%20thermal%20to%20electric%20converter | An alkali-metal thermal-to-electric converter (AMTEC, originally called the sodium heat engine or SHE) is a thermally regenerative electrochemical device for the direct conversion of heat to electrical energy. It is characterized by high potential efficiencies and no moving parts except for the working fluid, which make it a candidate for space power applications.
It was invented by Joseph T. Kummer and Neill Weber at Ford in 1966, and is described in US Patents , , and .
Design
An alkali-metal thermal-to-electric converter works by pumping an alkali metal, usually sodium, around a closed circuit. Heat evaporates the sodium at one end, raising it to high pressure. The vapour then passes over the anode, where the sodium atoms release electrons and thus charge. The resulting ions pass through an electrolyte chosen because it conducts ions well but conducts electrons poorly; the electrons must therefore travel through the external circuit. At the cathode the alkali metal gets its electrons back, so electrons are effectively pumped through the external circuit. The pressure across the electrolyte pushes the sodium into a low-pressure vapour chamber, where it cools and condenses back to a liquid. An electromagnetic pump, or a wick, returns this liquid sodium to the hot side.
This device accepts a heat input in a range 900–1300 K and produces direct current with predicted device efficiencies of 15–40%. In the AMTEC, sodium is driven around a closed thermodynamic cycle between a high-temperature heat reservoir and a cooler reservoir at the heat rejection temperature. The unique feature of the AMTEC cycle is that sodium ion conduction between a high-pressure or -activity region and a low-pressure or -activity region on either side of a highly ionically conducting refractory solid electrolyte is thermodynamically nearly equivalent to an isothermal expansion of sodium vapor between the same high and low pressures. Electrochemical oxidation of neutral sodium at the anode leads to sodium ions, which traverse the solid electrolyte, and electrons, which travel from the anode through an external circuit, where they perform electrical work, to the low-pressure cathode, where they recombine with the ions to produce low-pressure sodium gas. The sodium gas generated at the cathode then travels to a condenser at the heat-rejection temperature of perhaps 400–700 K, where liquid sodium reforms. The AMTEC thus is an electrochemical concentration cell, which converts the work generated by expansion of sodium vapor directly into electric power.
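Because the device is an electrochemical concentration cell, its ideal open-circuit voltage is given, to a good approximation, by the Nernst relation (a standard electrochemical result, quoted here for reference rather than taken from the sources above):

    E = (R T / F) ln(p_hot / p_cold),

where R is the gas constant, F the Faraday constant, T the temperature of the solid electrolyte, and p_hot and p_cold the sodium vapour pressures (more precisely, activities) on the high- and low-pressure sides of the electrolyte.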
The converter is based on the electrolyte used in the sodium–sulfur battery, sodium beta″-alumina, a crystalline phase of somewhat variable composition containing aluminum oxide, Al2O3, and sodium oxide, Na2O, in a nominal ratio of 5:1, and a small amount of the oxide of a small-cation metal, usually lithium or magnesium, which stabilizes the beta″ crystal structure. The sodium beta″-alumina solid electrolyte (BASE) ceramic is nearly insulating with respect to transport of electrons and is a thermodynamically stable phase in contact with both liquid sodium and sodium at low pressure.
Development
Single-cell AMTECs with open-circuit voltages as high as 1.55 V and maximum power density as high as 0.50 W/cm2 of solid electrolyte area at a temperature of 1173 K (900 °C) have been obtained with long-term stable refractory metal electrodes.
Efficiency of AMTEC cells has reached 16% in the laboratory. High-voltage multi-tube modules are predicted to have 20–25% efficiency, and power densities up to 0.2 kW/L appear to be achievable in the near future. Calculations show that replacing sodium with a potassium working fluid increases the peak efficiency from 28% to 31% at 1100 K with a 1 mm thick BASE tube.
Most work on AMTECs has concerned sodium working fluid devices. Potassium AMTECs have been run with potassium beta″-alumina solid electrolyte ceramics and show improved power at lower operating temperatures compared to sodium AMTECs.
A detailed quantitative model of the mass transport and interfacial kinetics behavior of AMTEC electrodes has been developed and used to fit and analyze the performance of a wide variety of electrodes, and to make predictions of the performance of optimized electrodes. The interfacial electrochemical kinetics can be further described quantitatively with a tunneling, diffusion, and desorption model. A reversible thermodynamic cycle for AMTEC shows that it is, at best, slightly less efficient than a Carnot cycle.
A related technology, the Johnson thermoelectric energy converter, uses a similar concept of pumping positive ions through an ion-selective membrane, using hydrogen rather than an alkali metal as the working fluid.
Applications
AMTEC requires energy input at modestly elevated temperatures and thus is easily adapted to any heat source, including radioisotope, concentrated solar power, external combustion, or a nuclear reactor. A solar thermal power conversion system based on an AMTEC could have advantages over other technologies for some applications, combining thermal energy storage with phase-change material and power conversion in a compact unit. The overall system could achieve a specific power as high as 14 W/kg with present collector technology and future AMTEC conversion efficiencies. The energy storage system has the potential to replace batteries, and the temperatures at which the system operates allow long life and reduced radiator size (heat-reject temperature of 600 K).
NASA investigated AMTEC conversion as a next-generation radioisotope power source for deep-space applications, but the technology was not selected for the next-generation systems.
While space power systems are of intrinsic interest, terrestrial applications could offer large-scale applications for AMTEC systems. At the 25% efficiency projected for the device and projected costs of 350 USD/kW, AMTEC could prove useful for a very wide variety of distributed generation applications including self-powered fans for high-efficiency furnaces and water heaters and recreational vehicle power supplies.
References
Thermodynamics
Electricity
Nuclear technology
Electrical generators | Alkali-metal thermal to electric converter | [
"Physics",
"Chemistry",
"Mathematics",
"Technology"
] | 1,294 | [
"Electrical generators",
"Machines",
"Nuclear technology",
"Physical systems",
"Thermodynamics",
"Nuclear physics",
"Dynamical systems"
] |
20,335,837 | https://en.wikipedia.org/wiki/Graph%20dynamical%20system | In mathematics, the concept of graph dynamical systems can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result.
The work on GDSs considers finite graphs and finite state spaces. As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata or probabilistic cellular automata over ℤk, or interacting particle systems when some randomness is included), as well as GDSs with infinite state space (e.g. as in coupled map lattices); see, for example, Wu. In the following, everything is implicitly assumed to be finite unless stated otherwise.
Formal definition
A graph dynamical system is constructed from the following components:
A finite graph Y with vertex set v[Y] = {1,2, ... , n}. Depending on the context the graph can be directed or undirected.
A state xv for each vertex v of Y taken from a finite set K. The system state is the n-tuple x = (x1, x2, ... , xn), and x[v] is the tuple consisting of the states associated to the vertices in the 1-neighborhood of v in Y (in some fixed order).
A vertex function fv for each vertex v. The vertex function maps the state of vertex v at time t to the vertex state at time t + 1 based on the states associated to the 1-neighborhood of v in Y.
An update scheme specifying the mechanism by which the mapping of individual vertex states is carried out so as to induce a discrete dynamical system with map F: Kn → Kn.
The phase space associated to a dynamical system with map F: Kn → Kn is the finite directed graph with vertex set Kn and directed edges (x, F(x)). The structure of the phase space is governed by the properties of the graph Y, the vertex functions (fi)i, and the update scheme. The research in this area seeks to infer phase space properties based on the structure of the system constituents. The analysis has a local-to-global character.
Generalized cellular automata (GCA)
If, for example, the update scheme consists of applying the vertex functions synchronously, one obtains the class of generalized cellular automata (GCA). In this case, the global map F: Kn → Kn is given by
F(x) = (f1(x[1]), f2(x[2]), ... , fn(x[n])).
This class is referred to as generalized cellular automata since the classical or standard cellular automata are typically defined and studied over regular graphs or grids, and the vertex functions are typically assumed to be identical.
Example: Let Y be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let K = {0,1} be the state space for each vertex and use the function nor3 : K3 → K defined by nor3(x,y,z) = (1 + x)(1 + y)(1 + z) with arithmetic modulo 2 for all vertex functions. Then for example the system state (0,1,0,0) is mapped to (0, 0, 0, 1) using a synchronous update. All the transitions are shown in the phase space below.
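A minimal Python sketch of this example (the function names and the 0-based indexing are illustrative):

    # Sketch: synchronous GCA on Circ4 with the nor3 vertex function.
    def nor3(x, y, z):
        return ((1 + x) * (1 + y) * (1 + z)) % 2

    def gca_step(state):
        n = len(state)
        return tuple(nor3(state[(v - 1) % n], state[v], state[(v + 1) % n])
                     for v in range(n))

    print(gca_step((0, 1, 0, 0)))   # -> (0, 0, 0, 1), matching the text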
Sequential dynamical systems (SDS)
If the vertex functions are applied asynchronously in the sequence specified by a word w = (w1, w2, ... , wm) or a permutation of v[Y], one obtains the class of sequential dynamical systems (SDS). In this case it is convenient to introduce the Y-local maps Fi constructed from the vertex functions by
Fi(x) = (x1, x2, ... , xi−1, fi(x[i]), xi+1, ... , xn).
The SDS map F = [FY , w] : Kn → Kn is the function composition
F = Fwm ∘ Fwm−1 ∘ ... ∘ Fw1.
If the update sequence is a permutation one frequently speaks of a permutation SDS to emphasize this point.
Example: Let Y be the circle graph on vertices {1,2,3,4} with edges {1,2}, {2,3}, {3,4} and {1,4}, denoted Circ4. Let K={0,1} be the state space for each vertex and use the function nor3 : K3 → K defined by nor3(x, y, z) = (1 + x)(1 + y)(1 + z) with arithmetic modulo 2 for all vertex functions. Using the update sequence (1,2,3,4) then the system state (0, 1, 0, 0) is mapped to (0, 0, 1, 0). All the system state transitions for this sequential dynamical system are shown in the phase space below.
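The same example under a sequential update, as a minimal Python sketch (names and 0-based indexing are illustrative):

    # Sketch: SDS on Circ4 with nor3; the Y-local maps are applied in the
    # order given (update sequence (1,2,3,4) becomes order=[0,1,2,3]).
    def nor3(x, y, z):
        return ((1 + x) * (1 + y) * (1 + z)) % 2

    def sds_step(state, order):
        x, n = list(state), len(state)
        for v in order:
            x[v] = nor3(x[(v - 1) % n], x[v], x[(v + 1) % n])
        return tuple(x)

    print(sds_step((0, 1, 0, 0), [0, 1, 2, 3]))   # -> (0, 0, 1, 0), matching the text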
Stochastic graph dynamical systems
From, e.g., the point of view of applications it is interesting to consider the case where one or more of the components of a GDS contains stochastic elements. Motivating applications could include processes that are not fully understood (e.g. dynamics within a cell) and where certain aspects for all practical purposes seem to behave according to some probability distribution. There are also applications governed by deterministic principles whose description is so complex or unwieldy that it makes sense to consider probabilistic approximations.
Every element of a graph dynamical system can be made stochastic in several ways. For example, in a sequential dynamical system the update sequence can be made stochastic. At each iteration step one may choose the update sequence w at random from a given distribution of update sequences with corresponding probabilities. The matching probability space of update sequences induces a probability space of SDS maps. A natural object to study in this regard is the Markov chain on state space induced by this collection of SDS maps. This case is referred to as update sequence stochastic GDS and is motivated by, e.g., processes where "events" occur at random according to certain rates (e.g. chemical reactions), synchronization in parallel computation/discrete event simulations, and in computational paradigms described later.
This specific example with stochastic update sequence illustrates two general facts for such systems: when passing to a stochastic graph dynamical system one is generally led to (1) a study of Markov chains (with specific structure governed by the constituents of the GDS), and (2) the resulting Markov chains tend to be large having an exponential number of states. A central goal in the study of stochastic GDS is to be able to derive reduced models.
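For the small Circ4/nor3 system above, the induced Markov chain can be written down exactly; the following Python sketch (the uniform distribution over permutation update orders is an illustrative choice) builds its 16 × 16 transition matrix:

    # Sketch: Markov chain on K^4 induced by drawing a uniformly random
    # permutation update order at each step of the Circ4/nor3 SDS.
    from itertools import permutations, product

    def nor3(x, y, z):
        return ((1 + x) * (1 + y) * (1 + z)) % 2

    def sds_step(state, order):
        x, n = list(state), len(state)
        for v in order:
            x[v] = nor3(x[(v - 1) % n], x[v], x[(v + 1) % n])
        return tuple(x)

    states = list(product((0, 1), repeat=4))       # the 16 system states
    idx = {s: i for i, s in enumerate(states)}
    P = [[0.0] * 16 for _ in range(16)]
    for s in states:
        for w in permutations(range(4)):           # each of the 24 orders, prob 1/24
            P[idx[s]][idx[sds_step(s, w)]] += 1 / 24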
One may also consider the case where the vertex functions are stochastic, i.e., function stochastic GDS. For example, random Boolean networks are examples of function stochastic GDS using a synchronous update scheme and state space K = {0, 1}. Finite probabilistic cellular automata (PCA) are another example of function stochastic GDS. In principle the class of interacting particle systems (IPS) covers finite and infinite PCA, but in practice the work on IPS is largely concerned with the infinite case, since this allows one to introduce more interesting topologies on state space.
Applications
Graph dynamical systems constitute a natural framework for capturing distributed systems such as biological networks and epidemics over social networks, many of which are frequently referred to as complex systems.
See also
Chemical reaction network theory
Dynamic network analysis (a social science topic)
Finite-state machine
Hopfield network
Petri net
References
Further reading
External links
Graph Dynamical Systems – A Mathematical Framework for Interaction-Based Systems, Their Analysis and Simulations by Henning Mortveit
Dynamical systems
Graph theory
Combinatorics | Graph dynamical system | [
"Physics",
"Mathematics"
] | 1,682 | [
"Discrete mathematics",
"Graph theory",
"Combinatorics",
"Mathematical relations",
"Mechanics",
"Dynamical systems"
] |
20,340,167 | https://en.wikipedia.org/wiki/Advanced%20Thin%20Ionization%20Calorimeter | The Advanced Thin Ionization Calorimeter (ATIC) is a balloon-borne instrument flying in the stratosphere over Antarctica to measure the energy and composition of cosmic rays. ATIC was launched from McMurdo Station for the first time in December 2000 and has since completed three successful flights out of four.
Working principle
The detector uses the principle of ionization calorimetry: several layers of the scintillator bismuth germanate emit light as they are struck by particles, allowing the particles' energy to be calculated. A silicon matrix is used to determine the particles' electrical charge.
Collaborators
The project is an international collaboration of researchers from Louisiana State University, University of Maryland, College Park, Marshall Space Flight Center, Purple Mountain Observatory in China, Moscow State University in Russia and Max Planck Institute for Solar System Research in Germany. ATIC is supported in the United States by NASA and flights are conducted under the auspices of the Balloon Program Office at Wallops Flight Facility by the staff of the Columbia Scientific Balloon Facility. Antarctic logistics are provided by the National Science Foundation and its contractor Raytheon Polar Services Corporation.
The principal investigator for ATIC is John Wefel of Louisiana State University.
Results
In November 2008, researchers published in Nature the finding of a surplus of high-energy electrons. During a 5-week observation period in 2000 and 2003, ATIC counted 70 electrons with energies in the range 300–800 GeV; these electrons were in excess of those expected from the galactic background. The source of these electrons is unknown, but it is assumed to be relatively close, no more than about 3,000 light-years away, since high-energy electrons rapidly lose energy as they travel through the galactic magnetic field and collide with photons. The electrons could originate from a nearby pulsar or other astrophysical object, but the researchers were not able to identify a fitting object. According to another conjecture, the electrons result from collisions of dark matter particles, for example Kaluza–Klein WIMPs with a mass near 620 GeV.
Related data from other experiments
Earlier in the year, the satellite PAMELA had found excess positrons (the antiparticle of the electron) in the cosmic ray signal, also believed to originate from dark matter interactions.
ATIC cannot distinguish between electrons and positrons, so it is possible that the two results are compatible.
On the other hand, in November 2008 the Milagro experiment reported cosmic ray "hotspots" in the sky, possibly supporting astrophysical objects as sources of the surplus electrons. In May 2009, observations by the Fermi space telescope were reported which did not support the spike of high-energy electrons seen by ATIC.
References
External links
ATIC page from the Department of Physics and Astronomy at Louisiana State University
Detailed report on ATIC 1 flight (Antarctica 2000/2001)
Detailed report on ATIC 2 flight (Antarctica 2002/2003)
Detailed report on failed ATIC 3 flight (Antarctica 2005)
Detailed report on ATIC 4 flight (Antarctica 2007/2008)
Cosmic-ray experiments
Science and technology in Antarctica
Balloon-borne experiments
Experiments for dark matter search | Advanced Thin Ionization Calorimeter | [
"Physics"
] | 630 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
2,039,133 | https://en.wikipedia.org/wiki/Theoretical%20astronomy | Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astronomy has contributed to the growth of our understanding of physics." Physics has helped in the elucidation of astronomical phenomena, and astronomy has helped in the elucidation of physical phenomena:
discovery of the law of gravitation came from the information provided by the motion of the Moon and the planets,
viability of nuclear fusion as demonstrated in the Sun and stars and yet to be reproduced on earth in a controlled form.
The aim of integrating astronomy with physics is to understand the physics and chemistry from the laboratory that lie behind cosmic events, so as to enrich our understanding of the cosmos and of these sciences as well.
Integrating astronomy and chemistry
Astrochemistry, the overlap of the disciplines of astronomy and chemistry, is the study of the abundance and reactions of chemical elements and molecules in space, and their interaction with radiation. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds, is of special interest because it is from these clouds that solar systems form.
Infrared astronomy, for example, has revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polycyclic aromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium (²H) and isotopes of carbon, nitrogen, and oxygen that are very rare on Earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying, carbon-rich red giant stars).
The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on Earth can be highly abundant in space, for example the H₃⁺ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, the consequences for stellar evolution, as well as stellar 'generations'. Indeed, the nuclear reactions in stars produce every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Lithium, carbon, nitrogen and oxygen are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements are formed (e.g. iron and lead).
Tools of theoretical astronomy
Theoretical astronomers use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Astronomy theorists endeavor to create theoretical models and figure out the observational consequences of those models. This helps observers look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. Consistent with the general scientific approach, in the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics of theoretical astronomy
Topics studied by theoretical astronomers include:
stellar dynamics and evolution;
galaxy formation;
large-scale structure of matter in the Universe;
origin of cosmic rays;
general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole physics and the study of gravitational waves.
Astronomical models
Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model are the Big Bang, Cosmic inflation, dark matter, and fundamental theories of physics.
Leading topics in theoretical astronomy
Dark matter and dark energy are the current leading topics in astronomy, as their discovery and controversy originated during the study of galaxies.
Theoretical astrophysics
Of the topics approached with the tools of theoretical physics, particular consideration is often given to stellar photospheres, stellar atmospheres, the solar atmosphere, planetary atmospheres, gaseous nebulae, nonstationary stars, and the interstellar medium. Special attention is given to the internal structure of stars.
Weak equivalence principle
The observation of a neutrino burst within 3 h of the associated optical burst from Supernova 1987A in the Large Magellanic Cloud (LMC) gave theoretical astrophysicists an opportunity to test that neutrinos and photons follow the same trajectories in the gravitational field of the galaxy.
Thermodynamics for stationary black holes
A general form of the first law of thermodynamics for stationary black holes can be derived from the microcanonical functional integral for the gravitational field. The boundary data are:
the gravitational field, described as a microcanonical system in a spatially finite region, and
the density of states, expressed formally as a functional integral over Lorentzian metrics and as a functional of the geometrical boundary data that are fixed in the corresponding action.
These boundary data play the role of the thermodynamical extensive variables, including the energy and angular momentum of the system. For the simpler case of nonrelativistic mechanics, as is often observed in astrophysical phenomena associated with a black hole event horizon, the density of states can be expressed as a real-time functional integral and subsequently used to deduce Feynman's imaginary-time functional integral for the canonical partition function.
Theoretical astrochemistry
Reaction equations and large reaction networks are an important tool in theoretical astrochemistry, especially as applied to the gas-grain chemistry of the interstellar medium. Theoretical astrochemistry offers the prospect of being able to place constraints on the inventory of organics for exogenous delivery to the early Earth.
Interstellar organics
"An important goal for theoretical astrochemistry is to elucidate which organics are of true interstellar origin, and to identify possible interstellar precursors and reaction pathways for those molecules which are the result of aqueous alterations." One of the ways this goal can be achieved is through the study of carbonaceous material as found in some meteorites. Carbonaceous chondrites (such as C1 and C2) include organic compounds such as amines and amides; alcohols, aldehydes, and ketones; aliphatic and aromatic hydrocarbons; sulfonic and phosphonic acids; amino, hydroxycarboxylic, and carboxylic acids; purines and pyrimidines; and kerogen-type material. The organic inventories of primitive meteorites display large and variable enrichments in deuterium, carbon-13 (13C), and nitrogen-15 (15N), which is indicative of their retention of an interstellar heritage.
Chemistry in cometary comae
The chemical composition of comets should reflect both the conditions in the outer solar nebula some 4.5 × 10⁹ years ago, and the nature of the natal interstellar cloud from which the Solar System was formed. While comets retain a strong signature of their ultimate interstellar origins, significant processing must have occurred in the protosolar nebula. Early models of coma chemistry showed that reactions can occur rapidly in the inner coma, where the most important reactions are proton transfer reactions. Such reactions can potentially cycle deuterium between the different coma molecules, altering the initial D/H ratios released from the nuclear ice, and necessitating the construction of accurate models of cometary deuterium chemistry, so that gas-phase coma observations can be safely extrapolated to give nuclear D/H ratios.
Theoretical chemical astronomy
While the lines of conceptual understanding between theoretical astrochemistry and theoretical chemical astronomy often become blurred so that the goals and tools are the same, there are subtle differences between the two sciences. Theoretical chemistry as applied to astronomy seeks to find new ways to observe chemicals in celestial objects, for example. This often leads to theoretical astrochemistry having to seek new ways to describe or explain those same observations.
Astronomical spectroscopy
The new era of chemical astronomy had to await the clear enunciation of the chemical principles of spectroscopy and the applicable theory.
Chemistry of dust condensation
Supernova radioactivity dominates light curves, and the chemistry of dust condensation is also dominated by radioactivity. Dust is usually either carbon or oxides, depending on which is more abundant, but Compton electrons dissociate the CO molecule in about one month. The new chemical astronomy of supernova solids depends on the supernova radioactivity:
the radiogenesis of ⁴⁴Ca from ⁴⁴Ti decay after carbon condensation establishes their supernova source,
their opacity suffices to shift emission lines blueward after 500 d while emitting significant infrared luminosity,
parallel kinetic rates determine trace isotopes in meteoritic supernova graphites,
the chemistry is kinetic rather than due to thermal equilibrium and
is made possible by radiodeactivation of the CO trap for carbon.
Theoretical physical astronomy
Like theoretical chemical astronomy, the lines of conceptual understanding between theoretical astrophysics and theoretical physical astronomy are often blurred, but, again, there are subtle differences between these two sciences. Theoretical physics as applied to astronomy seeks to find new ways to observe physical phenomena in celestial objects and what to look for, for example. This often leads to theoretical astrophysics having to seek new ways to describe or explain those same observations, with hopefully a convergence to improve our understanding of the local environment of Earth and the physical Universe.
Weak interaction and nuclear double beta decay
Nuclear matrix elements of the relevant operators, as extracted from data and from shell-model and other theoretical approximations for both the two-neutrino and neutrinoless modes of decay, are used to explain the weak-interaction and nuclear-structure aspects of nuclear double beta decay.
Neutron-rich isotopes
New neutron-rich isotopes, ³⁴Ne, ³⁷Na, and ⁴³Si, have been produced unambiguously for the first time, and convincing evidence for the particle instability of three others, ³³Ne, ³⁶Na, and ³⁹Mg, has been obtained. These experimental findings compare with recent theoretical predictions.
Theory of astronomical time keeping
Until recently all the time units that appear natural to us are caused by astronomical phenomena:
Earth's orbit around the Sun => the year, and the seasons,
Moon's orbit around the Earth => the month,
Earth's rotation and the succession of brightness and darkness => the day (and night).
High precision appears problematic:
ambiguities arise in the exact definition of a rotation or revolution,
some astronomical processes are uneven and irregular, such as the noncommensurability of year, month, and day,
there are a multitude of time scales and calendars to solve the first two problems.
Some of these time standard scales are sidereal time, solar time, and universal time.
Atomic time
From the Système International (SI) comes the second, defined as the duration of 9 192 631 770 cycles of a particular hyperfine structure transition in the ground state of caesium-133 (¹³³Cs). For practical usability a device is required that attempts to produce the SI second (s), such as an atomic clock. But not all such clocks agree. The weighted mean of many clocks distributed over the whole Earth defines the Temps Atomique International, i.e., International Atomic Time (TAI). According to the general theory of relativity, the time measured depends on the altitude on Earth and on the spatial velocity of the clock, so TAI refers to a location at sea level that rotates with the Earth.
Ephemeris time
Since the Earth's rotation is irregular, any time scale derived from it such as Greenwich Mean Time led to recurring problems in predicting the Ephemerides for the positions of the Moon, Sun, planets and their natural satellites. In 1976 the International Astronomical Union (IAU) resolved that the theoretical basis for ephemeris time (ET) was wholly non-relativistic, and therefore, beginning in 1984 ephemeris time would be replaced by two further time scales with allowance for relativistic corrections. Their names, assigned in 1979, emphasized their dynamical nature or origin, Barycentric Dynamical Time (TDB) and Terrestrial Dynamical Time (TDT). Both were defined for continuity with ET and were based on what had become the standard SI second, which in turn had been derived from the measured second of ET.
During the period 1991–2006, the TDB and TDT time scales were both redefined and replaced, owing to difficulties or inconsistencies in their original definitions. The current fundamental relativistic time scales are Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB). Both of these have rates that are based on the SI second in respective reference frames (and hypothetically outside the relevant gravity well), but due to relativistic effects, their rates would appear slightly faster when observed at the Earth's surface, and therefore diverge from local Earth-based time scales using the SI second at the Earth's surface.
The currently defined IAU time scales also include Terrestrial Time (TT) (replacing TDT, and now defined as a re-scaling of TCG, chosen to give TT a rate that matches the SI second when observed at the Earth's surface), and a redefined Barycentric Dynamical Time (TDB), a re-scaling of TCB to give TDB a rate that matches the SI second at the Earth's surface.
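As a rough illustration of how these scales relate in practice, the sketch below encodes two well-known fixed offsets: TT − TAI = 32.184 s, which holds by definition, and TAI − UTC = 37 s, the accumulated leap-second count valid for dates after 2016-12-31 (this value changes whenever a new leap second is introduced). The helper name is illustrative only:

```python
# TT - TAI is fixed by definition; TAI - UTC is the accumulated leap seconds.
TT_MINUS_TAI = 32.184        # seconds, exact by definition of TT
TAI_MINUS_UTC = 37.0         # seconds, valid for dates after 2016-12-31

def utc_to_tt_offset():
    """Seconds to add to a UTC clock reading to obtain Terrestrial Time (TT)."""
    return TAI_MINUS_UTC + TT_MINUS_TAI

print(utc_to_tt_offset())    # 69.184 s
```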
Extraterrestrial time-keeping
Stellar dynamical time scale
For a star, the dynamical time scale is defined as the time that would be taken for a test particle released at the surface to fall under the star's potential to the centre point, if pressure forces were negligible. In other words, the dynamical time scale measures the amount of time it would take a certain star to collapse in the absence of any internal pressure. By appropriate manipulation of the equations of stellar structure this can be found to be

$$\tau_{\text{dyn}} \simeq \sqrt{\frac{R^3}{2GM}} = \frac{R}{v} \approx \sqrt{\frac{3}{8\pi G \rho}},$$

where R is the radius of the star, G is the gravitational constant, M is the mass of the star, ρ the star gas density (assumed constant here) and v is the escape velocity. As an example, the Sun's dynamical time scale is approximately 1133 seconds. Note that the actual time it would take a star like the Sun to collapse is greater because internal pressure is present.
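A quick numerical check of this formula for the Sun (a sketch; the constants below are rounded standard values, and small differences from the ~1133 s figure quoted above reflect rounding):

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
R = 6.957e8        # solar radius [m]
M = 1.989e30       # solar mass [kg]

tau = math.sqrt(R**3 / (2 * G * M))   # dynamical time scale
v_esc = math.sqrt(2 * G * M / R)      # escape velocity

print(f"tau = {tau:.0f} s")           # ~1126 s, close to the quoted ~1133 s
print(f"R/v = {R / v_esc:.0f} s")     # identical by construction
```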
The 'fundamental' oscillatory mode of a star will be at approximately the dynamical time scale. Oscillations at this frequency are seen in Cepheid variables.
Theory of astronomical navigation
On Earth
The basic characteristics of applied astronomical navigation are
usable in all areas of sailing around the Earth,
applicable autonomously (does not depend on others – persons or states) and passively (does not emit energy),
usable only given optical visibility (of the horizon and celestial bodies) and a suitable state of cloudiness,
measurement precision: about 0.1′ with a sextant, and between 1.5′ and 3.0′ for the resulting altitude and position,
determining a position takes a couple of minutes (using the most modern equipment) and up to 30 min (using classical equipment).
The superiority of satellite navigation systems over astronomical navigation is currently undeniable, especially with the development and use of GPS/NAVSTAR. This global satellite system
enables automated three-dimensional positioning at any moment,
automatically determines position continuously (every second or even more often),
determines position independent of weather conditions (visibility and cloudiness),
determines position in real time to a few meters (two carrier frequencies) and 100 m (modest commercial receivers), which is two to three orders of magnitude better than by astronomical observation,
is simple even without expert knowledge,
is relatively cheap, comparable to equipment for astronomical navigation, and
allows incorporation into integrated and automated systems of control and ship steering. The use of astronomical or celestial navigation is disappearing from the surface and beneath or above the surface of the Earth.
Geodetic astronomy is the application of astronomical methods into networks and technical projects of geodesy for
apparent places of stars, and their proper motions
precise astronomical navigation
astro-geodetic geoid determination and
modelling the rock densities of the topography and of geological layers in the subsurface
Satellite geodesy using the stellar background (see also astrometry and cosmic triangulation)
Monitoring of the Earth rotation and polar wandering
Contribution to the time system of physics and geosciences
Astronomical algorithms are the algorithms used to calculate ephemerides, calendars, and positions (as in celestial navigation or satellite navigation).
Many astronomical and navigational computations use the Figure of the Earth as a surface representing the Earth.
The International Earth Rotation and Reference Systems Service (IERS), formerly the International Earth Rotation Service, is the body responsible for maintaining global time and reference frame standards, notably through its Earth Orientation Parameter (EOP) and International Celestial Reference System (ICRS) groups.
Deep space
The Deep Space Network, or DSN, is an international network of large antennas and communication facilities that supports interplanetary spacecraft missions, and radio and radar astronomy observations for the exploration of the Solar System and the universe. The network also supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL).
Aboard an exploratory vehicle
An observer becomes a deep space explorer upon escaping Earth's orbit. While the Deep Space Network maintains communication and enables data download from an exploratory vessel, any local probing performed by sensors or active systems aboard usually requires astronomical navigation, since no enclosing network of satellites is available there to ensure accurate positioning.
See also
Astrochemistry
Astrometry
Astrophysics
Celestial mechanics
Celestial navigation
Celestial sphere
Orbital mechanics
References
External links
Introduction to Cataclysmic Variables (CVs)
L. Sidoli, 2008 Transient outburst mechanisms
Commentary on "The Compendium of Plain Astronomy" is a manuscript from 1665 about theoretical astronomy
Applied and interdisciplinary physics
Astrometry
Astronomical imaging
Astronomical sub-disciplines
Astronomical coordinate systems
Observational astronomy
Space science
Stellar astronomy | Theoretical astronomy | [
"Physics",
"Astronomy",
"Mathematics"
] | 4,211 | [
"Applied and interdisciplinary physics",
"Outer space",
"Observational astronomy",
"Space science",
"Astrometry",
"Astronomical coordinate systems",
"Coordinate systems",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
2,039,517 | https://en.wikipedia.org/wiki/Niello | Niello is a black mixture, usually of sulphur, copper, silver, and lead, used as an inlay on engraved or etched metal, especially silver. It is added as a powder or paste, then fired until it melts or at least softens, and flows or is pushed into the engraved lines in the metal. It hardens and blackens when cool, and the niello on the flat surface is polished off to show the filled lines in black, contrasting with the polished metal around it. It may also be used with other metalworking techniques to cover larger areas, as seen in the sky in the diptych illustrated here. The metal where niello is to be placed is often roughened to provide a key. In many cases, especially in objects that have been buried underground, where the niello is now lost, the roughened surface indicates that it was once there.
Uses
Niello was used on a variety of objects including sword hilts, chalices, plates, horns, adornment for horses, jewellery such as bracelets, rings, pendants, and small fittings such as strap-ends, purse-bars, buttons, belt buckles and the like. It was also used to fill in the letters in inscriptions engraved on metal. Periods when engraving filled in with niello has been used to make full images with figures have been relatively few, but include some significant achievements. In ornament, it came to have competition from enamel, with far wider colour possibilities, which eventually displaced it in most of Europe.
The name derives from the Latin nigellum for the substance, or nigello or neelo, the medieval Latin for black. Though historically most common in Europe, it is also known from many parts of Asia and the Near East.
Bronze Age
There are a number of claimed uses of niello from the Mediterranean Bronze Age, all of which have been the subject of disputes as to the actual composition of the materials used; despite some decades of debate, these disputes have not been conclusively settled. The earliest claimed use of niello appears in late Bronze Age Byblos in Syria, around 1800 BC, in hieroglyphic inscriptions on scimitars. In Ancient Egypt it appears a little later, in the tomb of Queen Ahhotep II, who lived about 1550 BC, on a dagger decorated with a lion chasing a calf in a rocky landscape, in a style that shows Greek influence, or at least similarity to the roughly contemporary daggers from Mycenae, and perhaps on other objects in the tomb.
At about the same time of c.1550 BC it appears on several bronze daggers from shaft grave royal tombs at Mycenae (in Grave Circle A and Grave Circle B), especially in long thin scenes running along the centre of the blade. These show the violence typical of the art of Mycenaean Greece, as well as a sophistication in both technique and figurative imagery that is startlingly original in a Greek context. There are a number of scenes of lions hunting and being hunted, attacking men and being attacked; most are now in the National Archaeological Museum, Athens.
These are in a mixed-media technique often called Metallmalerei (German for "metal painting"), which involves using gold and silver inlays or applied foils together with black niello and the bronze itself, which would originally have been brightly polished. As well as providing a black colour, the niello was also used as the adhesive to hold the thin gold and silver foils in place.
Byblos in Syria, where niello first appears, was something of an Egyptian outpost on the Levant, and many scholars think that it was highly-skilled metalworkers from Syria who introduced the technique to both Egypt and Mycenaean Greece. The iconography can most easily be explained by some combination of influence from the broader traditions of Mesopotamian art where somewhat comparable imagery had been produced for over a thousand years in cylinder seals and the like, and some (such as the physique of the figures) from Minoan art, although no early niello has been found on Crete.
A decorated metal cup, the "Enkomi Cup" from Cyprus has also been claimed to use niello decoration. However, controversy has continued since the 1960s as to whether the material used on all these pieces actually is niello, and a succession of increasingly sophisticated scientific tests have failed to provide evidence of the presence of the sulphurous compounds which define niello. It has been suggested that these artefacts, or at least the daggers, use in fact a technique of patinated metal that may be the same as the Corinthian bronze known from ancient literature, and is similar to the Japanese Shakudō.
Persia
The Sassanid Persians enjoyed dining and drinking together, a social event that is visible through ceramic, glass, and silver vessels. Elite circles handled silver cups, plates, and bowls on which artisans hammered and chased intricate designs.
Sasanian niello is a decorative technique used in metalworking during the Sasanian Empire (224-651 AD). This technique was particularly popular in Sasanian silverwork, adorning objects such as plates, bowls, ewers, and jewelry. The designs often featured scenes of hunting, courtly life, animals, and mythical creatures.
Sasanian niello is notable for its fine craftsmanship and the skillful use of negative space to create detailed imagery. In general, however, niello was only rarely used in Sasanian metalwork, though where it does appear it could be used inventively. The Metropolitan Museum of Art has Sasanian shallow bowls or dishes where in one case it forms the stripes on a tiger, and in another the horns and hoofs of goats in relief, as well as parts of the king's weapons. This relief use of niello seems to be paralleled from this period in only one piece of Byzantine silver.
A silver oval bowl decorated with tigers and grapevines, attributed to the Sasanian period of Iran (3rd-7th centuries CE) and held in the Metropolitan Museum of Art's Department of Ancient Near Eastern Art, was examined using non-invasive analytical techniques to identify the composition of the silver alloy and the niello inlay used in its decoration. The study revealed that the bowl is made of a silver-copper alloy containing approximately 3 wt.% copper. The niello inlays were found to consist solely of silver sulfide (acanthite). This composition closely resembles that of early Roman niello inlays, suggesting a possible technological link between Roman and Sasanian metalworkers during this period.
Roman, Byzantine and medieval
Niello is then hardly found until the Roman period; or perhaps it first appears around this point. Pliny the Elder (AD 23–79) describes the technique as Egyptian, and remarks the oddness of decorating silver in this way. Some of the earliest uses, from 1–300 AD, seem to be small statuettes and brooches of big cats, where niello is used for the stripes of tigers and the spots on panthers; these were very common in Roman art, as creatures of Bacchus. The animal repertoire of Roman Britain was somewhat different, and provides brooches with niello stripes on a hare and a cat. From about the 4th century, it was used for ornamental details such as borders and for inscriptions in late Roman silver, such as a dish and bowl in the Mildenhall Treasure and pieces in the Hoxne Hoard, including Christian church plate. It was often used on spoons, which were often inscribed with the owner's name, or later crosses. This type of use continued in Byzantine metalwork, from where it passed to Russia.
It is very common in Anglo-Saxon metalwork, with examples including the Tassilo Chalice, Strickland Brooch, and the Fuller Brooch, generally forming the background for motifs carried in the metal, but also used for rather crude geometric decoration of spots, triangles and stripes on small relatively everyday fittings such as strap-ends in base metal. There is similar use in Celtic, Viking, and other types of Early Medieval jewellery and metalwork, especially in northern Europe. Similar uses continued in the traditional styles of jewellery of the Middle East until at least the 20th century. The Late Roman buckle from Gaul illustrated here shows a relatively high quality early example of this sort of decoration.
In Romanesque art colourful champlevé enamel largely replaced it, although it continued to be used for small highlights of ornament, and some high quality Mosan art began to use it for small figurative images as part of large pieces, very often applied as plaques. These began to exploit the possibilities of niello for carrying a precise graphic style. The back of the Ottonian Imperial Cross (1020s) has outline engravings of figures filled with niello, the black lines forming the figures on a gold background. Later Romanesque pieces began to use a more densely engraved style, where the figures are mostly carried by the polished metal, against a black background. Romanesque champlevé enamel was applied to a cheap copper or copper alloy form, which was a great advantage, but for some pieces the prestige of precious metal was desired, and a small number of nielloed silver pieces from c. 1175–1200 adopt the ornamental vocabulary developed in Limoges enamel.
A group of high-quality pieces apparently originating in the Rhineland, which use both niello and enamel, include what may be the earliest reliquary with scenes of the murder and burial of Thomas Becket, probably from a few years after his death in 1170 (The Cloisters). Eight large nielloed plaques decorate the sides and roof, six with figures seen close-up at less than half-length, in a very different style from the cruder full-length figures in the many Limoges enamel equivalent reliquaries.
Gothic art from the 13th century continued to develop this pictorial use of niello, which reached its high point in the Renaissance. Niello continued to be widely used for simple ornament on small pieces, though at the top end goldsmiths were more likely to use black enamel to fill inscriptions on rings and the like. Niello was also used on plate armour, in this case over etched steel, as well as weapons.
Renaissance niello
Some Renaissance goldsmiths in Europe, such as Maso Finiguerra and Antonio del Pollaiuolo in Florence, decorated their works, usually in silver, by engraving the metal with a burin, after which they filled up the hollows produced by the burin with a black enamel-like compound made of silver, lead and sulphur. The resulting design, called a niello, was of much higher contrast and thus much more visible. Sometimes niello decoration was incidental to the objects, but some pieces such as paxes were effectively pictures in niello. A range of religious objects such as crucifixes and reliquaries might be decorated in this way, as well as secular objects such as knife handles, rings and other jewellery, and fittings such as buckles. It appears that niello-work was probably a specialist activity of some goldsmiths, not practiced by others, and most work came from Florence or Bologna.
Niellists were important in the history of art because they had developed skills and techniques that transferred easily to engraving plates for printmaking on paper, and nearly all the earliest engravers were trained as goldsmiths, enabling the new art medium to develop very quickly. At least in Italy, some of the very earliest engraved prints were in fact made by treating a silver object intended for niello as a printing plate with ink, before the niello was added. These are known as "niello prints", or in the cautious words of modern curators, "printed from a plate engraved in the niello manner"; in later centuries, after a collector's market grew up, many were forgeries. The genuine Renaissance prints were probably made mainly as a record of his work by the goldsmith, and perhaps as independent art objects.
By the late 16th century relatively little use was made of niello, especially to create pictures, and a different type of mastic that could be used in much the same way for contrasts in decoration was devised, so European pictorial use was largely restricted to Russia, except for some watches, guns, instruments and the like. Niello has continued to be used sometimes by Western jewellers.
Kievan Rus and Russia
During the 10th to 13th centuries AD, Kievan Rus craftsmen possessed a high degree of skill in jewellery making. John Tzetzes, a 12th-century Byzantine writer, praised the work of Kievan Rus artisans and likened it to the creations of Daedalus, the highly skilled craftsman of Greek mythology.
The Kievan Rus technique for niello application began with shaping silver or gold by repoussé work, embossing, and casting. Artisans would raise objects in high relief and fill the background with niello, using a mixture of red copper, lead, silver, potash, borax, and sulphur, which was liquefied and poured into the concave surfaces before being fired in a furnace. The heat of the furnace would blacken the niello and make the other ornamentation stand out more vividly.
Nielloed items were mass-produced using moulds that still survive today and were traded with Greeks, the Byzantine Empire, and other peoples that traded along the trade route from the Varangians to the Greeks.
During the Mongol invasion from 1237 to 1240 AD, nearly all of Kievan Rus was overrun. Settlements and workshops were burned and razed and most of the craftsmen and artisans were killed. Afterwards, skill in niello and cloisonné enamel diminished greatly. The Ukrainian Museum of Historic Treasures, located in Kiev, has a large collection of nielloed items mostly recovered from tombs found throughout Ukraine.
Later, Veliky Ustyug in North Russia, Tula and Moscow produced high quality pictorial niello pieces such as snuff boxes in contemporary styles such as Rococo and Neoclassicism in the late 18th and early 19th centuries; by then Russia was virtually the only part of Europe regularly using niello in fashionable styles.
Islamic world
In the early Islamic world silver, though continuing in use for vessels at the courts of princes, was much less widely used by the merely wealthy. Instead, vessels of the copper alloys bronze and brass included inlays of silver and gold in their often elaborate decoration, leaving less of a place for niello. Other black fillings were also used, and museum descriptions are often vague about the actual substances involved.
The famous "Baptistère de Saint Louis", c. 1300, a Mamluk basin of engraved brass with gold, silver and niello inlay, which has been in France since at least 1440 (Louis XIII of France and perhaps other kings were baptized in it; now Louvre), is one example where niello is used. Here niello is the background to the figures and the arabesque ornament around them, and used to fill the lines in both.
It is used on the locking bars of some ivory boxes and caskets, and perhaps continued more widely in use on weapons, where it is certainly found in later centuries from which more material survives. It is common in the decoration of the scabbards and hilts of the large daggers called khanjali and qama traditionally carried by all males in the Caucasus region (whether Muslim or Christian). It was also used to decorate handguns when they came into use. Until modern times relatively simple niello was common on the jewellery of the Levant, used in much the same way as in medieval Europe.
Thai jewellery
Nielloware jewellery and related items from Thailand were, from the 1930s to the 1970s, popular gifts brought home by American soldiers taking "R&R" in Thailand for their girlfriends and wives. Most of it was entirely handmade jewellery.
The technique is as follows: the artisan would carve a design into the silver, leaving the figure raised by carving out the "background". He would then use the niello inlay to fill in the "background". After being baked in an open fire, the alloy would harden. It would then be sanded smooth and buffed. Finally, a silver artisan would add minute details by hand. Filigree was often used for additional ornamentation. Nielloware is, by definition, only black and silver in colour. Other coloured jewellery originating during this time uses a different technique and is not considered niello.
Many of the characters shown in nielloware are characters originally found in the Hindu legend Ramayana. The Thai version is called Ramakien. Important Thai cultural symbols were also frequently used.
Ingredients and technique
Various slightly different recipes are found by modern scientific analysis, and historic accounts. In early periods, niello seems to have been made with a single sulphide, that of the main metal of the piece, even if it was gold (which would be difficult to handle). Copper sulphide niello has only been found on Roman pieces, and silver sulphide is used on silver. Later a mixture of metals was used; Pliny gives a mixed sulphide recipe with silver and copper, but seems to have been some centuries ahead of his time, as such mixtures have not been identified by analysis on pre-medieval pieces. Most Byzantine and early medieval pieces analysed are silver-copper, while silver-copper-lead pieces appear from about the 11th century onwards.
The Mappae clavicula of about the 9th century, Theophilus Presbyter (1070–1125) and Benvenuto Cellini (1500–1571) give detailed accounts, using silver-copper-lead mixtures with slightly different ratios of ingredients, Cellini using more lead. Typical ingredients have been described as: "sulfur with several metallic ingredients and borax"; "copper, silver, and lead, to which had been added sulphur while the metal was in fluid form ... [the design] was then brushed over with a solution of borax..."
While some recipes talk of using furnaces and muffles to melt the niello, others just seem to use an open fire. The necessary temperatures vary with the mixture; overall silver-copper-lead mixtures are easier to use. All mixtures have the same black appearance after work is completed.
See also
Damascening
Yemenite silversmithing (carries a full description on how niello was applied to jewellery in Yemen)
Kubachi silver
Notes
References
Further reading
Dittell, C. (2012), Overview of Siam Sterling Nielloware, Tampa, FL (or Survey of Siam Sterling Nielloware, (E-Book), Bookbaby Publishers)
Giumlia-Mair, A. 2012. "The Enkomi Cup: Niello versus Kuwano", in V. Kassianidou & G. Papasavvas (eds.) Eastern Mediterranean Metallurgy and Metalwork in the Second Millennium BC. A Conference in Honour of James D. Muhly, Nicosia, 10–11 October 2009, 107–116. Oxford & Oakville: Oxbow Books.
Northover P. and La Niece S., "New Thoughts on Niello", in From Mine to Microscope: Advances in the Study of Ancient Technology, eds. Ian Freestone, Thilo Rehren, Shortland, Andrew J., 2009, Oxbow Books, , 9781782972778, google books
Oddy, W., Bimson, M., & La Niece, S. (1983). "The Composition of Niello Decoration on Gold, Silver and Bronze in the Antique and Mediaeval Periods". Studies in Conservation, 28(1), 29–35. doi:10.2307/1506104, JSTOR
External links
E.Brepohls article on niello work
Alloys
Jewellery
Silver
Metalworking | Niello | [
"Chemistry"
] | 4,666 | [
"Chemical mixtures",
"Alloys"
] |
2,039,690 | https://en.wikipedia.org/wiki/Isotopes%20of%20hydrogen | Hydrogen (H) has three naturally occurring isotopes: H, H, and H. H and H are stable, while H has a half-life of years. Heavier isotopes also exist; all are synthetic and have a half-life of less than 1 zeptosecond (10 s).
Of these, ⁵H is the least stable, while ⁷H is the most.
Hydrogen is the only element whose isotopes have different names that remain in common use today: ²H is deuterium and ³H is tritium. The symbols D and T are sometimes used for deuterium and tritium; IUPAC (the International Union of Pure and Applied Chemistry) accepts these symbols, but recommends the standard isotopic symbols ²H and ³H, to avoid confusion in the alphabetic sorting of chemical formulas. ¹H, with no neutrons, may be called protium to disambiguate. (During the early study of radioactivity, some other heavy radioisotopes were given names, but such names are rarely used today.)
List of isotopes
Note: "y" means year, but "ys" means yoctosecond (10 second).
| Nuclide | Z | N | Half-life | Decay mode | Daughter | Spin/parity | Natural abundance | Name |
|---|---|---|---|---|---|---|---|---|
| ¹H | 1 | 0 | Stable | | | 1/2+ | >99.98% | Protium |
| ²H (D) | 1 | 1 | Stable | | | 1+ | 26–184 ppm | Deuterium |
| ³H (T) | 1 | 2 | 12.32 y | β⁻ | ³He | 1/2+ | Trace | Tritium |
| ⁴H | 1 | 3 | <1 zs | n | ³H | 2− | | |
| ⁵H | 1 | 4 | <1 zs | 2n | ³H | (1/2+) | | |
| ⁶H | 1 | 5 | <1 zs | | | 2−# | | |
| ⁷H | 1 | 6 | <1 zs | | | 1/2+# | | |
Hydrogen-1 (protium)
¹H (atomic mass 1.00783 u) is the most common hydrogen isotope, with an abundance of more than 99.98%. Its nucleus consists of only a single proton, so it has the formal name protium.
The proton has never been observed to decay, so ¹H is considered stable. Some Grand Unified Theories proposed in the 1970s predict that proton decay can occur with a half-life between 10³¹ and 10³⁶ years. If so, then ¹H (and all nuclei now believed to be stable) are only observationally stable. As of 2018, experiments have shown that the mean lifetime of the proton is greater than 3.6 × 10²⁹ years.
Hydrogen-2 (deuterium)
Deuterium, ²H (atomic mass 2.01410 u), the other stable hydrogen isotope, has one proton and one neutron in its nucleus, called a deuteron. ²H comprises 26–184 ppm (by population, not mass) of hydrogen on Earth; the lower number tends to be found in hydrogen gas and higher enrichment (150 ppm) is typical of seawater. Deuterium on Earth has been enriched with respect to its initial concentration in the Big Bang and outer solar system (≈27 ppm, atom fraction) and older parts of the Milky Way (≈23 ppm). Presumably the differential concentration of deuterium in the inner solar system is due to the lower volatility of deuterium gas and compounds, enriching deuterium fractions in comets and planets exposed to significant heat from the Sun over billions of years of solar system evolution.
Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in ²H is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for ¹H nuclear magnetic resonance spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.
Hydrogen-3 (tritium)
Tritium, ³H (atomic mass 3.01605 u), has one proton and two neutrons in its nucleus (the triton). It is radioactive, decaying into helium-3 by β⁻ emission with a half-life of 12.32 years. Traces of ³H occur naturally due to cosmic rays interacting with atmospheric gases. ³H has also been released in nuclear tests. It is used in fusion bombs, as a tracer in isotope geochemistry, and in self-powered lighting devices.
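Since the decay is a simple exponential, the surviving fraction of a tritium sample is easy to compute. A minimal sketch (the 12.32-year half-life is the only physical input; the function name is illustrative):

```python
HALF_LIFE_YEARS = 12.32  # half-life of tritium

def tritium_fraction(t_years):
    """Fraction of an initial tritium sample remaining after t_years."""
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (1, 12.32, 24.64, 50):
    print(f"{t:>6} y -> {tritium_fraction(t):.4f}")
# one half-life leaves 0.5000, two leave 0.2500, 50 years leave ~0.0600
```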
The most common way to produce ³H is to bombard a natural isotope of lithium, ⁶Li, with neutrons in a nuclear reactor.
Tritium can be used in chemical and biological labeling experiments as a radioactive tracer. Deuterium–tritium fusion uses ²H and ³H as its main reactants, giving energy through the loss of mass when the two nuclei collide and fuse at high temperatures.
Hydrogen-4
⁴H, with one proton and three neutrons, is a highly unstable isotope. It has been synthesized in the laboratory by bombarding tritium with fast-moving deuterons; the triton captured a neutron from the deuteron. The presence of ⁴H was deduced by detecting the emitted protons. It decays by neutron emission into ³H with a half-life on the order of 10⁻²² seconds (roughly 139 ys).
In the 1955 satirical novel The Mouse That Roared, the name quadium was given to the ⁴H that powered the Q-bomb that the Duchy of Grand Fenwick captured from the United States.
Hydrogen-5
⁵H, with one proton and four neutrons, is highly unstable. It has been synthesized in the lab by bombarding tritium with fast-moving tritons; one triton captures two neutrons from the other, becoming a nucleus with one proton and four neutrons. The remaining proton may be detected, and the existence of ⁵H deduced. It decays by double neutron emission into ³H and has a half-life of about 8.6 × 10⁻²³ s (86 ys) – the shortest half-life of any known nuclide.
Hydrogen-6
⁶H has one proton and five neutrons. Its half-life is extremely short, on the yoctosecond scale.
Hydrogen-7
⁷H has one proton and six neutrons. It was first synthesized in 2003 by a group of Russian, Japanese and French scientists at RIKEN's Radioactive Isotope Beam Factory by bombarding hydrogen with helium-8 atoms; all six of the helium-8's neutrons were donated to the hydrogen nucleus. The two remaining protons were detected by the "RIKEN telescope", a device made of several layers of sensors, positioned behind the target of the RI Beam cyclotron. ⁷H has a half-life on the yoctosecond scale.
Decay chains
⁴H and ⁵H decay directly to ³H, which then decays to stable ³He. Decay of the heaviest isotopes, ⁶H and ⁷H, has not been experimentally observed.
Decay times are in yoctoseconds for all these isotopes except ³H, whose half-life is measured in years.
See also
Hydrogen atom
Hydrogen isotope biogeochemistry
Hydrogen-4.1 (Muonic helium)
Muonium – acts like an exotic light isotope of hydrogen
Notes
References
Further reading
Hydrogen
"Chemistry"
] | 1,491 | [
"Lists of isotopes by element",
"Isotopes of hydrogen",
"Isotopes"
] |
2,039,930 | https://en.wikipedia.org/wiki/Ensembl%20genome%20database%20project | Ensembl genome database project is a scientific project at the European Bioinformatics Institute, which provides a centralized resource for geneticists, molecular biologists and other researchers studying the genomes of our own species and other vertebrates and model organisms. Ensembl is one of several well known genome browsers for the retrieval of genomic information.
Similar databases and browsers are found at NCBI and the University of California, Santa Cruz (UCSC).
History
The human genome consists of three billion base pairs, which code for approximately 20,000–25,000 genes. However the genome alone is of little use, unless the locations and relationships of individual genes can be identified. One option is manual annotation, whereby a team of scientists tries to locate genes using experimental data from scientific journals and public databases. However this is a slow, painstaking task. The alternative, known as automated annotation, is to use the power of computers to do the complex pattern-matching of protein to DNA. The Ensembl project was launched in 1999 in response to the imminent completion of the Human Genome Project, with the initial goals of automatically annotating the human genome, integrating this annotation with other available biological data, and making all this knowledge publicly available.
In the Ensembl project, sequence data are fed into the gene annotation system (a collection of software "pipelines" written in Perl) which creates a set of predicted gene locations and saves them in a MySQL database for subsequent analysis and display. Ensembl makes these data freely accessible to the world research community. All the data and code produced by the Ensembl project is available to download, and there is also a publicly accessible database server allowing remote access. In addition, the Ensembl website provides computer-generated visual displays of much of the data.
Over time the project has expanded to include additional species (including key model organisms such as mouse, fruitfly and zebrafish) as well as a wider range of genomic data, including genetic variations and regulatory features. Since April 2009, a sister project, Ensembl Genomes, has extended the scope of Ensembl into invertebrate metazoa, plants, fungi, bacteria, and protists, focusing on providing taxonomic and evolutionary context to genes, whilst the original project continues to focus on vertebrates.
As of 2020, Ensembl supported over 50,000 genomes across the Ensembl and Ensembl Genomes databases, and had added new features such as Rapid Release, a website designed to make genome annotation data available to users more quickly, and a COVID-19 site giving access to the SARS-CoV-2 reference genome.
Displaying genomic data
Central to the Ensembl concept is the ability to automatically generate graphical views of the alignment of genes and other genomic data against a reference genome. These are shown as data tracks, and individual tracks can be turned on and off, allowing the user to customise the display to suit their research interests. The interface also enables the user to zoom in to a region or move along the genome in either direction.
Other displays show data at varying levels of resolution, from whole karyotypes down to text-based representations of DNA and amino acid sequences, or present other types of display such as trees of similar genes (homologues) across a range of species. The graphics are complemented by tabular displays, and in many cases data can be exported directly from the page in a variety of standard file formats such as FASTA.
Externally produced data can also be added to the display by uploading a suitable file in one of the supported formats, such as BAM, BED, or PSL.
Graphics are generated using a suite of custom Perl modules based on GD, the standard Perl graphics display library.
Alternative access methods
In addition to its website, Ensembl provides a REST API and a Perl API (Application Programming Interface) that models biological objects such as genes and proteins, allowing simple scripts to be written to retrieve data of interest. The same API is used internally by the web interface to display the data. It is divided into sections such as the core API, the compara API (for comparative genomics data), the variation API (for accessing SNPs, SNVs, CNVs, etc.), and the functional genomics API (for accessing regulatory data).
The Ensembl website provides extensive information on how to install and use the API.
This software can be used to access the public MySQL database, avoiding the need to download enormous datasets. Users can even retrieve data from MySQL with direct SQL queries, but this requires extensive knowledge of the current database schema.
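As a minimal illustration of scripted access, here is a sketch against the public REST service (assuming the Python requests package; the endpoint and the example gene ID for human BRAF follow the public REST documentation, but check the current docs before relying on specific field names):

```python
# Minimal sketch: look up a gene record via the public Ensembl REST service.
import requests

SERVER = "https://rest.ensembl.org"
GENE_ID = "ENSG00000157764"  # human BRAF, the canonical example in the REST docs

response = requests.get(
    f"{SERVER}/lookup/id/{GENE_ID}",
    headers={"Content-Type": "application/json"},
    timeout=30,
)
response.raise_for_status()
gene = response.json()
# Print the gene symbol and its genomic coordinates.
print(gene["display_name"], gene["seq_region_name"], gene["start"], gene["end"])
```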
Large datasets can be retrieved using the BioMart data-mining tool. It provides a web interface for downloading datasets using complex queries.
Last, there is an FTP server which can be used to download entire MySQL databases as well as some selected data sets in other formats.
Current species
The annotated genomes include most fully sequenced vertebrates and selected model organisms. All of them are eukaryotes; there are no prokaryotes. As of 2022, there were 271 registered species.
Open source/mirrors
All data produced by the Ensembl project are open access and all software is open source, freely available to the scientific community under a CC BY 4.0 license. The Ensembl website is currently mirrored at three different locations worldwide to improve the service.
See also
List of sequenced eukaryotic genomes
List of biological databases
Sequence analysis
Sequence profiling tool
Sequence motif
UCSC Genome Browser
ENCODE
References
External links
Vega
Pre-Ensembl
Ensembl genomes
UCSC Genome Browser
NCBI
Ensembl: Browsing chordate genomes on EBI Train OnLine
Genetic engineering in the United Kingdom
Genome databases
Medical databases in the United Kingdom
Medical genetics
Science and technology in Cambridgeshire
South Cambridgeshire District
Wellcome Trust
Biological databases
Bioinformatics
Computational biology | Ensembl genome database project | [
"Engineering",
"Biology"
] | 1,277 | [
"Bioinformatics",
"Biological engineering",
"Biological databases",
"Computational biology"
] |
2,040,454 | https://en.wikipedia.org/wiki/Madelung%20constant | The Madelung constant is used in determining the electrostatic potential of a single ion in a crystal by approximating the ions by point charges. It is named after Erwin Madelung, a German physicist.
Because the anions and cations in an ionic solid attract each other by virtue of their opposing charges, separating the ions requires a certain amount of energy. This energy must be given to the system in order to break the anion–cation bonds. The energy required to break these bonds for one mole of an ionic solid under standard conditions is the lattice energy.
Formal expression
The Madelung constant allows for the calculation of the electric potential $V_i$ of the ion at position $r_i$ due to all other ions of the lattice

$$V_i = \frac{e}{4\pi\varepsilon_0} \sum_{j \neq i} \frac{z_j}{r_{ij}}$$

where $r_{ij} = |r_i - r_j|$ is the distance between the $i$th and the $j$th ion. In addition,

$z_j$ = number of charges of the $j$th ion

$e$ = the elementary charge, $1.6022 \times 10^{-19}$ C

$\varepsilon_0$ = the permittivity of free space.

If the distances $r_{ij}$ are normalized to the nearest-neighbor distance $r_0$, the potential may be written

$$V_i = \frac{e}{4\pi\varepsilon_0 r_0} \sum_{j \neq i} \frac{z_j}{r_{ij}/r_0} = \frac{e}{4\pi\varepsilon_0 r_0} M_i$$

with $M_i$ being the (dimensionless) Madelung constant of the $i$th ion

$$M_i = \sum_{j \neq i} \frac{z_j}{r_{ij}/r_0}.$$

Another convention is to base the reference length on the cubic root $w$ of the unit cell volume, which for cubic systems is equal to the lattice constant $a$. Thus, the Madelung constant then reads

$$\overline{M}_i = M_i \frac{r_0}{w}.$$

The electrostatic energy of the ion at site $r_i$ then is the product of its charge with the potential acting at its site

$$E_{el,i} = z_i e V_i = \frac{e^2}{4\pi\varepsilon_0 r_0} z_i M_i.$$

There occur as many Madelung constants in a crystal structure as ions occupy different lattice sites. For example, for the ionic crystal NaCl, there arise two Madelung constants – one for Na and another for Cl. Since both ions, however, occupy lattice sites of the same symmetry, they both are of the same magnitude and differ only by sign. The electrical charges of the Na$^+$ and Cl$^-$ ions are assumed to be onefold positive and negative, respectively, $z_{\text{Na}} = 1$ and $z_{\text{Cl}} = -1$. The nearest-neighbour distance amounts to half the lattice constant of the cubic unit cell, $r_0 = a/2$, and the Madelung constants become

$$M_{\text{Na}} = -M_{\text{Cl}} = \sum_{j,k,l}{}^{\prime}\ \frac{(-1)^{j+k+l}}{\sqrt{j^2 + k^2 + l^2}}.$$
The prime indicates that the term $j = k = l = 0$ is to be left out. Since this sum is conditionally convergent it is not suitable as a definition of Madelung's constant unless the order of summation is also specified. There are two "obvious" methods of summing this series, by expanding cubes or expanding spheres. Although the latter is often found in the literature,
it fails to converge, as was shown by Emersleben in 1951. The summation over expanding cubes converges to the correct value, although very slowly. An alternative summation procedure, presented by Borwein, Borwein and Taylor, uses analytic continuation of an absolutely convergent series.
There are many practical methods for calculating Madelung's constant using either direct summation (for example, the Evjen method) or integral transforms, which are used in the Ewald method. A fast converging formula for the Madelung constant of NaCl is

$$M = -12\pi \sum_{m,n \geq 1,\ \text{odd}} \operatorname{sech}^2\!\left(\frac{\pi}{2}\sqrt{m^2 + n^2}\right).$$
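To make the direct-summation idea concrete, here is a minimal Python sketch of an expanding-cube computation using Evjen's fractional boundary weights (1/2 for faces, 1/4 for edges, 1/8 for corners, the standard Evjen choice); the cutoff n = 8 is an arbitrary illustrative value:

```python
import math

def madelung_nacl_evjen(n: int) -> float:
    """Estimate the NaCl Madelung constant by summing over an expanding
    cube of half-width n, with Evjen's fractional weights so that the
    enclosed charge is nearly neutral and the series converges quickly."""
    total = 0.0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if i == j == k == 0:
                    continue  # skip the reference ion itself
                # Halve the weight for every coordinate lying on the cube boundary.
                w = 1.0
                for c in (i, j, k):
                    if abs(c) == n:
                        w *= 0.5
                # Alternating sign follows (-1)^(j+k+l) in the lattice sum above.
                sign = -1.0 if (i + j + k) % 2 else 1.0
                total += w * sign / math.sqrt(i * i + j * j + k * k)
    return total

print(madelung_nacl_evjen(8))  # approaches -1.74756 for the Na+ site (|M| = 1.74756)
```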
The continuous reduction of $M$ with decreasing coordination number $Z$ for the three cubic AB compounds (when accounting for the doubled charges in ZnS) explains the observed propensity of alkali halides to crystallize in the structure with the highest $Z$ compatible with their ionic radii. Note also how the fluorite structure, being intermediate between the caesium chloride and sphalerite structures, is reflected in the Madelung constants.
Generalization
It is assumed for the calculation of Madelung constants that an ion's charge density may be approximated by a point charge. This is allowed, if the electron distribution of the ion is spherically symmetric. In particular cases, however, when the ions reside on lattice site of certain crystallographic point groups, the inclusion of higher order moments, i.e. multipole moments of the charge density might be required. It is shown by electrostatics that the interaction between two point charges only accounts for the first term of a general Taylor series describing the interaction between two charge distributions of arbitrary shape. Accordingly, the Madelung constant only represents the monopole-monopole term.
The electrostatic interaction model of ions in solids has thus been extended to a point multipole concept that also includes higher multipole moments like dipoles, quadrupoles, etc. These concepts require the determination of higher-order Madelung constants or so-called electrostatic lattice constants. The proper calculation of electrostatic lattice constants has to consider the crystallographic point groups of ionic lattice sites; for instance, dipole moments may only arise on polar lattice sites, i.e. those exhibiting a C1, C1h, Cn or Cnv site symmetry (n = 2, 3, 4 or 6). These second-order Madelung constants turned out to have significant effects on the lattice energy and other physical properties of heteropolar crystals.
Application to organic salts
The Madelung constant is also a useful quantity in describing the lattice energy of organic salts. Izgorodina and coworkers have described a generalised method (called the EUGEN method) of calculating the Madelung constant for any crystal structure.
References
External links
Crystallography
Physical constants
Physical chemistry
Solid-state chemistry
Theoretical chemistry | Madelung constant | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,043 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Materials science",
"Theoretical chemistry",
"Crystallography",
"Physical constants",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Solid-state chemistry"
] |
2,041,043 | https://en.wikipedia.org/wiki/Hepatopancreas | The hepatopancreas, digestive gland or midgut gland is an organ of the digestive tract of arthropods and molluscs. It provides the functions which in mammals are provided separately by the liver and pancreas, including the production of digestive enzymes, and absorption of digested food.
Arthropods
Arthropods, especially detritivores in the order Isopoda, suborder Oniscidea (woodlice), have been shown to be able to store heavy metals in their hepatopancreas. This can lead to bioaccumulation through the food chain, with implications for food-web disruption if the accumulation gets high enough in polluted areas; for example, high metal concentrations are seen in spiders of the genus Dysdera, which feed on woodlice, including their hepatopancreas, the major metal storage organ of isopods in polluted sites.
Molluscs
The hepatopancreas is a centre for lipid metabolism and for storage of lipids in gastropods.
Some species in the genus Phyllodesmium contain active zooxanthellae of the genus Symbiodinium in the hepatopancreas.
See also
Crab duplex-specific nuclease
Digestive system of gastropods
Tomalley, the hepatopancreas of crustaceans, often used as food
References
Digestive system
Mollusc anatomy
Gastropod anatomy
Arthropod anatomy
Fish anatomy | Hepatopancreas | [
"Biology"
] | 309 | [
"Digestive system",
"Organ systems"
] |
2,041,168 | https://en.wikipedia.org/wiki/Shear%20rate | In physics, mechanics and other areas of science, shear rate is the rate at which a progressive shear strain is applied to some material, causing shearing to the material. Shear rate is a measure of how the velocity changes with distance.
Simple shear
The shear rate for a fluid flowing between two parallel plates, one moving at a constant speed and the other one stationary (Couette flow), is defined by

$$\dot\gamma = \frac{v}{h},$$

where:

$\dot\gamma$ is the shear rate, measured in reciprocal seconds;

$v$ is the velocity of the moving plate, measured in meters per second;

$h$ is the distance between the two parallel plates, measured in meters.

Or:

$$\dot\gamma_{ij} = \frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}.$$
For the simple shear case, it is just a gradient of velocity in a flowing material. The SI unit of measurement for shear rate is s−1, expressed as "reciprocal seconds" or "inverse seconds". However, when modelling fluids in 3D, it is common to consider a scalar value for the shear rate by calculating the second invariant of the strain-rate tensor

$$\dot\gamma = \sqrt{2\,\boldsymbol{\varepsilon} : \boldsymbol{\varepsilon}},$$

where $\boldsymbol{\varepsilon}$ is the strain-rate tensor.
The shear rate at the inner wall of a Newtonian fluid flowing within a pipe is

$$\dot\gamma = \frac{8v}{d},$$

where:

$\dot\gamma$ is the shear rate, measured in reciprocal seconds;

$v$ is the linear fluid velocity;

$d$ is the inside diameter of the pipe.
The linear fluid velocity $v$ is related to the volumetric flow rate $Q$ by

$$v = \frac{Q}{A},$$

where $A$ is the cross-sectional area of the pipe, which for an inside pipe radius of $r$ is given by

$$A = \pi r^2,$$

thus producing

$$v = \frac{Q}{\pi r^2}.$$

Substituting the above into the earlier equation for the shear rate of a Newtonian fluid flowing within a pipe, and noting (in the denominator) that $d = 2r$:

$$\dot\gamma = \frac{8v}{d} = \frac{8\left(\frac{Q}{\pi r^2}\right)}{2r},$$

which simplifies to the following equivalent form for wall shear rate in terms of volumetric flow rate $Q$ and inner pipe radius $r$:

$$\dot\gamma = \frac{4Q}{\pi r^3}.$$
For a Newtonian fluid, wall shear stress ($\tau_w$) can be related to shear rate by $\tau_w = \dot\gamma \mu$, where $\mu$ is the dynamic viscosity of the fluid. For non-Newtonian fluids, there are different constitutive laws, depending on the fluid, that relate the stress tensor to the shear rate tensor.
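As a quick numerical illustration of the last two relations, here is a minimal sketch (the flow rate, radius, and viscosity values are illustrative, not from any source above):

```python
import math

def wall_shear_rate(q_m3_s: float, radius_m: float) -> float:
    """Wall shear rate (1/s) of a Newtonian fluid in a circular pipe,
    from the volumetric flow rate: gamma_dot = 4*Q / (pi * r^3)."""
    return 4.0 * q_m3_s / (math.pi * radius_m ** 3)

# Illustrative values: 1 L/s of water through a pipe of 10 mm inner radius.
gamma_dot = wall_shear_rate(1.0e-3, 0.010)
tau_wall = 1.0e-3 * gamma_dot  # wall stress in Pa, with mu ~ 1.0e-3 Pa*s for water
print(f"shear rate = {gamma_dot:.0f} 1/s, wall stress = {tau_wall:.2f} Pa")
```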
References
See also
Shear strain
Strain rate
Non-Newtonian fluid
Continuum mechanics
Temporal rates | Shear rate | [
"Physics"
] | 413 | [
"Temporal quantities",
"Physical quantities",
"Continuum mechanics",
"Temporal rates",
"Classical mechanics"
] |
2,041,176 | https://en.wikipedia.org/wiki/Simple%20shear | Simple shear is a deformation in which parallel planes in a material remain parallel and maintain a constant distance, while translating relative to each other.
In fluid mechanics
In fluid mechanics, simple shear is a special case of deformation where only one component of the velocity vector has a non-zero value:

$$v_x = f(x, y), \qquad v_y = v_z = 0.$$

And the gradient of velocity is constant and perpendicular to the velocity itself:

$$\frac{\partial v_x}{\partial y} = \dot\gamma,$$

where $\dot\gamma$ is the shear rate and:

$$\frac{\partial v_x}{\partial x} = \frac{\partial v_x}{\partial z} = 0.$$

The displacement gradient tensor Γ for this deformation has only one nonzero term:

$$\Gamma = \begin{bmatrix} 0 & \dot\gamma & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Simple shear with the rate $\dot\gamma$ is the combination of pure shear strain with the rate of $\tfrac{1}{2}\dot\gamma$ and rotation with the rate of $\tfrac{1}{2}\dot\gamma$:

$$\Gamma = \begin{bmatrix} 0 & \tfrac{1}{2}\dot\gamma & 0 \\ \tfrac{1}{2}\dot\gamma & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & \tfrac{1}{2}\dot\gamma & 0 \\ -\tfrac{1}{2}\dot\gamma & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The mathematical model representing simple shear is a shear mapping restricted to the physical limits. It is an elementary linear transformation represented by a matrix. The model may represent laminar flow velocity at varying depths of a long channel with constant cross-section. Limited shear deformation is also used in vibration control, for instance base isolation of buildings for limiting earthquake damage.
In solid mechanics
In solid mechanics, a simple shear deformation is defined as an isochoric plane deformation in which there are a set of line elements with a given reference orientation that do not change length and orientation during the deformation. This deformation is differentiated from a pure shear by virtue of the presence of a rigid rotation of the material. When rubber deforms under simple shear, its stress-strain behavior is approximately linear. A rod under torsion is a practical example for a body under simple shear.
If $\mathbf{e}_1$ is the fixed reference orientation in which line elements do not deform during the deformation and $\mathbf{e}_1$–$\mathbf{e}_2$ is the plane of deformation, then the deformation gradient in simple shear can be expressed as

$$\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

We can also write the deformation gradient as

$$\boldsymbol{F} = \boldsymbol{1} + \gamma\, \mathbf{e}_1 \otimes \mathbf{e}_2.$$
Simple shear stress–strain relation
In linear elasticity, shear stress, denoted $\tau$, is related to shear strain, denoted $\gamma$, by the following equation:

$$\tau = \gamma G,$$

where $G$ is the shear modulus of the material, given by

$$G = \frac{E}{2(1 + \nu)}.$$

Here $E$ is Young's modulus and $\nu$ is Poisson's ratio. Combining gives

$$\tau = \frac{\gamma E}{2(1 + \nu)}.$$
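A quick numerical illustration of the combined relation (the material values are illustrative, roughly those of mild steel):

```python
def shear_stress(gamma: float, youngs_modulus_pa: float, poisson: float) -> float:
    """Shear stress (Pa) in linear elasticity: tau = gamma * E / (2 * (1 + nu))."""
    return gamma * youngs_modulus_pa / (2.0 * (1.0 + poisson))

# Illustrative values: E ~ 200 GPa, nu ~ 0.3, small shear strain of 0.001.
print(shear_stress(0.001, 200e9, 0.3))  # ~ 7.7e7 Pa, i.e. about 77 MPa
```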
See also
Deformation (mechanics)
Infinitesimal strain theory
Finite strain theory
Pure shear
References
Fluid mechanics
Continuum mechanics | Simple shear | [
"Physics",
"Engineering"
] | 419 | [
"Civil engineering",
"Fluid mechanics",
"Classical mechanics",
"Continuum mechanics"
] |
2,041,980 | https://en.wikipedia.org/wiki/Couette%20flow | In fluid dynamics, Couette flow is the flow of a viscous fluid in the space between two surfaces, one of which is moving tangentially relative to the other. The relative motion of the surfaces imposes a shear stress on the fluid and induces flow. Depending on the definition of the term, there may also be an applied pressure gradient in the flow direction.
The Couette configuration models certain practical problems, like the Earth's mantle and atmosphere, and flow in lightly loaded journal bearings. It is also employed in viscometry and to demonstrate approximations of reversibility.
It is named after Maurice Couette, a Professor of Physics at the French University of Angers in the late 19th century.
Planar Couette flow
Couette flow is frequently used in undergraduate physics and engineering courses to illustrate shear-driven fluid motion. A simple configuration corresponds to two infinite, parallel plates separated by a distance $h$; one plate translates with a constant relative velocity $U$ in its own plane. Neglecting pressure gradients, the Navier–Stokes equations simplify to

$$\frac{d^2 u}{dy^2} = 0,$$

where $y$ is the spatial coordinate normal to the plates and $u(y)$ is the velocity field. This equation reflects the assumption that the flow is unidirectional; that is, only one of the three velocity components is non-trivial. If the lower plate corresponds to $y = 0$, the boundary conditions are $u(0) = 0$ and $u(h) = U$. The exact solution

$$u(y) = U \frac{y}{h}$$

can be found by integrating twice and solving for the constants using the boundary conditions.
A notable aspect of the flow is that shear stress is constant throughout the domain. In particular, the first derivative of the velocity, $du/dy = U/h$, is constant. According to Newton's law of viscosity (for a Newtonian fluid), the shear stress is the product of this expression and the (constant) fluid viscosity.
Startup
In reality, the Couette solution is not reached instantaneously. The "startup problem" describing the approach to steady state is given by

$$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2},$$

subject to the initial condition

$$u(y, 0) = 0, \qquad 0 < y < h,$$

and with the same boundary conditions as the steady flow:

$$u(0, t) = 0, \qquad u(h, t) = U.$$

The problem can be made homogeneous by subtracting the steady solution. Then, applying separation of variables leads to the solution:

$$u(y, t) = U \frac{y}{h} + \frac{2U}{\pi} \sum_{n=1}^{\infty} \frac{(-1)^n}{n} \exp\!\left(-\frac{n^2 \pi^2 \nu t}{h^2}\right) \sin\!\left(\frac{n \pi y}{h}\right).$$

The timescale describing relaxation to steady state is $t \sim h^2/\nu$, as illustrated in the figure. The time required to reach the steady state depends only on the spacing $h$ between the plates and the kinematic viscosity $\nu$ of the fluid, but not on $U$.
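A minimal sketch evaluating this series numerically (the values for $U$, $h$, and $\nu$ are illustrative, chosen so the relaxation timescale $h^2/\nu$ is 1 s; the series is truncated at a finite number of terms):

```python
import numpy as np

def couette_startup(y, t, U=1.0, h=1.0, nu=1.0e-6, n_terms=200):
    """Velocity u(y, t) for startup of plane Couette flow: the steady
    profile U*y/h plus exponentially decaying Fourier modes."""
    u = U * y / h
    for n in range(1, n_terms + 1):
        u += (2.0 * U / np.pi) * ((-1.0) ** n / n) \
             * np.exp(-n**2 * np.pi**2 * nu * t / h**2) \
             * np.sin(n * np.pi * y / h)
    return u

# Illustrative case: water-like fluid (nu = 1e-6 m^2/s), 1 mm gap,
# sampled a tenth of the way through the relaxation timescale.
y = np.linspace(0.0, 1.0e-3, 5)
print(couette_startup(y, t=0.1, U=1.0, h=1.0e-3, nu=1.0e-6))
```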
Planar flow with pressure gradient
A more general Couette flow includes a constant pressure gradient $\frac{dp}{dx}$ in a direction parallel to the plates. The Navier–Stokes equations are

$$\frac{d^2 u}{dy^2} = \frac{1}{\mu} \frac{dp}{dx},$$

where $\mu$ is the dynamic viscosity. Integrating the above equation twice and applying the boundary conditions (same as in the case of Couette flow without pressure gradient) gives

$$u(y) = U \frac{y}{h} - \frac{1}{2\mu} \frac{dp}{dx}\, y\, (h - y).$$

The pressure gradient can be positive (adverse pressure gradient) or negative (favorable pressure gradient). In the limiting case of stationary plates ($U = 0$), the flow is referred to as plane Poiseuille flow, and has a symmetric (with reference to the horizontal mid-plane) parabolic velocity profile.
Compressible flow
In incompressible flow, the velocity profile is linear because the fluid temperature is constant. When the upper and lower walls are maintained at different temperatures, the velocity profile is more complicated. However, it has an exact implicit solution as shown by C. R. Illingworth in 1950.
Consider the plane Couette flow with lower wall at rest and the upper wall in motion with constant velocity . Denote fluid properties at the lower wall with subscript and properties at the upper wall with subscript . The properties and the pressure at the upper wall are prescribed and taken as reference quantities. Let be the distance between the two walls. The boundary conditions are
where is the specific enthalpy and is the specific heat. Conservation of mass and -momentum requires everywhere in the flow domain. Conservation of energy and -momentum reduce to
where is the wall shear stress. The flow does not depend on the Reynolds number , but rather on the Prandtl number and the Mach number , where is the thermal conductivity, is the speed of sound and is the specific heat ratio. Introduce the non-dimensional variables
In terms of these quantities, the solutions are
where is the heat transferred per unit time per unit area from the lower wall. Thus are implicit functions of . One can also write the solution in terms of the recovery temperature and recovery enthalpy evaluated at the temperature of an insulated wall i.e., the values of and for which . Then the solution is
If the specific heat is constant, then . When and , then and are constant everywhere, thus recovering the incompressible Couette flow solution. Otherwise, one must know the full temperature dependence of the viscosity. While there is no simple expression for the viscosity–temperature relation that is both accurate and general, there are several approximations for certain materials (see, e.g., temperature dependence of viscosity). When and , the recovery quantities become unity. For air, the values are commonly used, and the results for this case are shown in the figure.
The effects of dissociation and ionization (i.e., is not constant) have also been studied; in that case the recovery temperature is reduced by the dissociation of molecules.
Rectangular channel
One-dimensional flow is valid when both plates are infinitely long in the streamwise ($x$) and spanwise ($z$) directions. When the spanwise length is finite, the flow becomes two-dimensional and is a function of both $y$ and $z$. However, the infinite length in the streamwise direction must be retained in order to ensure the unidirectional nature of the flow.
As an example, consider an infinitely long rectangular channel with transverse height $h$ and spanwise width $l$, subject to the condition that the top wall moves with a constant velocity $U$. Without an imposed pressure gradient, the Navier–Stokes equations reduce to

$$\frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0,$$

with boundary conditions

$$u(y = 0, z) = 0, \qquad u(y = h, z) = U, \qquad u(y, z = 0) = u(y, z = l) = 0.$$

Using separation of variables, the solution is given by

$$u(y, z) = \frac{4U}{\pi} \sum_{n = 1, 3, 5, \ldots} \frac{1}{n} \frac{\sinh(n \pi y / l)}{\sinh(n \pi h / l)} \sin\!\left(\frac{n \pi z}{l}\right).$$
When $l \gg h$, the planar Couette flow is recovered, as shown in the figure.
Coaxial cylinders
Taylor–Couette flow is a flow between two rotating, infinitely long, coaxial cylinders. The original problem was solved by Stokes in 1845, but Geoffrey Ingram Taylor's name was attached to the flow because he studied its stability in a famous 1923 paper.
The problem can be solved in cylindrical coordinates $(r, \theta, z)$. Denote the radii of the inner and outer cylinders as $R_1$ and $R_2$. Assuming the cylinders rotate at constant angular velocities $\Omega_1$ and $\Omega_2$, then the velocity in the $\theta$-direction is

$$u_\theta(r) = A r + \frac{B}{r}, \qquad A = \frac{\Omega_2 R_2^2 - \Omega_1 R_1^2}{R_2^2 - R_1^2}, \qquad B = \frac{(\Omega_1 - \Omega_2)\, R_1^2 R_2^2}{R_2^2 - R_1^2}.$$
This equation shows that the effects of curvature no longer allow for constant shear in the flow domain.
Coaxial cylinders of finite length
The classical Taylor–Couette flow problem assumes infinitely long cylinders; if the cylinders have non-negligible finite length , then the analysis must be modified (though the flow is still unidirectional). For , the finite-length problem can be solved using separation of variables or integral transforms, giving:
where are the Modified Bessel functions of the first and second kind.
See also
Laminar flow
Stokes-Couette flow
Hagen–Poiseuille equation
Taylor–Couette flow
Hagen–Poiseuille flow from the Navier–Stokes equations
References
Sources
Liepmann, H. W., and Z. O. Bleviss. "The effects of dissociation and ionization on compressible couette flow." Douglas Aircraft Co. Rept. SM-19831 130 (1956).
Liepmann, Hans Wolfgang, and Anatol Roshko. Elements of gasdynamics. Courier Corporation, 1957.
Richard Feynman (1964) The Feynman Lectures on Physics: Mainly Electromagnetism and Matter, § 41–6 Couette flow, Addison–Wesley
External links
AMS Glossary: Couette Flow
A rheologists perspective: the science behind the couette cell accessory
Flow regimes
Fluid dynamics | Couette flow | [
"Chemistry",
"Engineering"
] | 1,607 | [
"Piping",
"Chemical engineering",
"Flow regimes",
"Fluid dynamics"
] |
2,043,467 | https://en.wikipedia.org/wiki/Heat%20recovery%20steam%20generator | A heat recovery steam generator (HRSG) is an energy recovery heat exchanger that recovers heat from a hot gas stream, such as a combustion turbine or other waste gas stream. It produces steam that can be used in a process (cogeneration) or used to drive a steam turbine (combined cycle).
HRSGs
HRSGs consist of four major components: the economizer, evaporator, superheater and water preheater. The different components are put together to meet the operating requirements of the unit. See the attached illustration of a Modular HRSG General Arrangement.
Modular HRSGs can be categorized by a number of ways such as direction of exhaust gas flow or number of pressure levels. Based on the flow of exhaust gases, HRSGs are categorized into vertical and horizontal types. In horizontal type HRSGs, exhaust gas flows horizontally over vertical tubes whereas in vertical type HRSGs, exhaust gas flows vertically over horizontal tubes. Based on pressure levels, HRSGs can be categorized into single pressure and multi pressure. Single pressure HRSGs have only one steam drum and steam is generated at a single pressure level, whereas multi pressure HRSGs employ two (double pressure) or three (triple pressure) steam drums. As such, triple pressure HRSGs consist of three sections: an LP (low pressure) section, a reheat/IP (intermediate pressure) section, and an HP (high pressure) section. Each section has a steam drum and an evaporator section where water is converted to steam. This steam then passes through superheaters to raise the temperature beyond the saturation point.
The steam and water pressure parts of an HRSG are subjected to a wide range of degradation mechanisms, for example creep, thermal fatigue, creep-fatigue, mechanical fatigue, Flow Accelerated Corrosion (FAC), corrosion and corrosion fatigue, amongst others.
Additionally, HRSGs can include cold water heat exchangers designed to condense moisture in flue gases, reducing emissions and increasing efficiency.
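The sizing logic behind these sections follows a simple steady-state heat balance: heat given up by the exhaust gas equals the enthalpy gained by the water/steam stream. A minimal sketch with purely illustrative values (the gas flow, temperatures, enthalpies, and effectiveness are assumptions, not data for any particular unit):

```python
def steam_production_kg_s(gas_flow_kg_s, cp_gas_kj_kgk, t_gas_in_c, t_gas_out_c,
                          h_steam_kj_kg, h_feedwater_kj_kg, effectiveness=0.98):
    """Rough steady-state heat balance for an HRSG: recovered exhaust-gas
    heat divided by the enthalpy rise from feedwater to steam."""
    q_gas_kw = gas_flow_kg_s * cp_gas_kj_kgk * (t_gas_in_c - t_gas_out_c)
    return effectiveness * q_gas_kw / (h_steam_kj_kg - h_feedwater_kj_kg)

# Illustrative numbers only: 100 kg/s of turbine exhaust cooled from 600 C
# to 150 C, raising steam at ~3,200 kJ/kg from feedwater at ~440 kJ/kg.
print(steam_production_kg_s(100, 1.1, 600, 150, 3200, 440))  # ~ 17.6 kg/s of steam
```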
Packaged HRSGs
Packaged HRSGs are designed to be shipped as a fully assembled unit from the factory. They can be used in waste heat or turbine (usually under 20 MW) applications. The packaged HRSG can have a water-cooled furnace, which allows for higher supplemental firing and better overall efficiency.
Variations
Some HRSGs include supplemental, or duct firing. These additional burners provide additional energy to the HRSG, which produces more steam and hence increases the output of the steam turbine. Generally, duct firing provides electrical output at lower capital cost. It is therefore often utilized for peaking operations.
HRSGs can also have diverter valves to regulate the inlet flow into the HRSG. This allows the gas turbine to continue to operate when there is no steam demand or if the HRSG needs to be taken offline.
Emissions controls may also be located in the HRSG. Some may contain a selective catalytic reduction system to reduce nitrogen oxides (a large contributor to the formation of smog and acid rain) or a catalyst to remove carbon monoxide. The inclusion of an SCR dramatically affects the layout of the HRSG. NOx catalyst performs best in temperatures between . This usually means that the evaporator section of the HRSG will have to be split and the SCR placed in between the two sections. Some low-temperature NOx catalysts have recently come to market that allow for the SCR to be placed between the evaporator and economizer sections ().
Once-through steam generator (OTSG)
A specialized type of HRSG without boiler drums is the once-through steam generator. In this design, the inlet feedwater follows a continuous path without segmented sections for economizers, evaporators, and superheaters. This provides a high degree of flexibility as the sections are allowed to grow or contract based on the heat load being received from the gas turbine. The absence of drums allows for quick changes in steam production and fewer variables to control, and is ideal for cycling and base load operation. With proper material selection, an OTSG can be run dry, meaning the hot exhaust gases can pass over the tubes with no water flowing inside the tubes. This eliminates the need for a bypass stack and exhaust gas diverter system which is required to operate a combustion turbine with a drum-type HRSG out of service.
Applications
Heat recovery can be used extensively in energy projects.
In the energy-rich Persian Gulf region, the steam from the HRSG is used for desalination plants.
Universities are ideal candidates for HRSG applications. They can use a gas turbine to produce high-reliability electricity for campus use. The HRSG can recover the heat from the gas turbine to produce steam/hot water for district heating or cooling.
Large ocean vessels (e.g., Emma Maersk) make use of heat recovery so that their oil-fired boilers can be shut down while underway.
Block diagram
See also
Exhaust heat recovery system
BMW Turbosteamer
Oxygenated treatment
Monotube steam generator
References
External links
HRSG Users
Steam power
Energy recovery | Heat recovery steam generator | [
"Physics"
] | 1,037 | [
"Power (physics)",
"Steam power",
"Physical quantities"
] |
2,043,808 | https://en.wikipedia.org/wiki/International%20School%20of%20New%20Media | International School of New Media (short ISNM) in Lübeck, Germany was an international, affiliated private institute at the University of Lübeck. It was closed end of 2011.
ISNM was established in 2001 by founding director Hubertus von Amelunxen for the purpose of providing a course that combines the technological, scientific, social, economical, and cultural aspects and implications of New Media.
At the ISNM, students work and study with peers with diverse cultural and educational backgrounds. For instance, architects, computer scientists, artists, and sociologists can work together on a project as a team.
Until 2008, ISNM offered a graduate program conducted entirely in English leading to an award of a M.Sc. in Digital Media degree. The 24-month-long program received international accreditation by ZeVA Hannover. The M.Sc. in Digital Media program provided academic training in interdisciplinary media competence. It connected Media Technology, Computer Science, design and e-commerce with the New Media in the arts, culture and society. The M.Sc. degree qualified students for positions at the intersections of digital media in business, industry, research, education, tourism and international organisations, among others.
The ISNM’s interdisciplinary graduate program in Digital Media aimed at creating decision-makers and managers in every sector and discipline with a more complete understanding of digital media's role in the 21st century’s global world and market. The program followed a concerted strategy to reach this goal by providing a unique combination of technology, business, research, arts, design and culture in the Digital Media sector. Substantial emphasis was placed on the transfer of learning from the university to the work setting and an integration of students’ diverse backgrounds into interdisciplinary project management and intercultural teamwork.
The ISNM's founders set the standard for a faculty whose teaching and research consistently influence New Media practices. The faculty is a group with honors and research awards to their credit.
References
External links
International School of New Media website
ISNM Research
Educational institutions established in 2001
International schools in Germany
Lübeck
Buildings and structures in Lübeck
Digital media schools
New media
Schools in Schleswig-Holstein
Universities and colleges in Schleswig-Holstein
2001 establishments in Germany | International School of New Media | [
"Technology"
] | 440 | [
"Multimedia",
"New media"
] |
27,629,035 | https://en.wikipedia.org/wiki/Coalition-proof%20Nash%20equilibrium | The concept of coalition-proof Nash equilibrium applies to certain "noncooperative" environments in which players can freely discuss their strategies but cannot make binding commitments.
It emphasizes immunity to deviations that are self-enforcing. While the best-response property of Nash equilibrium is necessary for self-enforceability, it is not generally sufficient when players can jointly deviate in a way that is mutually beneficial.
The Strong Nash equilibrium is criticized as too "strong" in that the environment allows for unlimited private communication. In the coalition-proof Nash equilibrium the private communication is limited.
Definition
Informally:
At first all players are in a room deliberating their strategies. Then one by one, they leave the room fixing their strategy and only those left are allowed to change their strategies, both individually and together.
Formal definition:
In a single-player, single-stage game $\Gamma$, a strategy $s^* \in S$ is a Perfectly Coalition-Proof Nash equilibrium if and only if $s^*$ maximizes the player's payoff $g(s)$.

Let $(n, t) \neq (1, 1)$. Assume that a Perfectly Coalition-Proof Nash equilibrium has been defined for all games with $m$ players and $s$ stages, where $(m, s) \leq (n, t)$ and $(m, s) \neq (n, t)$.

For any game $\Gamma$ with $n$ players and $t$ stages, $s^*$ is perfectly self-enforcing if, for all proper coalitions $J$ in the set of all coalitions, $s^*_J$ is a Perfectly Coalition-Proof Nash equilibrium in the reduced game induced by fixing the strategies of the players outside $J$, and if the restriction of $s^*$ to any proper subgame forms a Perfectly Coalition-Proof Nash equilibrium in that subgame.

For any game $\Gamma$ with $n$ players and $t$ stages, $s^*$ is a Perfectly Coalition-Proof Nash equilibrium if it is perfectly self-enforcing, and if there does not exist another perfectly self-enforcing strategy vector $s \in S$ such that $g^i(s) > g^i(s^*)$ for all players $i$.
The coalition-proof Nash equilibrium refines the Nash equilibrium by adopting a stronger notion of self-enforceability that allows multilateral deviations.
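For two-player, single-stage games the definition reduces to selecting the Nash equilibria that are not strictly Pareto-dominated by another Nash equilibrium. A minimal sketch of that special case (the game and its payoffs are illustrative):

```python
def pure_nash(payoffs):
    """All pure-strategy Nash equilibria of a two-player game.
    payoffs[(i, j)] = (u1, u2) for row strategy i and column strategy j."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    return [(i, j) for (i, j), (u1, u2) in payoffs.items()
            if all(payoffs[(k, j)][0] <= u1 for k in rows)
            and all(payoffs[(i, k)][1] <= u2 for k in cols)]

def coalition_proof(payoffs):
    """Nash equilibria not strictly Pareto-dominated by another Nash
    equilibrium, the two-player single-stage characterization."""
    eqs = pure_nash(payoffs)
    return [e for e in eqs
            if not any(payoffs[f][0] > payoffs[e][0]
                       and payoffs[f][1] > payoffs[e][1] for f in eqs)]

# Illustrative coordination game with two Nash equilibria, one dominating:
g = {(0, 0): (2, 2), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}
print(pure_nash(g))        # [(0, 0), (1, 1)]
print(coalition_proof(g))  # [(0, 0)], the Pareto-dominated equilibrium is ruled out
```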
Parallel to the idea of correlated equilibrium as an extension to Nash equilibrium when public signalling device is allowed, coalition-proof equilibrium is defined by Diego Moreno and John Wooders.
References
Game theory equilibrium concepts | Coalition-proof Nash equilibrium | [
"Mathematics"
] | 388 | [
"Game theory",
"Game theory equilibrium concepts"
] |
27,630,088 | https://en.wikipedia.org/wiki/Catalytic%20chain%20transfer | Catalytic chain transfer (CCT) is a process that can be incorporated into radical polymerization to obtain greater control over the resulting products.
Introduction
Radical polymerization of vinyl monomers, like methyl (meth)acrylate or vinyl acetate, is a common (industrial) method to prepare polymeric materials. One of the problems associated with this method, however, is that the radical polymerisation reaction rate is so high that even at short reaction times the polymeric chains are exceedingly long. This has several practical disadvantages, especially for polymer processing (e.g. melt-processing). A solution to this problem is catalytic chain transfer, which is a way to make shorter polymer chains in radical polymerisation processes. The method involves adding a catalytic chain transfer agent to the reaction mixture of the monomer and the radical initiator.
Historical background
Boris Smirnov and Alexander Marchenko (USSR) discovered in 1975 that cobalt porphyrins are able to reduce the molecular weight of PMMA formed during radical polymerization of methacrylates. Later investigations showed that the cobalt dimethylglyoxime complexes were as effective as the porphyrin catalysts and also less oxygen sensitive. Due to their lower oxygen sensitivity these catalysts have been investigated much more thoroughly than the porphyrin catalysts and are the catalysts actually used commercially.
Process
In general, reactions of organic free radicals (•C(CH3)(X)R) with metal-centered radicals (M•) either produce an organometallic complex (reaction 1) or a metal hydride (M-H) and an olefin (CH2=C(X)R) by the metallo radical M• abstracting a β-hydrogen from the organic radical •C(CH3)(X)R (reaction 2).
These organo-radical reactions with metal complexes provides several mechanisms to control radical polymerization of monomers CH2=CH(X). A wide range of metal-centered radicals and organo-metal complexes manifest at least a portion of these reactions. Various transition metal species, including complexes of Cr(I), Mo(III), Fe(I), V(0), Ti(III), and Co(II) have been demonstrated to control molecular weights in radical polymerization of olefins.
The olefin generating reaction 2 can become catalytic, and such catalytic chain transfer reactions are generally used to reduce the polymer molecular weight during the radical polymerization process. Mechanistically, catalytic chain transfer involves hydrogen atom transfer from the organic growing polymeryl radical to cobalt(II), thus leaving a polymer vinyl-end group and a cobalt-hydride species. The Co(por)(H) species has no cis-vacant site for direct insertion of a new olefinic monomer into the Co-H bond to finalize the chain-transfer process, and hence the required olefin insertion also proceeds via a radical pathway.
The best recognized chain transfer catalysts are low spin cobalt(II) complexes and organo-cobalt(III) species, which function as latent storage sites for organo-radicals required to obtain living radical polymerization by several pathways.
The major products of catalytic chain transfer polymerization are vinyl-terminated polymer chains. One of the major drawbacks of the process is that catalytic chain transfer polymerization does not produce macromonomers of direct use in free radical polymerizations, but instead produces addition-fragmentation agents. When a growing polymer chain reacts with the addition-fragmentation agent, the radical end-group attacks the vinyl bond and forms a carbon–carbon bond. However, the resulting product is so hindered that the species undergoes fragmentation, leading eventually to telechelic species.
These addition fragmentation chain transfer agents do form graft copolymers with styrenic and acrylate species however they do so by first forming block copolymers and then incorporating these block copolymers into the main polymer backbone. While high yields of macromonomers are possible with methacrylate monomers, low yields are obtained when using catalytic chain transfer agents during the polymerization of acrylate and styrenic monomers. This has been seen to be due to the interaction of the radical centre with the catalyst during these polymerization reactions.
Utility
The catalytic chain transfer process was commercialized very soon after its discovery. The initial commercial outlet was the production of chemically reactive macromonomers to be incorporated into paints for the automotive industry. Federally mandated VOC restrictions are leading to the elimination of solvents from automotive finishes, and the lower molecular weight chain transfer products are often fluids. Incorporation of monomers such as glycidyl methacrylate or hydroxyethylmethacrylate (HEMA) into the macromonomers aids curing processes. Macromonomers incorporating HEMA can be effective in the dispersion of pigments in the paints. The chemistry is very effective under emulsion polymerisation conditions and has been used in the printing industry since 2000. The vinylic end group acts as an addition fragmentation agent and has been utilised to make multi-block copolymers and derivatives used as stress relief agents in dental restoration by 3M.
See also
Radical polymerization
Living polymerization
Cobalt Mediated Radical Polymerization
References
Polymer chemistry
Chemical processes | Catalytic chain transfer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,085 | [
"Materials science",
"Chemical processes",
"nan",
"Polymer chemistry",
"Chemical process engineering"
] |
28,906,930 | https://en.wikipedia.org/wiki/Temperature%E2%80%93entropy%20diagram | In thermodynamics, a temperature–entropy (T–s) diagram is a thermodynamic diagram used to visualize changes to temperature () and specific entropy () during a thermodynamic process or cycle as the graph of a curve. It is a useful and common tool, particularly because it helps to visualize the heat transfer during a process. For reversible (ideal) processes, the area under the T–s curve of a process is the heat transferred to the system during that process.
Working fluids are often categorized on the basis of the shape of their T–s diagram.
An isentropic process is depicted as a vertical line on a T–s diagram, whereas an isothermal process is a horizontal line.
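Stated as a formula, the heat interpretation and its horizontal-line special case read (a standard identity, not specific to any source here):

```latex
% Heat in a reversible process is the area under the T-s curve;
% for an isothermal (horizontal-line) process the integral is a product.
q_{\mathrm{rev}} = \int_{1}^{2} T \,\mathrm{d}s
\qquad\Rightarrow\qquad
q_{\mathrm{rev}} = T\,(s_2 - s_1) \quad \text{(isothermal process)}
```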
See also
Carnot cycle
Pressure–volume diagram
Rankine cycle
Saturation vapor curve
Working fluid
Working fluid selection
References
Thermodynamics | Temperature–entropy diagram | [
"Physics",
"Chemistry",
"Mathematics"
] | 185 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Dynamical systems"
] |
28,910,089 | https://en.wikipedia.org/wiki/Pascalization | Pascalization, bridgmanization, high pressure processing (HPP) or high hydrostatic pressure (HHP) processing is a method of preserving and sterilizing food, in which a product is processed under very high pressure, leading to the inactivation of certain microorganisms and enzymes in the food. HPP has a limited effect on covalent bonds within the food product, thus maintaining both the sensory and nutritional aspects of the product. The technique was named after Blaise Pascal, a 17th century French scientist whose work included detailing the effects of pressure on fluids. During pascalization, more than 50,000 pounds per square inch (340 MPa, 3.4 kbar) may be applied for approximately fifteen minutes, leading to the inactivation of yeast, mold, vegetative bacteria, and some viruses and parasites. Pascalization is also known as bridgmanization, named for physicist Percy Williams Bridgman.
Depending on temperature and pressure settings, HPP can achieve either a pasteurization-equivalent log reduction or go further to achieve sterilization, which includes the killing of endospores. Pasteurization-equivalent HPP can be done at chilled temperatures, while sterilization requires at least under pressure. The pasteurization-equivalent is generally referred to as simply HHP (along with other synonyms listed above), while the heated sterilization method is called HPT, for high pressure temperature. Synonyms for HPT include pressure-assisted thermal sterilization (PATS), pressure-enhanced sterilization (PES), high pressure thermal sterilization (HPTS), and high pressure high temperature (HPHT).
Uses
HHP (pasteurization-equivalent)
Spoilage microorganisms and some enzymes can be deactivated by HPP, which can extend the shelf life while preserving the sensory and nutritional characteristics of the product. Pathogenic microorganisms such as Listeria, E. coli, Salmonella, and Vibrio are also sensitive to pressures of 400–1000 MPa used during HPP. Thus, HPP can pasteurize food products with decreased processing time, reduced energy usage, and less waste.
The treatment occurs at low temperatures and does not include the use of food additives. From 1990, some juices, jellies, and jams have been preserved using pascalization in Japan. The technique is now used there to preserve fish and meats, salad dressing, rice cakes, and yogurts. It preserves fruits, vegetable smoothies and other products such as meat for sale in the UK.
An early use of pascalization in the United States was to treat guacamole. It did not change the sauce's taste, texture, or color, but the shelf life of the product increased from three days to 30 days. Some treated foods still require cold storage because pascalization cannot inactivate all proteins; some retain enzymatic activity, which affects shelf life.
In recent years, HPP has also been used in the processing of raw pet food. Most commercial frozen and freeze-dried raw diets now go through post-packaging HPP treatment to destroy potential bacterial and viral contaminants, with salmonella being one of the major concerns.
HPT (commercial sterility)
Low-acid food require the killing of endospores to become shelf-stable. Addition of heat on top of pressure, as in HPT, achieves this goal. In 2009, FDA issued no objections to a petition for using HPT, specifically the type known as PATS, on mashed potato. In 2015, the FDA issued another no-objection for PES, another type of HPT, on seafood. Application of HPT to other types of fruit is still being explored.
Other uses
A short-duration application of HHP is able to separate the meat of shellfish from their shells, making hand-peeling much easier. HHP also inactivates Vibrio bacteria. HHP is used in 7% of seafood and shellfish.
History
Late 19th century
Experiments into the effects of pressure on microorganisms have been recorded as early as 1884, and successful experiments since 1897. In 1899, B. H. Hite was the first to conclusively demonstrate the inactivation of microorganisms by pressure. After he reported the effects of high pressure on microorganisms, reports on the effects of pressure on foods quickly followed. Hite tried to prevent milk from spoiling, and his work showed that microorganisms can be deactivated by subjecting it to high pressure. He also mentioned some advantages of pressure-treating foods, such as the lack of antiseptics and no change in taste.
Hite said that, since 1897, a chemist at the West Virginia Agricultural Experimental Station had been studying the relationship between pressure and the preservation of meats, juices, and milk. Early experiments involved inserting a large screw into a cylinder and keeping it there for several days, but this did not have any effect in stopping the milk from spoiling. Later, a more powerful apparatus was able to subject the milk to higher pressures, and the treated milk was reported to stay sweeter for 24–60 hours longer than untreated milk. When of pressure was applied to samples of milk for one hour, they stayed sweet for one week. The device used to induce pressure was later damaged when researchers tried to test its effects on other products.
Experiments were also performed with anthrax, typhoid, and tuberculosis, which was a potential health risk for the researchers. Before the process was improved, one employee of the Experimental Station became ill with typhoid fever.
The process that Hite reported on was not feasible for widespread use and did not always completely sterilize the milk. While more extensive investigations followed, the original study into milk was largely discontinued due to concerns over its effectiveness. Hite mentioned "certain slow changes in the milk" related to "enzymes that the pressure could not destroy".
Early 20th century
Hite et al. released a more detailed report on pressure sterilization in 1914, which included the number of microorganisms that remained in a product after treatment. Experiments were conducted on various other foods, including fruits, fruit juices and some vegetables. They were met with mixed success, similar to the results obtained from the earlier tests on milk. While some foods were preserved, others were not, possibly due to bacterial spores that had not been killed.
Hite's 1914 investigation led to other studies into the effect of pressure on microorganisms. In 1918, a study published by W. P. Larson et al. was intended to help advance vaccines. This report showed that bacterial spores were not always inactivated by pressure, while vegetative bacteria were usually killed. Larson et al.'s investigation also focused on the use of carbon dioxide, hydrogen, and nitrogen gas pressures. Carbon dioxide was found to be the most effective of the three at inactivating microorganisms.
Late 20th century–today
Around 1970, researchers renewed their efforts in studying bacterial spores after it was discovered that using moderate pressures was more effective than using higher pressures. These spores, which caused a lack of preservation in the earlier experiments, were inactivated faster by moderate pressure, but in a manner different from what occurred with vegetative microbes. When subjected to moderate pressures, bacterial spores germinate, and the resulting spores are easily killed using pressure, heat, or ionizing radiation. If the amount of initial pressure is increased, conditions are not ideal for germination, so the original spores must be killed instead. Using moderate pressure does not always work, as some bacterial spores are more resistant to germination under pressure and a small portion of them will survive. A preservation method using both pressure and another treatment (such as heat) to kill spores has not yet been reliably achieved. Such a technique would allow for wider use of pressure on food and other potential advancements in food preservation.
Research into the effects of high pressures on microorganisms was largely focused on deep-sea organisms until the 1980s, when advancements in ceramic processing were made. This resulted in the production of machinery that allowed for processing foods at high pressures at a large scale, and generated some interest in the technique, especially in Japan. Although commercial products preserved by pascalization first emerged in 1990, the technology behind pascalization is still being perfected for widespread use. There is now higher demand for minimally processed products than in previous years, and products preserved by pascalization have seen commercial success despite being priced significantly higher than products treated with standard methods.
In the early 2000s, it was discovered that pascalization can separate the meat of shellfish from their shells. Lobsters, shrimp, crabs, etc. may be pascalized, and afterwards their raw meat will easily slide whole out of the cracked shell.
Process
In pascalization, food products are sealed and placed into a steel compartment containing a liquid, often water, and pumps are used to create pressure. The pumps may apply pressure constantly or intermittently. The application of high hydrostatic pressures (HHP) on a food product will kill many microorganisms, but the spores are not destroyed. Pascalization works especially well on acidic foods, such as yogurts and fruits, because pressure-tolerant spores are not able to live in environments with low pH levels. The treatment works equally well for both solid and liquid products.
Researchers are also developing a "continuous" method of high pressure processing of preserving liquid foods. The technology is known as ultra-shear technology (UST) or high pressure homogenization. This involves pressurization of liquid foods up to 400 MPa and subsequent depressurization by passage through tiny clearance in a shear valve. When the fluid exits the shear valve, due to significant pressure difference across the valve, the pressure energy is converted into kinetic energy. This kinetic energy is dissipated as heat energy to raise the temperature of the fluid and as heat loss to the surroundings. Remaining kinetic energy is spent on sample physical and structural modifications (mixing, emulsification, dispersion, particle size, enzyme, and microbial reduction) via intense mechanical forces, such as shear, turbulence, or cavitation. Thus, depending upon the product's initial temperature and process pressure, UST treatment can result in pasteurization or commercial sterilization effects along with structural modification in the treated liquid.
Bacterial spores survive pressure treatment at ambient or chilled conditions. The use of additional heat in high pressure temperature (HPT) kills these spores. Food is pre-heated to about before entering the pressure compartment, then the pressure raises food temperature to the desired point ( or higher) by adiabatic heating.
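The magnitude of this adiabatic heating can be estimated with the commonly quoted rule of thumb of roughly 3 °C of temperature rise per 100 MPa for water-like foods; a minimal sketch (the coefficient and the example temperatures are assumptions for illustration, and fattier products heat more):

```python
def hpp_temperature_rise(pressure_mpa, rise_per_100mpa=3.0):
    """Approximate adiabatic temperature rise (C) during pressurization,
    using a water-like rule-of-thumb coefficient; treat it as an assumption."""
    return pressure_mpa / 100.0 * rise_per_100mpa

# A product pre-heated to 90 C and pressurized to 600 MPa would reach roughly:
print(90 + hpp_temperature_rise(600))  # ~ 108 C
```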
Effects
During pascalization, the food's hydrogen bonds are selectively disrupted. Because pascalization is not heat-based, covalent bonds are not affected, causing no change in the food's taste. Hence, HPP does not destroy vitamins, maintaining the nutritional value of the food. High hydrostatic pressure can affect muscle tissues by increasing the rate of lipid oxidation, which in turn leads to poor flavor and decreased health benefits. There are some compounds present in foods that are subject to change during the treatment process. For example, carbohydrates are gelatinized by an increase in pressure instead of increasing the temperature during the treatment process.
Because hydrostatic pressure is able to act quickly and evenly on food, neither the size of a product's container nor its thickness play a role in the effectiveness of pascalization. There are several side effects of the process, including a slight increase in a product's sweetness, but pascalization does not greatly affect the nutritional value, taste, texture, and appearance. Thus, high pressure treatment of foods is regarded as a "natural" preservation method, as it does not use chemical preservatives.
Criticism
Anurag Sharma, a geochemist; James Scott, a microbiologist; and others at the Carnegie Institution of Washington directly observed microbial activity at pressures in excess of 1 gigapascal. The experiments were performed up to 1.6 GPa (232,000 psi) of pressure, which is more than 16,000 times normal air pressure, or about 14 times the pressure in the Mariana Trench, the deepest ocean trench.
The experiment began by depositing an Escherichia coli and Shewanella oneidensis film in a diamond anvil cell (DAC). The pressure was then raised to 1.6 GPa. When raised to this pressure and kept there for 30 hours, at least 1% of the bacteria survived. The experimenters then monitored formate metabolism using in-situ Raman spectroscopy and showed that formate metabolism continued in the bacterial sample.
Moreover, 1.6 GPa is such great pressure that, during the experiment, the DAC turned the solution into ice-VI, a room-temperature ice. When the bacteria broke down the formate in the ice, liquid pockets would form because of the chemical reaction.
There was some skepticism of this experiment. According to Art Yayanos, an oceanographer at the Scripps Institution of Oceanography, an organism should only be considered living if it can reproduce. Another issue with the DAC experiment is that when high pressures occur, high temperatures are usually present as well, but in this experiment there were not; it was performed at room temperature. The intentional lack of high temperature in the experiments isolated the actual effects of pressure on life, and the results clearly indicated that life is largely pressure-insensitive.
Newer results from independent research groups have confirmed the results of Sharma et al. (2002). This is a significant step that reiterates the need for a new approach to the old problem of studying environmental extremes through experiments. There is practically no debate on whether microbial life can survive pressures up to 600 MPa, which has been shown over the last decade or so to be valid through a number of scattered publications.
Consumer acceptance
In the consumer studies of HighTech Europe, consumers mentioned more positive than negative associations for this technology, showing that such products are well accepted.
See also
Cold-pressed juice
Orders of magnitude (pressure)
Physical factors affecting microbial life
Thermization
References
Notes
Bibliography
Food preservation
Pressure | Pascalization | [
"Physics"
] | 2,922 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
6,778,039 | https://en.wikipedia.org/wiki/Dimensional%20modeling | Dimensional modeling (DM) is part of the Business Dimensional Lifecycle methodology developed by Ralph Kimball which includes a set of methods, techniques and concepts for use in data warehouse design. The approach focuses on identifying the key business processes within a business and modelling and implementing these first before adding additional business processes, as a bottom-up approach. An alternative approach from Inmon advocates a top down design of the model of all the enterprise data using tools such as entity-relationship modeling (ER).
Description
Dimensional modeling always uses the concepts of facts (measures), and dimensions (context). Facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts. For example, sales amount is a fact; timestamp, product, register#, store#, etc. are elements of dimensions. Dimensional models are built by business process area, e.g. store sales, inventory, claims, etc. Because the different business process areas share some but not all dimensions, efficiency in design, operation, and consistency, is achieved using conformed dimensions, i.e. using one copy of the shared dimension across subject areas.
Dimensional modeling does not necessarily involve a relational database. The same modeling approach, at the logical level, can be used for any physical form, such as multidimensional database or even flat files. It is oriented around understandability and performance.
Design method
Designing the model
The dimensional model is built on a star-like schema or snowflake schema, with dimensions surrounding the fact table. To build the schema, the following design model is used:
Choose the business process
Declare the grain
Identify the dimensions
Identify the fact
Choose the business process
The process of dimensional modeling builds on a 4-step design method that helps to ensure the usability of the dimensional model and the use of the data warehouse. The basics in the design build on the actual business process which the data warehouse should cover. Therefore, the first step in the model is to describe the business process which the model builds on. This could for instance be a sales situation in a retail store. To describe the business process, one can choose to do this in plain text or use basic Business Process Model and Notation (BPMN) or other design guides like the Unified Modeling Language (UML).
Declare the grain
After describing the business process, the next step in the design is to declare the grain of the model. The grain of the model is the exact description of what the dimensional model should be focusing on. This could for instance be “An individual line item on a customer slip from a retail store”. To clarify what the grain means, you should pick the central process and describe it with one sentence. Furthermore, the grain (sentence) is what you are going to build your dimensions and fact table from. You might find it necessary to go back to this step to alter the grain due to new information gained on what your model is supposed to be able to deliver.
Identify the dimensions
The third step in the design process is to define the dimensions of the model. The dimensions must be defined within the grain from the second step of the 4-step process. Dimensions are the foundation of the fact table, and are where the data for the fact table is collected. Typically dimensions are nouns like date, store, inventory etc. These dimensions are where all the data is stored. For example, the date dimension could contain data such as year, month and weekday.
Identify the facts
After defining the dimensions, the next step in the process is to make keys for the fact table and to identify the numeric facts that will populate each fact table row. This step is closely related to the business users of the system, since this is where they get access to data stored in the data warehouse. Therefore, most of the fact table values are numeric, additive figures such as quantity or cost per unit, etc.
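As a concrete illustration of facts, dimensions, and keys, the sketch below builds a toy star schema with pandas; every table and column name is invented for the example rather than taken from any particular warehouse.

    # A minimal star schema: one fact table whose rows reference two
    # dimension tables by surrogate keys (all names are hypothetical).
    import pandas as pd

    dim_date = pd.DataFrame({"date_key": [20240101], "year": [2024], "month": [1]})
    dim_product = pd.DataFrame({"product_key": [7], "name": ["Espresso"],
                                "category": ["Beverage"]})

    fact_sales = pd.DataFrame({          # grain: one row per sales line item
        "date_key": [20240101, 20240101],
        "product_key": [7, 7],
        "quantity": [2, 1],              # additive numeric facts
        "amount": [6.0, 3.0],
    })

    # At query time the dimensions supply the context for the facts:
    report = (fact_sales
              .merge(dim_date, on="date_key")
              .merge(dim_product, on="product_key")
              .groupby(["year", "month", "category"], as_index=False)
              [["quantity", "amount"]].sum())
    print(report)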
Dimension normalization
Dimensional normalization, or snowflaking, removes redundant attributes that are otherwise kept in the usual flattened, de-normalized dimension tables. Dimensions are strictly joined together in sub-dimensions.
Snowflaking has an influence on the data structure that differs from many philosophies of data warehouses.
Single data (fact) table surrounded by multiple descriptive (dimension) tables
Developers often don't normalize dimensions for several reasons:
Normalization makes the data structure more complex
Performance can be slower, due to the many joins between tables
The space savings are minimal
Bitmap indexes can't be used
Query performance. 3NF databases suffer from performance problems when aggregating or retrieving many dimensional values that may require analysis. If you are only going to do operational reports then you may be able to get by with 3NF because your operational user will be looking for very fine grain data.
There are some arguments on why normalization can be useful. It can be an advantage when part of hierarchy is common to more than one dimension. For example, a geographic dimension may be reusable because both the customer and supplier dimensions use it.
Benefits of dimensional modeling
Benefits of the dimensional model are the following:
Understandability. Compared to the normalized model, the dimensional model is easier to understand and more intuitive. In dimensional models, information is grouped into coherent business categories or dimensions, making it easier to read and interpret. Simplicity also allows software to navigate databases efficiently. In normalized models, data is divided into many discrete entities and even a simple business process might result in dozens of tables joined together in a complex way.
Query performance. Dimensional models are more denormalized and optimized for data querying, while normalized models seek to eliminate data redundancies and are optimized for transaction loading and updating. The predictable framework of a dimensional model allows the database to make strong assumptions about the data which may have a positive impact on performance. Each dimension is an equivalent entry point into the fact table, and this symmetrical structure allows effective handling of complex queries. Query optimization for star-joined databases is simple, predictable, and controllable.
Extensibility. Dimensional models are scalable and easily accommodate unexpected new data. Existing tables can be changed in place either by simply adding new data rows into the table or executing SQL alter table commands. No queries or applications that sit on top of the data warehouse need to be reprogrammed to accommodate changes. Old queries and applications continue to run without yielding different results. But in normalized models each modification should be considered carefully, because of the complex dependencies between database tables.
Dimensional models, Hadoop, and big data
We still get the benefits of dimensional models on Hadoop and similar big data frameworks. However, some features of Hadoop require us to slightly adapt the standard approach to dimensional modeling.
The Hadoop File System is immutable. We can only add but not update data. As a result we can only append records to dimension tables. Slowly Changing Dimensions on Hadoop become the default behavior. In order to get the latest and most up to date record in a dimension table we have three options. First, we can create a View that retrieves the latest record using windowing functions. Second, we can have a compaction service running in the background that recreates the latest state. Third, we can store our dimension tables in mutable storage, e.g. HBase and federate queries across the two types of storage.
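A sketch of the first option, retrieving the latest record per key, follows; it uses pandas instead of SQL windowing functions and runs over a hypothetical append-only customer dimension.

    # Keep only the newest row per customer_id in an append-only dimension,
    # the same effect as a SQL ROW_NUMBER() OVER (PARTITION BY ...) view.
    import pandas as pd

    dim_customer = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "city": ["Oslo", "Bergen", "Tromsø"],   # customer 1 has moved
        "load_ts": pd.to_datetime(["2024-01-01", "2024-06-01", "2024-03-15"]),
    })

    latest = (dim_customer
              .sort_values("load_ts")
              .groupby("customer_id")
              .tail(1))
    print(latest)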
The way data is distributed across HDFS makes it expensive to join data. In a distributed relational database (MPP) we can co-locate records with the same primary and foreign keys on the same node in a cluster. This makes it relatively cheap to join very large tables. No data needs to travel across the network to perform the join. This is very different on Hadoop and HDFS. On HDFS tables are split into big chunks and distributed across the nodes on our cluster. We don't have any control over how individual records and their keys are spread across the cluster. As a result, joins on Hadoop for two very large tables are quite expensive as data has to travel across the network. We should avoid joins where possible. For a large fact and dimension table we can de-normalize the dimension table directly into the fact table. For two very large transaction tables we can nest the records of the child table inside the parent table and flatten out the data at run time.
Literature
References
Data warehousing
Data modeling | Dimensional modeling | [
"Engineering"
] | 1,734 | [
"Data modeling",
"Data engineering"
] |
6,778,984 | https://en.wikipedia.org/wiki/Discrete%20tomography | Discrete tomography focuses on the problem of reconstruction of binary images (or finite subsets of the integer lattice) from a small number of their projections.
In general, tomography deals with the problem of determining shape and dimensional information of an object from a set of projections. From the mathematical point of view, the object corresponds to a function and the problem posed is to reconstruct this function from its integrals or sums over subsets of its domain. In general, the tomographic inversion problem may be continuous or discrete. In continuous tomography both the domain and the range of the function are continuous and line integrals are used. In discrete tomography the domain of the function may be either discrete or continuous, and the range of the function is a finite set of real, usually nonnegative numbers. In continuous tomography when a large number of projections is available, accurate reconstructions can be made by many different algorithms. It is typical for discrete tomography that only a few projections (line sums) are used. In this case, conventional techniques all fail. A special case of discrete tomography deals with the problem of the reconstruction of a binary image from a small number of projections. The name discrete tomography is due to Larry Shepp, who organized the first meeting devoted to this topic (DIMACS Mini-Symposium on Discrete Tomography, September 19, 1994, Rutgers University).
Theory
Discrete tomography has strong connections with other mathematical fields, such as number theory, discrete mathematics, computational complexity theory and combinatorics. In fact, a number of discrete tomography problems were first discussed as combinatorial problems. In 1957, H. J. Ryser found a necessary and sufficient condition for a pair of vectors being the two orthogonal projections of a discrete set. In the proof of his theorem, Ryser also described a reconstruction algorithm, the very first reconstruction algorithm for a general discrete set from two orthogonal projections. In the same year, David Gale found the same consistency conditions, but in connection with the network flow problem. Another result of Ryser's is the definition of the switching operation by which discrete sets having the same projections can be transformed into each other.
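To make the flavor of such reconstruction concrete, here is a short Python sketch of the greedy idea behind Ryser-style reconstruction from two orthogonal projections; it assumes the projections are consistent (Gale–Ryser realizable) and returns None otherwise.

    # Greedy reconstruction of a binary matrix from its row and column sums.
    def ryser_reconstruct(row_sums, col_sums):
        n_rows, n_cols = len(row_sums), len(col_sums)
        remaining = list(col_sums)           # column sums still to satisfy
        matrix = [[0] * n_cols for _ in range(n_rows)]
        # Handle rows with the largest sums first.
        for i in sorted(range(n_rows), key=lambda r: -row_sums[r]):
            # Place 1s in the row_sums[i] columns with the largest demand.
            cols = sorted(range(n_cols), key=lambda c: -remaining[c])[:row_sums[i]]
            if len(cols) < row_sums[i] or any(remaining[c] == 0 for c in cols):
                return None                  # projections are inconsistent
            for c in cols:
                matrix[i][c] = 1
                remaining[c] -= 1
        return matrix if all(v == 0 for v in remaining) else None

    # Reconstruct a 3x3 binary image from projections (2, 1, 2) / (2, 1, 2):
    print(ryser_reconstruct([2, 1, 2], [2, 1, 2]))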
The problem of reconstructing a binary image from a small number of projections generally leads to a large number of solutions. It is desirable to limit the class of possible solutions to only those that are typical of the class of the images which contains the image being reconstructed by using a priori information, such as convexity or connectedness.
Theorems
Reconstructing (finite) planar lattice sets from their 1-dimensional X-rays is an NP-hard problem if the X-rays are taken from m ≥ 3 lattice directions (for m ≤ 2 the problem is in P).
The reconstruction problem is highly unstable for m ≥ 3 (meaning that a small perturbation of the X-rays may lead to completely different reconstructions) and stable for m = 2, see the references.
Coloring a grid using k colors with the restriction that each row and each column has a specific number of cells of each color is known as the (k − 1)-atom problem in the discrete tomography community. The problem is NP-hard for k ≥ 3, see the references.
For further results, see the references.
Algorithms
Among the reconstruction methods one can find algebraic reconstruction techniques (e.g., DART), greedy algorithms (see the references for approximation guarantees), and Monte Carlo algorithms.
Applications
Various algorithms have been applied in image processing, medicine, three-dimensional statistical data security problems, computed tomography-assisted engineering and design, electron microscopy and materials science, including the 3DXRD microscope.
A form of discrete tomography also forms the basis of nonograms, a type of logic puzzle in which information about the rows and columns of a digital image is used to reconstruct the image.
See also
Geometric tomography
References
External links
Euro DT (a Discrete Tomography Wiki site for researchers)
Tomography applet by Christoph Dürr
PhD thesis on discrete tomography (2012): Tomographic segmentation and discrete tomography for quantitative analysis of transmission tomography data
Applied mathematics
Digital geometry | Discrete tomography | [
"Mathematics"
] | 818 | [
"Applied mathematics"
] |
6,779,384 | https://en.wikipedia.org/wiki/Hexapod%20%28robotics%29 | A six-legged walking robot should not be confused with a Stewart platform, a kind of parallel manipulator used in robotics applications.
A hexapod robot is a mechanical vehicle that walks on six legs. Since a robot can be statically stable on three or more legs, a hexapod robot has a great deal of flexibility in how it can move. If legs become disabled, the robot may still be able to walk. Furthermore, not all of the robot's legs are needed for stability; other legs are free to reach new foot placements or manipulate a payload.
Many hexapod robots are biologically inspired by Hexapoda locomotion – the insectoid robots. Hexapods may be used to test biological theories about insect locomotion, motor control, and neurobiology.
Designs
Hexapod designs vary in leg arrangement. Insect-inspired robots are typically laterally symmetric, such as the RiSE robot at Carnegie Mellon. A radially symmetric hexapod is the ATHLETE (All-Terrain Hex-Legged Extra-Terrestrial Explorer) robot at JPL.
Typically, individual legs range from two to six degrees of freedom. Hexapod feet are typically pointed, but can also be tipped with adhesive material to help climb walls, or fitted with wheels so the robot can drive quickly when the ground is flat.
Locomotion
Most often, hexapods are controlled by gaits, which allow the robot to move forward, turn, and perhaps side-step. Some of the most common gaits are as follows:
Alternating tripod: 3 legs on the ground at a time.
Quadruped: 4 legs on the ground at a time.
Crawl: move just one leg at a time.
Gaits for hexapods are often stable, even in slightly rocky and uneven terrain.
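As an illustration, a minimal Python sketch of an alternating-tripod controller loop follows; the set_leg(leg, phase) interface is a hypothetical stand-in for whatever leg controller a real robot exposes.

    # Legs 0, 2, 4 form tripod A; legs 1, 3, 5 form tripod B.
    TRIPOD_A, TRIPOD_B = (0, 2, 4), (1, 3, 5)

    def tripod_step(set_leg, stance, swing):
        """One half-cycle: stance legs propel the body, swing legs recover."""
        for leg in stance:
            set_leg(leg, "stance")   # on the ground, pushing backward
        for leg in swing:
            set_leg(leg, "swing")    # lifted, moving to the next foothold

    def walk(set_leg, n_cycles):
        for _ in range(n_cycles):
            tripod_step(set_leg, TRIPOD_A, TRIPOD_B)
            tripod_step(set_leg, TRIPOD_B, TRIPOD_A)

    # Example with a stub controller that just logs the commands:
    walk(lambda leg, phase: print(f"leg {leg}: {phase}"), n_cycles=1)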
Motion may also be nongaited, which means the sequence of leg motions is not fixed, but rather chosen by the computer in response to the sensed environment. This may be most helpful in very rocky terrain, but existing techniques for motion planning are computationally expensive.
Biologically inspired
Insects are chosen as models because their nervous systems are simpler than those of other animal species. Also, complex behaviours can be attributed to just a few neurons, and the pathway between sensory input and motor output is relatively short. Insects' walking behaviour and neural architecture are used to improve robot locomotion. Conversely, biologists can use hexapod robots for testing different hypotheses.
Biologically inspired hexapod robots largely depend on the insect species used as a model. The cockroach and the stick insect are the two most commonly used insect species; both have been studied extensively in ethology and neurophysiology. At present, no complete nervous system is known; therefore, models usually combine elements from different insect species.
Insect gaits are usually obtained by two approaches: the centralized and the decentralized control architectures. Centralized controllers directly specify transitions of all legs, whereas in decentralized architectures, six nodes (legs) are connected in a parallel network; gaits arise by the interaction between neighbouring legs.
List of robots
Hexbug (insectoid toy robot)
Stiquito (inexpensive insectoid robot)
Rhex
Whegs
LAURON
See also
Biomechanics
Insects
Mondo spider
Robotics
Robot locomotion
Stewart platform
References
External links
Poly-pedal Laboratory at Berkeley (USA).
Biological Cybernetics/Theoretical Biology (Germany).
Robot kinematics
Robot locomotion | Hexapod (robotics) | [
"Physics",
"Engineering"
] | 706 | [
"Physical phenomena",
"Robotics engineering",
"Motion (physics)",
"Robot locomotion",
"Robot kinematics"
] |
6,779,393 | https://en.wikipedia.org/wiki/Riesz%20potential | In mathematics, the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz. In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. They generalize to several variables the Riemann–Liouville integrals of one variable.
Definition
If 0 < α < n, then the Riesz potential Iαf of a locally integrable function f on Rn is the function defined by
$$(I_\alpha f)(x) = \frac{1}{c_\alpha} \int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}} \, \mathrm{d}y,$$
where the constant is given by
$$c_\alpha = \pi^{n/2} 2^\alpha \frac{\Gamma(\alpha/2)}{\Gamma\left(\frac{n-\alpha}{2}\right)}.$$
This singular integral is well-defined provided f decays sufficiently rapidly at infinity, specifically if f ∈ Lp(Rn) with 1 ≤ p < n/α. In fact, for any 1 ≤ p (p > 1 is classical, due to Sobolev, while for p = 1 see the references), the rate of decay of f and that of Iαf are related in the form of an inequality (the Hardy–Littlewood–Sobolev inequality)
$$\|I_\alpha f\|_{p^*} \le C_p \|Rf\|_p, \qquad p^* = \frac{np}{n - \alpha p},$$
where Rf is the vector-valued Riesz transform. More generally, the operators Iα are well-defined for complex α such that 0 < Re α < n.
The Riesz potential can be defined more generally in a weak sense as the convolution
$$I_\alpha f = f * K_\alpha,$$
where Kα is the locally integrable function
$$K_\alpha(x) = \frac{1}{c_\alpha} \, \frac{1}{|x|^{n-\alpha}}.$$
The Riesz potential can therefore be defined whenever f is a compactly supported distribution. In this connection, the Riesz potential of a positive Borel measure μ with compact support is chiefly of interest in potential theory because Iαμ is then a (continuous) subharmonic function off the support of μ, and is lower semicontinuous on all of Rn.
Consideration of the Fourier transform reveals that the Riesz potential is a Fourier multiplier.
In fact, one has
$$\widehat{K_\alpha}(\xi) = |2\pi\xi|^{-\alpha},$$
and so, by the convolution theorem,
$$\widehat{I_\alpha f}(\xi) = |2\pi\xi|^{-\alpha} \hat{f}(\xi).$$
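A standard computation with this multiplier makes precise the sense in which the Riesz potential inverts a power of the Laplace operator: since
$$\widehat{(-\Delta) f}(\xi) = |2\pi\xi|^2 \hat{f}(\xi),$$
it follows that
$$\widehat{I_\alpha f}(\xi) = |2\pi\xi|^{-\alpha} \hat{f}(\xi) = \widehat{(-\Delta)^{-\alpha/2} f}(\xi),$$
so that, formally, $I_\alpha = (-\Delta)^{-\alpha/2}$ on suitably regular functions.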
The Riesz potentials satisfy the following semigroup property on, for instance, rapidly decreasing continuous functions:
$$I_\alpha I_\beta = I_{\alpha+\beta},$$
provided
$$0 < \operatorname{Re} \alpha,\ \operatorname{Re} \beta < n, \qquad 0 < \operatorname{Re}(\alpha+\beta) < n.$$
Furthermore, if 0 < Re α < n − 2, then
$$\Delta I_{\alpha+2} = I_{\alpha+2} \Delta = -I_\alpha.$$
One also has, for this class of functions,
$$\lim_{\alpha \to 0^+} (I_\alpha f)(x) = f(x).$$
See also
Bessel potential
Fractional integration
Sobolev space
Notes
References
Fractional calculus
Partial differential equations
Potential theory
Singular integrals | Riesz potential | [
"Mathematics"
] | 435 | [
"Functions and mappings",
"Calculus",
"Mathematical objects",
"Potential theory",
"Mathematical relations",
"Fractional calculus"
] |
6,781,215 | https://en.wikipedia.org/wiki/Network%20search%20engine | Computer networks are connected together to form larger networks such as campus networks, corporate networks, or the Internet. Routers are network devices that may be used to connect these networks (e.g., a home network connected to the network of an Internet service provider). When a router interconnects many networks or handles much network traffic, it may become a bottleneck and cause network congestion (i.e., traffic loss).
A number of techniques have been developed to prevent such problems. One of them is the network search engine (NSE), also known as a network search element. This special-purpose device helps a router perform one of its core and repeated functions very fast: address lookup. Besides routing, NSE-based address lookup is also used to keep track of network service usage for billing purposes, or to look up patterns of information in the data passing through the network for security reasons.
Network search engines are often available as ASIC chips to be interfaced with the network processor of the router. Content-addressable memory (CAM) and tries are two techniques commonly used when implementing NSEs.
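For illustration, a small Python sketch of longest-prefix matching with a binary trie follows; it is the software analogue of the hardware lookup an NSE performs, and the routing entries are made up for the example.

    # A binary trie for longest-prefix matching.
    class TrieNode:
        def __init__(self):
            self.children = {}      # '0' / '1' -> TrieNode
            self.next_hop = None    # set if a prefix ends at this node

    def insert(root, prefix_bits, next_hop):
        node = root
        for bit in prefix_bits:
            node = node.children.setdefault(bit, TrieNode())
        node.next_hop = next_hop

    def lookup(root, addr_bits):
        """Return the next hop of the longest matching prefix, or None."""
        node, best = root, root.next_hop
        for bit in addr_bits:
            node = node.children.get(bit)
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop
        return best

    root = TrieNode()
    insert(root, "10", "A")           # 10xxxxxx -> next hop A
    insert(root, "1011", "B")         # 1011xxxx -> next hop B (more specific)
    print(lookup(root, "10110000"))   # prints B: the longest match wins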
References
IDT Next Generation Search Engines.
Integrated circuits
Networking hardware | Network search engine | [
"Technology",
"Engineering"
] | 243 | [
"Computer engineering",
"Computer network stubs",
"Computer networks engineering",
"Networking hardware",
"Computing stubs",
"Integrated circuits"
] |
6,781,451 | https://en.wikipedia.org/wiki/Analysis%20effort%20method | The analysis effort method is a method for estimating the duration of software engineering projects. It is best suited to producing initial estimates for the length of a job based on a known time duration for preparing a specification. Inputs to the method are numeric factors which indicate Size (S), Familiarity (F) and Complexity (C). These, with a duration for preparing the software specification can be used in a look up table (which contains factors based on previous experience) to determine with length of each of the following phases of the work. These being Design, Coding and Unit testing and Testing. The method does not include any times for training or project management.
This method should be used as one of a number of estimation techniques to obtain a more accurate estimate.
References
Software engineering costs | Analysis effort method | [
"Engineering"
] | 158 | [
"Software engineering",
"Software engineering stubs"
] |
6,783,082 | https://en.wikipedia.org/wiki/Ramogen | The term ramogen refers to a biological factor, typically a growth factor or other protein, that causes a developing biological cell or tissue to branch in a tree-like manner. Ramogenic molecules are branch promoting molecules found throughout the human body,.
Brief History
The term was first coined (from the Latin ramus = branch and the Greek genesis = creation) in an article about kidney development by Davies and Davey (Pediatr Nephrol. 1999 Aug;13(6):535-41). In the article, Davies and Davey describe the kidney "ramogens" glial cell line-derived neurotrophic factor (GDNF), neurturin, and persephin. The term has now passed into general use in the technical literature concerned with branching of biological structures.
Function
A ramogen is a biochemical signal that enables the creation of a physiological branch. The signal can be in the form of a growth factor or a hormone that makes a tube branch. One specific example is found in the mammary gland, where hormones cause the simple tube from which the gland begins to develop to form a highly branched "tree" of milk ducts in females.
Types of Ramogens
Mesenchyme-derived ramogens are found throughout the body and serve as chemoattractants to branching tissues.
An example of how this works is found in a study using a bead soaked in the renal ramogen GDNF. When the bead was placed next to a kidney sample in culture, the nearby ureteric bud tips branched and grew toward it.
Another example of a ramogen in action was found in the lungs, where Sprouty2 is expressed in response to signaling by the ramogen FGF10 and serves as an inhibitor of branching.
The following table lists key ramogens in the branching organs of the mouse.
Studies involving Ramogens
The physiological capabilities of ramogens are still being postulated in medical studies involving kidney functions on mice.
In development, maturing nephrons and stroma may cease to produce ramogens and may begin to secrete anti-ramogenic factors, such as Bmp2 and Tgfβ.
The pattern of branching and the rate of cell proliferation can contribute to the shape of different organs. As such, glial cell line-derived neurotrophic factor (GDNF) has been found to contribute to the branching of ureteric tissues.
The implication of this is that the introduction of ramogens to the body can promote repair through the creation of side branches induced by ramogenic signals.
This is evidenced by studies demonstrating that ureteric stalks were capable of forming new tips if provided with fresh mesenchyme or with a Matrigel artificially loaded with ramogens such as GDNF and FGF1, the signals normally supplied by fresh mesenchyme.
Biochemistry | Ramogen | [
"Chemistry",
"Biology"
] | 617 | [
"Biochemistry",
"nan"
] |
24,349,187 | https://en.wikipedia.org/wiki/BioTapestry | BioTapestry is an open source software application for modeling and visualizing gene regulatory networks (GRNs).
History
BioTapestry was created at the Institute for Systems Biology in Seattle, in collaboration with the Davidson Lab at the California Institute of Technology. The project was initiated to support the ongoing development of the model of the GRN regulating the development of the endomesoderm in the sea urchin Strongylocentrotus purpuratus. BioTapestry was initially made public in late 2003 as a web-based, read-only interactive viewer for the sea urchin network, with the first fully functional editor released in August 2004 (v0.94.1). The current version, 7.0.0, was released in September 2014.
Development
Development work on BioTapestry is ongoing. For more information about version 7.0, see the release notes page.
Usage
BioTapestry is an interactive tool for modeling and visualizing gene regulatory networks.
Interactive examples
Sea urchin endomesoderm network from the Davidson Lab.
Sea urchin ectoderm network from the Davidson Lab.
Mouse ventral neural tube specification from the McMahon Lab.
Environment And Gene Regulatory Influence Network (EGRIN) for Halobacterium salinarum NRC-1 from the Baliga Lab.
T-cell gene regulatory network from the Rothenberg Lab.
Zebrafish developmental gene regulatory network from the Yuh Lab.
Limb Morphogenesis from the Vokes Lab.
Features
Input
Gene Regulatory Networks can be drawn by hand.
Networks can be built using lists of interactions entered via dialog boxes.
Lists of interactions can be input using comma-separated-value (CSV) files.
Networks can be built using SIF files as input.
BioTapestry can accept network definitions via the Gaggle framework.
Visualization
BioTapestry uses orthogonal-directed hyperlinks and a hierarchical presentation of models.
Analysis
BioTapestry can create Systems Biology Markup Language files for a subset of networks.
Documentation
The BioTapestry home page has links to several tutorials for using the software.
See also
Gene regulatory network
Systems biology
References
External links
BioTapestry site
Systems biology
Graph drawing software
Cross-platform software
Java platform software | BioTapestry | [
"Biology"
] | 460 | [
"Systems biology"
] |
24,350,706 | https://en.wikipedia.org/wiki/Air%20displacement%20pipette | Piston-driven air displacement pipettes are a type of micropipette, which are tools to handle volumes of liquid in the microliter scale. They are more commonly used in biology and biochemistry, and less commonly in chemistry; the equipment is susceptible to damage from many organic solvents.
Operation
These pipettes operate by piston-driven air displacement. A vacuum is generated by the vertical travel of a metallic or ceramic piston within an airtight sleeve. The upward movement of the piston, driven by the release of the plunger, creates a vacuum in the space left vacant by the piston. To fill the vacuum, air from the tip rises, which is then replaced by the liquid that is drawn up into the tip and thus available for transport and dispensing elsewhere.
Sterile technique prevents liquid from coming into contact with the pipette itself. Instead, the liquid is drawn into and dispensed from a disposable pipette tip that is discarded after transferring fluid; a new pipette tip is used for the next transfer. Depressing the tip ejector button removes the tip, which is cast off without being handled by the operator and disposed of safely in an appropriate container. This also prevents contamination of or damage to the calibrated measurement mechanism by the substances being measured.
The plunger is depressed to both draw up and dispense the liquid. Normal operation consists of depressing the plunger button to the first stop while the pipette is held in the air. The tip is then submerged in the liquid to be transported and the plunger is released in a slow and even manner. This draws the liquid up into the tip. The instrument is then moved to the desired dispensing location. The plunger is again depressed to the first stop, and then to the second stop, or 'blowout', position. This action will fully evacuate the tip and dispense the liquid. In an adjustable pipette, the volume of liquid contained in the tip is variable; it can be changed via a dial or other mechanism, depending on the model. Some pipettes include a small window which displays the currently selected volume. The plastic pipette tips are designed for aqueous solutions, and are not recommended for use with organic solvents that may dissolve the plastics of the tips or even the pipettes.
Main parts of a micropipette
Plunger button
Tip ejector button
Volume adjustment dial
Digital volume indicator
Shaft
Attachment point for a disposable tip
Models
Several different type of air displacement pipettes exist:
adjustable or fixed
volume handled
Single-channel or multi-channel or repeater
adjustable tip spacing
conical tips or cylindrical tips
standard or locking
manual or electronic
manufacturer
Adjustable or fixed volume
Micropipettes can take a minimum volume of 0.2 μL and a maximum volume of 10,000 μL (10 mL). They are thus used for smaller-scale transfers than equipment such as graduated pipettes, which come in 5, 10, 25 and 50 mL volumes.
The most common type of pipette can be set to any volume within its operational range and is called adjustable. These pipettes commonly carry a label with their volume range, such as "10–100 μL". These limits are hard limits: winding the adjustment beyond them would damage the pipetting system. The volume of a fixed-volume pipette cannot be changed. As there are fewer moving parts, the mechanism is less complex, resulting in more accurate volume measurement.
In 1972, several people of the University of Wisconsin–Madison (mainly Warren Gilson and Henry Lardy) enhanced the fixed-volume pipette, developing the pipette with a variable volume. Warren Gilson founded Gilson Inc. based on this invention.
Volume
For optimal usage, every pipette supplier offers a broad range of different capacities, commonly including 2, 10, 20, 100, 200, and 1000 μL capacity pipettes. A pipette with a narrow volume range, such as 10–100 μL, delivers much higher accuracy than one with a broad range such as 0.1–1,000 μL.
With regard to the volume transferred, the smallest pipette that can handle the required volume should be selected. This is important because accuracy decreases when the set volume is close to the pipette's minimum capacity. Generally, a pipette has optimal accuracy from 35-100% of its nominal volume, and should not be used below 10% of that volume. For example, using a 1,000 μL pipette for 50 μL of liquid is not ideal, and using a 100 μL pipette will give better results.
Other factors like tip angle and immersion depth may also impact accuracy substantially. Other methods may be necessary, like reverse pipetting for viscous liquids, or saturating the air inside a pipette with liquid vapor by pipetting up and down multiple times to "prime" the pipette before actually transferring liquid when pipetting a volatile liquid.
Tips
For the pipetting process there are two components necessary: The pipette and disposable tips.
The tips are plastic-made tools for single-use. In general, they are made of Polypropylene.
Depending on the size of the pipette, the user needs specific tip sizes such as 10 μL, 100 μL, 200 μL, or 1,000 μL, or other non-standard sizes such as 5,000 μL (5 mL) or 10,000 μL (10 mL).
The majority of tips have a color code for easy identification: natural (colorless) for low volumes (0.1–10 μL), yellow (10–100 μL), or blue (100–1,000 μL). The corresponding pipette carries the same color code printed on it.
For special applications, there are filter-tips available. These tips have a little piece of foam plastic in the upper conus to prevent sample aerosols contaminating the pipette.
In general, all tips are stored in 8 × 12 boxes for 96 pieces in an upright position. The spacing of tips in these boxes is usually standardised for multichannel pipette compatibility from a number of different suppliers.
Two major tip systems exist, called conical or cylindrical, depending on the shape of the contact point of the pipettes and the tip.
Single-channel and multi-channel pipettes
Depending on the number of pistons in a pipette, there is a differentiation between single-channel pipettes and multi-channel pipettes. For manual high-throughput applications like filling up a 96-well microtiter plate most researchers prefer a multi-channel pipette. Instead of handling well by well, a row of 8 wells can be handled in parallel as this type of pipette has 8 pistons in parallel.
Adjustable tip spacing pipettes
Some manufacturers offer adjustable tip spacing pipettes. These allow transferring multiple samples in parallel between different labware formats.
Electronic pipettes
To improve the ergonomics of pipettes by reducing the necessary force, electronic pipettes were developed. The manual movement of the piston is replaced by a small electric motor powered by a battery. Whereas manual pipettes need a movement of the thumb (up to 3 cm), electronic pipettes have a main button. The programming of the pipette is generally done by a control wheel and some further buttons. All settings are displayed on a small display. Electronic pipettes can decrease the risk of RSI-type injuries.
Repeaters
Repeaters are specialized pipettes, optimized for repeated working steps like dispensing several times a specific volume like 20 μL from a single aspiration of a larger volume. In general, they have specific tips which do not fit on normal pipettes. Some electronic pipettes are able to perform this function using standard tips.
Locking mechanism
Some air displacement pipettes can additionally feature a locking mechanism (referred to as "locking pipettes") to allow better changing of volume yet preserving accuracy. By locking the set volume while performing several identical pipetting actions, accidental changes to the pipette volume setting are avoided. The lock mechanism is typically a mechanical toggle close to the pipette setting controls that interferes with the setting mechanism to prevent movement. Some pipettes, however, feature dials for setting the individual volume digits that can only be adjusted when unlocked by depressing and twisting the plunger.
Calibration
For sustained accuracy and consistent and repeatable operation, pipettes should be calibrated at periodic intervals. These intervals vary depending on several factors:
The skill and training of the operators. Skilled operators tend to operate the instrument more correctly and make fewer accuracy-robbing mistakes.
The liquid dispensed by the pipette. Corrosive and volatile liquids tend to emit vapors which ascend into the pipette shaft even under proper operating conditions and may corrode the metal piston and springs, or the seals and o-rings that provide an air-tight seal between the piston and the surrounding sleeve.
Proper and careful handling. Pipettes that are frequently dropped, are subjected to careless handling or horseplay, or that are not properly stored in a vertical position, will tend to degrade in accuracy over time.
The accuracy required by the instrument. Applications requiring maximum accuracy also demand more frequent calibration. Instruments used for purely research applications or in educational settings generally require less frequent calibration.
Under average conditions, most pipettes can be calibrated semi-annually (every six months) and provide satisfactory performance. Institutions that are regulated by the Food and Drug Administration's GMP/GLP regulations generally benefit from quarterly calibration, or every three months. Critical applications may require monthly service, while research and educational institutions may need only annual service. These are general guidelines and any decision on the appropriate calibration interval should be made carefully and include considerations of the pipette in question (some are more reliable than others), the conditions under which the pipette is used, and the operators who use it.
Calibration is generally accomplished through means of gravimetric analysis. This entails dispensing samples of distilled water into a receiving vessel perched atop a precision analytical balance. The density of water is a well-known constant, and thus the mass of the dispensed sample provides an accurate indication of the volume dispensed. Relative humidity, ambient temperature, and barometric pressure are factors in the accuracy of the measurement, and are usually combined in a complex formula and computed as the Z-factor. This Z-factor is then used to modify the raw mass data output of the balance and provide an adjusted and more accurate measurement.
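For illustration, a minimal Python sketch of the gravimetric calculation follows; the Z-factor constant is a typical tabulated value for water near 20 °C and standard pressure, and real calibrations take it from standards such as ISO 8655.

    # Gravimetric check: dispensed volume = mass x Z-factor.
    Z_FACTOR = 1.0029  # μL per mg, assumed ambient conditions

    def dispensed_volume_ul(mass_mg, z=Z_FACTOR):
        return mass_mg * z

    def systematic_error_percent(masses_mg, nominal_ul):
        mean_v = sum(dispensed_volume_ul(m) for m in masses_mg) / len(masses_mg)
        return 100.0 * (mean_v - nominal_ul) / nominal_ul

    # Ten replicate weighings of a nominal 100 μL dispense (made-up data):
    samples = [99.61, 99.72, 99.55, 99.80, 99.67,
               99.58, 99.74, 99.63, 99.69, 99.71]
    print(f"systematic error: {systematic_error_percent(samples, 100.0):+.2f}%")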
The colorimetric method uses precise concentrations of colored water to perform the measurement and determine the volume dispensed. A spectrophotometer is used to measure the color difference before and after aspiration of the sample, providing a very accurate reading. This method is more expensive than the more common gravimetric method, given the cost of the colored reagents, and is recommended when optimal accuracy is required. It is also recommended for extremely low-volume pipette calibration, in the 2 microliter range, because the inherent uncertainties of the gravimetric method, performed with standard laboratory balances, become excessive. Properly calibrated microbalances, capable of reading in the range of micrograms (10⁻⁶ g), can also be used effectively for gravimetric analysis of low-volume micropipettes, but only if environmental conditions are under strict control. Six-place balances and environmental controls dramatically increase the cost of such calibrations.
Additional images
References
Laboratory equipment
Microbiology equipment
Molecular biology laboratory equipment
Volumetric instruments | Air displacement pipette | [
"Chemistry",
"Technology",
"Engineering",
"Biology"
] | 2,388 | [
"Measuring instruments",
"Microbiology equipment",
"Molecular biology laboratory equipment",
"Molecular biology techniques",
"Volumetric instruments"
] |
24,355,140 | https://en.wikipedia.org/wiki/Multiaxis%20machining | Multiaxis machining is a manufacturing process that involves tools that move in 4 or more directions and are used to manufacture parts out of metal or other materials by milling away excess material, by water jet cutting or by laser cutting. This type of machining was originally performed mechanically on large complex machines. These machines operated on 4, 5, 6, and even 12 axes which were controlled individually via levers that rested on cam plates. The cam plates offered the ability to control the tooling device, the table in which the part is secured, as well as rotating the tooling or part within the machine. Due to the machines' size and complexity, it took extensive amounts of time to set them up for production. Once computer numerical control (CNC) machining was introduced, it provided a faster, more efficient method for machining complex parts.
Typical CNC tools support translation along 3 axes; multiaxis machines also support rotation around one or more axes. 5-axis machines are commonly used in industry, in which the workpiece is translated linearly along three axes (typically x, y, and z) and the tooling spindle is capable of rotation about an additional 2 axes.
There are now many CAM (computer-aided manufacturing) software systems available to support multiaxis machining, including software that can automatically convert 3-axis toolpaths into 5-axis toolpaths. Prior to the advancement of computer-aided manufacturing, transferring information from design to production often required extensive manual labor, generating errors and resulting in wasted time and material.
There are three main components to multiaxis machines:
The machine's physical capabilities, i.e. torque, spindle speed, and axis orientation/operation.
The CNC drive system, the components that move the machine. This includes servo-motors, rapid traverse systems, ball screws, and how positioning is monitored.
The CNC controller: how data is transferred to and stored within the machine, and how input data is processed and executed.
Multiaxis machines offer several improvements over other CNC tools, at the cost of increased complexity and price of the machine:
The amount of human labor is reduced, if the piece would otherwise have to be turned manually during the machining.
A better surface finish can be obtained by moving the tool tangentially about the surface (as opposed to moving the workpiece around the spindle).
More complex parts can be manufactured, particularly parts with curved holes.
Increased tool life due to the ability to achieve optimal angles between the tool and machining surface.
Higher quality parts. What once required multiple setups now can be executed in only a few if not one, reducing steps and decreasing the opportunity for error.
The number of axes for multiaxis machines varies from 4 to 9. Each axis of movement is implemented either by moving the table (into which the workpiece is attached), or by moving the tool. The actual configuration of axes varies, therefore machines with the same number of axes can differ in the movements that can be performed.
Applications
Multiaxis CNC machines are used in many industries including:
Aerospace industry: Multiaxis machines are used in the manufacturing of aircraft parts, which allow for complex parts to be made efficiently.
Automotive industry: Multiaxis CNC machines create engine housings, rims and headlights.
Furniture industry: CNC lathes mass-produce wooden table legs as well as most other components.
Medical industry: Multiaxis CNC machines create custom hip replacements, dental implants, and prosthetic limbs.
Multiaxis machining is also commonly used for rapid prototyping as it can create strong, high quality models out of metal, plastic, and wood while still being easily programmable.
Computer-aided manufacturing (CAM) software
CAM software automates the process of converting 3D models into tool paths, the routes the multiaxis machine takes to mill a part. This software takes into account the different parameters of the tool head (in the case of a CNC router, this would be the bit size), dimensions of the blank, and any constraints the machine may have. Tool paths for multiple passes can be generated to produce a higher level of detail on the parts. The first few passes remove large amounts of material, while the final, most important pass creates the surface finish. In the case of the CNC lathe, the CAM software will optimize the tool path to have the central axis of the part align with the rotary of the lathe. Once the tool paths have been generated, the CAM software will convert them into G-code, allowing the CNC machine to begin milling.
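As a toy illustration of that final step, the Python sketch below turns a polyline toolpath into 3-axis G-code moves; real CAM output adds feeds and speeds per operation, tool compensation, and machine-specific post-processing that this sketch omits.

    # Convert a polyline toolpath into simple G-code (G0 rapid, G1 cut).
    def toolpath_to_gcode(points, safe_z=5.0, feed=300):
        lines = [f"G0 Z{safe_z}"]                       # retract to safe height
        x0, y0, _ = points[0]
        lines.append(f"G0 X{x0} Y{y0}")                 # rapid to start position
        for x, y, z in points:
            lines.append(f"G1 X{x} Y{y} Z{z} F{feed}")  # linear cutting move
        lines.append(f"G0 Z{safe_z}")                   # retract when done
        return "\n".join(lines)

    # A short three-point pass at a cutting depth of -1.0 mm:
    print(toolpath_to_gcode([(0, 0, -1.0), (10, 0, -1.0), (10, 10, -1.0)]))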
CAM software is currently the limiting factor in the capabilities of a multiaxis machine, and its development is ongoing. Recent breakthroughs in this space include:
Topology optimization, an algorithm that refines 3D models to be more efficient and cost-effective on CNC machines.
Automated recognition of 3D model features, which can simplify tool path generation by identifying instructions for the machine to follow from the features of the 3D model.
See also
Machine tool
Milling machine
Numerical control
CNC pocket milling
References
Machining
Computer-aided engineering | Multiaxis machining | [
"Engineering"
] | 1,036 | [
"Construction",
"Industrial engineering",
"Computer-aided engineering"
] |
24,357,501 | https://en.wikipedia.org/wiki/Project%20Icarus%20%28photography%29 | Project Icarus was a project at the Massachusetts Institute of Technology (MIT) in 2009.
Project
Project Icarus was an experiment in 2009 to launch a camera into the stratosphere undertaken by MIT students, Justin Lee and Oliver Yeh.
The launch vehicle consisted of a weather balloon filled with helium and a styrofoam beer cooler that hung underneath the balloon. Inside the cooler was a Canon A470 compact camera, hacked using the Canon Hack Development Kit (CHDK) open-source firmware to shoot pictures at five-second intervals. To keep the temperature of the batteries high enough for the camera to work, they were warmed by instant hand warmers. In order to keep track of the vehicle's location, a prepaid GPS-equipped cellphone was included.
The launch occurred in Sturbridge, Massachusetts at 11:45 am on September 2, 2009. The device rose high into the stratosphere before free-falling back to Earth. It was eventually recovered in Worcester, Massachusetts. The mission was a success, and the pictures were retrieved. The project cost only $148.
"We looked at these photographs and thought wow, these are beautiful—this is artwork," said Lee. "This inspired us to sit down and really think deep about the relationships between science and art."
According to the Federal Aviation Administration, the launch was legal because the payload was under the weight limit that triggers regulation. However, they advised anyone interested in a future launch to contact the federal agency beforehand.
The MIT students were not the first to take pictures of the Earth using helium balloons, but this experiment is noteworthy because it used inexpensive consumer products and did not require specialized hardware.
See also
1566 Icarus
Icarus
Project Icarus (interstellar)
References
External links
Project Icarus
Spaceflight
2009 in science
2009 in the United States
2009 in Massachusetts
History of the Massachusetts Institute of Technology
Sturbridge, Massachusetts
History of Worcester, Massachusetts | Project Icarus (photography) | [
"Astronomy"
] | 378 | [
"Spaceflight",
"Outer space"
] |