Source: https://en.wikipedia.org/wiki/Wildlife%20of%20Pakistan

The wildlife of Pakistan comprises a diverse flora and fauna in a wide range of habitats, from sea level to high-elevation areas in the mountains, including 195 mammal species, 668 bird species and more than 5,000 species of invertebrates. This diverse composition of the country's fauna is associated with its location in the transitional zone between two major zoogeographical regions, the Palearctic and the Oriental. The northern regions of Pakistan, which include Khyber Pakhtunkhwa and Gilgit-Baltistan, include portions of two biodiversity hotspots, the Mountains of Central Asia and the Himalayas.
Habitats
Northern highlands and plains
The northern highlands include lower-elevation areas of the Potohar Plateau and Pakistan-administered Jammu and Kashmir, and higher-elevation areas embracing the foothills of the Himalaya, Karakoram and Hindu Kush mountain ranges. These areas provide an excellent habitat for wildlife in the form of alpine grazing lands, sub-alpine scrub and temperate forests.
Some of the wildlife species found in the northern mountainous areas and the Potohar Plateau include the bharal, Eurasian lynx, Himalayan goral, Marco Polo sheep, marmot (in Deosai National Park) and yellow-throated marten; bird species such as the chukar partridge, Eurasian eagle-owl, Himalayan monal and Himalayan snowcock; and amphibian species such as the Himalayan toad and Murree Hills frog.
Threatened species include the snow leopard, Himalayan brown bear, Indian wolf, rhesus macaque, markhor, Siberian ibex and white-bellied musk deer.
Bird species present include the cheer pheasant, peregrine falcon and western tragopan.
Indus plains and deserts of Sindh
The Indus River and its numerous eastern tributaries (the Chenab, Ravi, Sutlej, Jhelum and Beas) are spread across most of Punjab. The plain of the Indus continues towards and occupies most of western Sindh. The plains have many fluvial landforms (including bars, flood plains, levees, meanders and oxbows) that support various natural biomes, including tropical and subtropical dry and moist broadleaf forests as well as tropical and xeric shrublands (the deserts of Thal and Cholistan in Punjab, and Nara and Thar in Sindh). The banks and stream beds of the river system also support riparian woodlands featuring tree species such as kikar, mulberry and sheesham. These geographical landforms, accompanied by a monsoon climate, provide excellent ground for a diversity of flora and fauna. However, the plains are equally appealing to humans for agriculture and the development of civilization.
Some of the non-threatened mammal species include the nilgai, red fox, golden jackal and wild boar; bird species include the Alexandrine parakeet, barn owl, black kite, myna, hoopoe, Indian peafowl, red-vented bulbul, rock pigeon, shelduck and shikra; reptile species include the Indian cobra, Indian star tortoise, Sindh krait and yellow monitor; and amphibian species include the Indus Valley bullfrog and Indus Valley toad. Some of the threatened mammal species include the axis deer, blackbuck (in captivity; extinct in the wild), hog deer, dhole, Indian leopard, Indian pangolin, Punjab urial and Sindh ibex; threatened bird species include the white-backed vulture; and threatened reptile species include the black pond turtle and gharial. The grey partridge is one of the few birds that can be found in the Cholistan desert.
Mugger crocodiles inhabit the Deh Akro-II Desert Wetland Complex, Nara Desert Wildlife Sanctuary, Chotiari Reservoir and Haleji Lake.
Western highlands, plains and deserts
The western region of Pakistan, most of which lies in Balochistan province, has a complex geography. In the mountainous highlands, habitat ranges from deodar conifer forests in Waziristan to juniper forests in Ziarat. Numerous mountain ranges surround the huge lowland plains of the Balochistan Plateau, through which a rather intricate meshwork of seasonal rivers and salt pans is spread. Deserts are also present, showing xeric shrubland vegetation. Date palms and ephedra are common flora varieties in the desert.
The Balochistan leopard has been described from this region. Some of the mammal species include the caracal, Balochistan dormouse, Blanford's fox, dromedary camel, goitered gazelle, Indian crested porcupine, long-eared hedgehog, markhor, ratel, and striped hyena, bird species of bearded vulture, houbara bustard and merlin, reptile species of leopard gecko and saw-scaled viper and amphibian species of Balochistan toad. The Pallas's cat lives in the rocky slopes.
Wetlands, coastal regions and marine life
There are a number of protected wetlands (under the Ramsar Convention) in Pakistan. These include Tanda Dam and Thanedar Wala in Khyber Pakhtunkhwa; Chashma Barrage, Taunsa Barrage and the Uchhali Complex in Punjab; Haleji Lake, Hub Dam and Kinjhar Lake in Sindh; and Miani Hor in Balochistan. The wetlands are a habitat for migratory birds such as the Dalmatian pelican and demoiselle crane, as well as predatory species such as the osprey, common kingfisher, fishing cat and leopard cat near the coastline. The Chashma and Taunsa Barrage Dolphin Sanctuary protects the threatened Indus river dolphin, which lives in fresh water.
The eastern half of the coast of Pakistan is located in the south of Sindh province, which features the Indus River Delta and the coast of the Great Rann of Kutch. The largest saltwater wetland in Pakistan is the Indus River Delta. Unlike many other river deltas, it consists of clay soil and is very swampy. The western coast of the Great Rann of Kutch, east of the Indus River Delta and below the Tharparkar desert, is one of the few places where greater flamingos come to breed. It is also a habitat for the endangered lesser florican. Unlike the Indus River Delta, this part of the coast is not as swampy and exhibits shrubland vegetation of rather dry thorny shrubs as well as marsh grasses of Apluda and Cenchrus.
The vegetation of the Indus River Delta is mainly represented by various mangrove and bamboo species. The Indus River Delta–Arabian Sea mangroves are an ecoregion designated by the WWF. Nearly 95% of the mangroves located in the Indus River Delta are of the species Avicennia marina; very small patches of Ceriops roxburghiana and Aegiceras corniculatum are found. These provide breeding grounds for the common snakehead, giant snakehead, Indus baril and many species of catfish such as rita. The hilsa swims up from the Arabian Sea to spawn in fresh water. Species that are important to people as food, such as the golden mahseer and large freshwater shrimps (Macrobrachium species), are part of the abundant aquatic life.
The western half of the Pakistan coast is in the south of Balochistan province. It is also called the Makran coast and features protected sites such as Astola Island and Hingol National Park. The three major mangrove plantations of the Balochistan coast are Miani Hor, Kalmat Khor and Gwatar Bay. Miani Hor is a swampy lagoon on the coast in Lasbela District, where the climate is very arid; its source of fresh water is the seasonal Porali River. The nearest river to the second lagoon, Kalmat Khor, is the Basol River. Gwatar Bay, the third site, is an open bay with a mouth almost as wide as its length; its freshwater source is the Dasht River, the largest seasonal river of Balochistan. All three bays support mainly the A. marina species of mangrove. Pakistan also plans to rehabilitate mangrove-degraded areas at Sonmiani and Jiwani in Balochistan.
Along the shores of Astola and Ormara beaches in Balochistan and Hawke's Bay and Sandspit beaches in Sindh are nesting sites for five endangered species of sea turtles: the green, loggerhead, hawksbill, olive ridley and leatherback. Sea snakes such as the yellow-bellied sea snake are also found in the pelagic zone of the sea. The wetlands of Pakistan are also home to the mugger crocodile, which prefers freshwater habitats.
Extinct
Regionally extinct species in Pakistan include:
Indian rhinoceros (since the 17th century)
Asian elephant
Asiatic lion
Asiatic cheetah
Bengal tiger
Barasingha
Indian wild ass (since the installation of a fenced border at Sir Creek between India and Pakistan)
Kashmir stag (possibly extinct)
Regional departments
Balochistan Forests & Wildlife Department
Climate Change, Forestry, Environment & Wildlife Department, Khyber Pakhtunkhwa
Forest, Wildlife & Environment Department, Gilgit-Baltistan
Forestry, Wildlife and Fisheries department, Punjab
Sindh Wildlife Department
See also
List of mammals of Pakistan
List of reptiles of South Asia
Wildflowers of Pakistan
Invertebrates of Pakistan
Non-marine molluscs of Pakistan
Butterflies of Pakistan
Spiders of Pakistan
Vertebrates of Pakistan
Fishes of Pakistan
Amphibians of Pakistan
Reptiles of Pakistan
Birds of Pakistan (Birds of Islamabad)
Mammals of Pakistan
References
External links
Pakistan Wildlife Foundation
World Wide Fund for Nature - Pakistan
Wildlife of Pakistan
Forest, Wildlife & Fisheries Department - Government of the Punjab
List of Environmental Protection Agencies in Pakistan.
Pakistan
Source: https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra%20by%20spherical%20triangle

There are many relations among the uniform polyhedra. This list of uniform polyhedra by spherical triangle groups them by the Wythoff symbol.
Key
The vertex figure can be discovered by considering the Wythoff symbol:
p|q r - 2p edges, alternating q-gons and r-gons. Vertex figure (q.r)^p.
p|q 2 - p edges, q-gons (here r = 2, so the r-gons are degenerate lines).
2|q r - 4 edges, alternating q-gons and r-gons. Vertex figure q.r.q.r.
q r|p - 4 edges, alternating 2p-gons, q-gons, 2p-gons, r-gons. Vertex figure 2p.q.2p.r.
q 2|p - 3 edges, 2p-gons, q-gons, 2p-gons. Vertex figure 2p.q.2p.
p q r| - 3 edges, 2p-gons, 2q-gons, 2r-gons. Vertex figure 2p.2q.2r.
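These rules are mechanical enough to encode. The sketch below is my own illustration, not part of the list itself; the `kind` tags are invented labels for the symbol forms above.

```python
def vertex_figure(kind, p, q, r=2):
    """Vertex configuration implied by a Wythoff symbol, per the key above.
    `kind` is an invented tag naming which form of the symbol is meant."""
    if kind == "p|qr":      # p | q r : (q.r)^p
        return ".".join(f"{q}.{r}" for _ in range(p))
    if kind == "qr|p":      # q r | p : 2p.q.2p.r
        return f"{2 * p}.{q}.{2 * p}.{r}"
    if kind == "q2|p":      # q 2 | p : 2p.q.2p
        return f"{2 * p}.{q}.{2 * p}"
    if kind == "pqr|":      # p q r | : 2p.2q.2r
        return f"{2 * p}.{2 * q}.{2 * r}"
    raise ValueError(kind)

# 2 | 3 4 (cuboctahedron) -> 3.4.3.4 ; 2 3 | 4 (truncated cube) -> 8.3.8
```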
Convex
Non-convex
The triangle groups covered are:
a b 2 groups: 3 3 2, 4 3 2, 5 3 2, 5 5 2
a b 3 groups: 3 3 3, 4 3 3, 5 3 3, 4 4 3, 5 5 3
a b 5 groups: 5 5 5
Uniform polyhedra
Source: https://en.wikipedia.org/wiki/Gamma%20distribution

In probability theory and statistics, the gamma distribution is a versatile two-parameter family of continuous probability distributions. The exponential distribution, Erlang distribution, and chi-squared distribution are special cases of the gamma distribution. There are two equivalent parameterizations in common use:
With a shape parameter α and a scale parameter θ
With a shape parameter α and a rate parameter λ = 1/θ
In each of these forms, both parameters are positive real numbers.
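A quick numeric check of the equivalence (a sketch using only the Python standard library; the function names are mine): the two densities agree whenever λ = 1/θ.

```python
import math

def gamma_pdf_scale(x, alpha, theta):
    # shape/scale form: x^(a-1) e^(-x/theta) / (Gamma(a) theta^a)
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

def gamma_pdf_rate(x, alpha, lam):
    # shape/rate form: lam^a x^(a-1) e^(-lam x) / Gamma(a)
    return lam ** alpha * x ** (alpha - 1) * math.exp(-lam * x) / math.gamma(alpha)

# identical when lam = 1/theta
x, alpha, theta = 2.5, 3.0, 1.5
assert abs(gamma_pdf_scale(x, alpha, theta) - gamma_pdf_rate(x, alpha, 1 / theta)) < 1e-12
```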
The distribution has important applications in various fields, including econometrics, Bayesian statistics, and life testing. In econometrics, the (α, θ) parameterization is common for modeling waiting times, such as the time until death, where it often takes the form of an Erlang distribution for integer α values. Bayesian statisticians prefer the (α, λ) parameterization, utilizing the gamma distribution as a conjugate prior for several inverse scale parameters, facilitating analytical tractability in posterior distribution computations. The probability density and cumulative distribution functions of the gamma distribution vary based on the chosen parameterization, both offering insights into the behavior of gamma-distributed random variables. The gamma distribution is integral to modeling a range of phenomena due to its flexible shape, which can capture various statistical distributions, including the exponential and chi-squared distributions under specific conditions. Its mathematical properties, such as mean, variance, skewness, and higher moments, provide a toolset for statistical analysis and inference. Practical applications of the distribution span several disciplines, underscoring its importance in theoretical and applied statistics.
The gamma distribution is the maximum entropy probability distribution (both with respect to a uniform base measure and a 1/x base measure) for a random variable X for which E[X] = αθ = α/λ is fixed and greater than zero, and E[ln X] = ψ(α) + ln θ = ψ(α) − ln λ is fixed (ψ is the digamma function).
Definitions
The parameterization with α and θ appears to be more common in econometrics and other applied fields, where the gamma distribution is frequently used to model waiting times. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution. See Hogg and Craig for an explicit motivation.
The parameterization with α and λ is more common in Bayesian statistics, where the gamma distribution is used as a conjugate prior distribution for various types of inverse scale (rate) parameters, such as the λ of an exponential distribution or a Poisson distribution – or for that matter, the λ of the gamma distribution itself. The closely related inverse-gamma distribution is used as a conjugate prior for scale parameters, such as the variance of a normal distribution.
If α is a positive integer, then the distribution represents an Erlang distribution; i.e., the sum of α independent exponentially distributed random variables, each of which has a mean of θ.
Characterization using shape α and rate λ
The gamma distribution can be parameterized in terms of a shape parameter α and an inverse scale parameter λ = 1/θ, called a rate parameter. A random variable X that is gamma-distributed with shape α and rate λ is denoted
X ~ Γ(α, λ) ≡ Gamma(α, λ)
The corresponding probability density function in the shape-rate parameterization is
f(x; α, λ) = λ^α x^(α−1) e^(−λx) / Γ(α)   for x > 0 and α, λ > 0,
where Γ(α) is the gamma function. For all positive integers n, Γ(n) = (n − 1)!.
The cumulative distribution function is the regularized gamma function:
F(x; α, λ) = γ(α, λx) / Γ(α),
where γ(α, λx) is the lower incomplete gamma function.
If α is a positive integer (i.e., the distribution is an Erlang distribution), the cumulative distribution function has the following series expansion:
F(x; α, λ) = 1 − e^(−λx) Σ_{i=0}^{α−1} (λx)^i / i!
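The series expansion can be checked against a direct numerical integration of the density (a sketch, stdlib only; the helper names are mine):

```python
import math

def erlang_cdf(x, k, lam):
    # F(x) = 1 - exp(-lam x) * sum_{i=0}^{k-1} (lam x)^i / i!
    return 1.0 - math.exp(-lam * x) * sum((lam * x) ** i / math.factorial(i) for i in range(k))

def gamma_pdf(x, k, lam):
    return lam ** k * x ** (k - 1) * math.exp(-lam * x) / math.gamma(k)

# trapezoidal integration of the pdf from 0 to x
k, lam, x = 4, 2.0, 1.7
n = 100000
h = x / n
numeric = h * (sum(gamma_pdf(i * h, k, lam) for i in range(1, n)) + gamma_pdf(x, k, lam) / 2)
assert abs(numeric - erlang_cdf(x, k, lam)) < 1e-6
```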
Characterization using shape α and scale θ
A random variable X that is gamma-distributed with shape α and scale θ is denoted by
X ~ Γ(α, θ) ≡ Gamma(α, θ)
The probability density function using the shape-scale parametrization is
f(x; α, θ) = x^(α−1) e^(−x/θ) / (Γ(α) θ^α)   for x > 0 and α, θ > 0.
Here Γ(α) is the gamma function evaluated at α.
The cumulative distribution function is the regularized gamma function:
F(x; α, θ) = γ(α, x/θ) / Γ(α),
where γ(α, x/θ) is the lower incomplete gamma function.
It can also be expressed as follows, if α is a positive integer (i.e., the distribution is an Erlang distribution):
F(x; α, θ) = 1 − e^(−x/θ) Σ_{i=0}^{α−1} (x/θ)^i / i!
Both parametrizations are common because either can be more convenient depending on the situation.
Properties
Mean and variance
The mean of the gamma distribution is given by the product of its shape and scale parameters:
E[X] = αθ = α/λ
The variance is:
Var(X) = αθ² = α/λ²
The square root of the inverse shape parameter gives the coefficient of variation:
CV = σ/μ = 1/√α
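These moment formulas are easy to sanity-check by simulation (a sketch using Python's stdlib `random.gammavariate`, which takes shape and scale):

```python
import math
import random
import statistics

random.seed(1)
alpha, theta = 5.0, 2.0
xs = [random.gammavariate(alpha, theta) for _ in range(200000)]

sample_mean = statistics.fmean(xs)
sample_var = statistics.pvariance(xs)
sample_cv = math.sqrt(sample_var) / sample_mean

assert abs(sample_mean - alpha * theta) < 0.1        # mean = alpha * theta = 10
assert abs(sample_var - alpha * theta ** 2) < 0.5    # variance = alpha * theta^2 = 20
assert abs(sample_cv - 1 / math.sqrt(alpha)) < 0.01  # CV = 1/sqrt(alpha)
```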
Skewness
The skewness of the gamma distribution only depends on its shape parameter, α, and it is equal to 2/√α.
Higher moments
The n-th raw moment is given by:
E[X^n] = θ^n Γ(α + n) / Γ(α) = θ^n α(α + 1)⋯(α + n − 1)
Median approximations and bounds
Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median does not have a closed-form equation. The median for this distribution is the value ν such that F(ν; α, θ) = 1/2.
A rigorous treatment of the problem of determining an asymptotic expansion and bounds for the median of the gamma distribution was handled first by Chen and Rubin, who proved that (for θ = 1)
α − 1/3 < ν(α) < α,
where μ = α is the mean and ν(α) is the median of the Gamma(α, 1) distribution. For other values of the scale parameter, the mean scales to μ = αθ, and the median bounds and approximations would be similarly scaled by θ.
K. P. Choi found the first five terms in a Laurent series asymptotic approximation of the median by comparing the median to Ramanujan's theta function. Berg and Pedersen found more terms:
ν(α) = α − 1/3 + 8/(405α) + 184/(25515α²) + ⋯
Partial sums of these series are good approximations for high enough α; they are not plotted in the figure, which is focused on the low-α region that is less well approximated.
Berg and Pedersen also proved many properties of the median, showing that it is a convex function of α, that its asymptotic behavior near α → 0 is ν(α) ≈ e^(−γ) 2^(−1/α) (where γ is the Euler–Mascheroni constant), and that for all α > 0 the median lies below the mean α.
A closer linear upper bound, for α ≥ 1 only, was provided in 2021 by Gaunt and Merkle, relying on the Berg and Pedersen result that the slope of ν(α) is everywhere less than 1:
ν(α) ≤ α − 1 + log 2   for α ≥ 1 (with equality at α = 1),
which can be extended to a bound for all α > 0 by taking the max with the chord shown in the figure, since the median was proved convex.
An approximation to the median that is asymptotically accurate at high α and reasonable down to small values of α follows from the Wilson–Hilferty transformation:
ν(α) ≈ α (1 − 1/(9α))³,
which goes negative for α < 1/9.
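A quick empirical check of the Wilson–Hilferty approximation against a sample median (sketch, stdlib only):

```python
import random

random.seed(2)
alpha = 4.0
wh_median = alpha * (1 - 1 / (9 * alpha)) ** 3   # Wilson-Hilferty approximation

# empirical median of 100001 Gamma(4, 1) draws
xs = sorted(random.gammavariate(alpha, 1.0) for _ in range(100001))
empirical_median = xs[50000]
assert abs(empirical_median - wh_median) < 0.1
```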
In 2021, Lyon proposed several approximations of the form ν(α) ≈ 2^(−1/α)(A + α). He conjectured values of the constant A for which this approximation is an asymptotically tight upper or lower bound for all α. In particular, he proposed closed-form bounds of this form, which he proved in 2023: one is a lower bound, asymptotically tight as α → 0, and another is an upper bound, asymptotically tight as α → ∞.
Lyon also showed (informally in 2021, rigorously in 2023) two other lower bounds that are not closed-form expressions: one involving the gamma function, based on solving an integral expression (approaching equality as α → 0), and one from the tangent line at α = 1, where the derivative of the median was found exactly (with equality at α = 1); the latter expression involves the exponential integral Ei.
Additionally, he showed that interpolations between bounds could provide excellent approximations or tighter bounds to the median, including an approximation that is exact at α = 1 (where ν(1) = log 2) and has a maximum relative error less than 0.6%. Interpolated approximations and bounds all combine a low-α form and a high-α form using an interpolating function g(α) running monotonically from 0 at low α to 1 at high α, approximating an ideal, or exact, interpolator.
For the simplest interpolating function considered, a first-order rational function g(α) = α/(b + α), particular choices of the constant b give the tightest lower bound and the tightest upper bound of this form.
The interpolated bounds are plotted (mostly inside the yellow region) in the log–log plot shown. Even tighter bounds are available using different interpolating functions, but not usually with closed-form parameters like these.
Summation
If X_i has a Gamma(α_i, θ) distribution for i = 1, 2, ..., N (i.e., all distributions have the same scale parameter θ), then
X_1 + X_2 + ⋯ + X_N ~ Gamma(α_1 + α_2 + ⋯ + α_N, θ),
provided all X_i are independent.
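The summation property can be illustrated by simulation (sketch, stdlib only): summing independent gamma variates with a common scale gives a gamma variate whose shape is the sum of the shapes.

```python
import random
import statistics

random.seed(3)
theta = 2.0
alphas = [1.0, 2.5, 0.5]            # total shape = 4.0
sums = [sum(random.gammavariate(a, theta) for a in alphas) for _ in range(100000)]

# Gamma(4.0, 2.0): mean = 4 * 2 = 8, variance = 4 * 4 = 16
assert abs(statistics.fmean(sums) - sum(alphas) * theta) < 0.1
assert abs(statistics.pvariance(sums) - sum(alphas) * theta ** 2) < 0.5
```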
For the cases where the X_i are independent but have different scale parameters, see Mathai or Moschopoulos.
The gamma distribution exhibits infinite divisibility.
Scaling
If
X ~ Gamma(α, θ) (shape-scale parameterization)
then, for any c > 0,
cX ~ Gamma(α, cθ)
by moment generating functions, or equivalently, if
X ~ Gamma(α, λ) (shape-rate parameterization)
then cX ~ Gamma(α, λ/c).
Indeed, we know that if X is an exponential r.v. with rate λ, then cX is an exponential r.v. with rate λ/c; the same thing is valid with gamma variates (and this can be checked using the moment-generating function): multiplication by a positive constant divides the rate (or, equivalently, multiplies the scale).
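The scaling rule can be verified exactly from the density (deterministic sketch): the density of cX at x is (1/c)·f(x/c; α, θ), which equals f(x; α, cθ).

```python
import math

def gamma_pdf(x, alpha, theta):
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

alpha, theta, c, x = 2.7, 1.3, 4.0, 5.0
lhs = gamma_pdf(x / c, alpha, theta) / c   # density of c*X at x, by change of variables
rhs = gamma_pdf(x, alpha, c * theta)       # density of Gamma(alpha, c*theta) at x
assert abs(lhs - rhs) < 1e-12
```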
Exponential family
The gamma distribution is a two-parameter exponential family with natural parameters α − 1 and −λ (equivalently, α − 1 and −1/θ), and natural statistics X and ln X.
If the shape parameter is held fixed, the resulting one-parameter family of distributions is a natural exponential family.
Logarithmic expectation and variance
One can show that
E[ln X] = ψ(α) − ln λ,
or equivalently,
E[ln X] = ψ(α) + ln θ,
where ψ is the digamma function. Likewise,
Var[ln X] = ψ₁(α),
where ψ₁ is the trigamma function.
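These identities are convenient to verify numerically; since the standard library has no digamma, the sketch below approximates ψ by a central difference of `math.lgamma` (my own helper):

```python
import math
import random
import statistics

def digamma(a, h=1e-5):
    # central-difference approximation to psi(a) = d/da ln Gamma(a)
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

random.seed(4)
alpha, theta = 3.0, 2.0
log_xs = [math.log(random.gammavariate(alpha, theta)) for _ in range(200000)]

# E[ln X] = psi(alpha) + ln(theta)
assert abs(statistics.fmean(log_xs) - (digamma(alpha) + math.log(theta))) < 0.01

# Var[ln X] = psi_1(alpha), approximated by differencing the digamma helper
trigamma = (digamma(alpha + 1e-4) - digamma(alpha - 1e-4)) / 2e-4
assert abs(statistics.pvariance(log_xs) - trigamma) < 0.01
```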
This can be derived using the exponential family formula for the moment generating function of the sufficient statistic, because one of the sufficient statistics of the gamma distribution is ln X.
Information entropy
The information entropy is
H(X) = α − ln λ + ln Γ(α) + (1 − α) ψ(α).
In the (α, θ) parameterization, the information entropy is given by
H(X) = α + ln θ + ln Γ(α) + (1 − α) ψ(α).
Kullback–Leibler divergence
The Kullback–Leibler divergence (KL-divergence) of Gamma(α_p, λ_p) ("true" distribution) from Gamma(α_q, λ_q) ("approximating" distribution) is given by
D_KL(α_p, λ_p; α_q, λ_q) = (α_p − α_q) ψ(α_p) − ln Γ(α_p) + ln Γ(α_q) + α_q (ln λ_p − ln λ_q) + α_p (λ_q − λ_p)/λ_p.
Written using the (α, θ) parameterization, the KL-divergence of Gamma(α_p, θ_p) from Gamma(α_q, θ_q) is given by the same expression with λ_p = 1/θ_p and λ_q = 1/θ_q.
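The shape-rate expression can be cross-checked by Monte Carlo, since D_KL(p‖q) = E_p[ln p(X) − ln q(X)] (sketch, stdlib only; the helper names are mine):

```python
import math
import random
import statistics

def log_pdf(x, a, lam):
    return a * math.log(lam) + (a - 1) * math.log(x) - lam * x - math.lgamma(a)

def digamma(a, h=1e-5):
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def kl_gamma(ap, lp, aq, lq):
    # closed form in the shape-rate parameterization
    return ((ap - aq) * digamma(ap) - math.lgamma(ap) + math.lgamma(aq)
            + aq * (math.log(lp) - math.log(lq)) + ap * (lq - lp) / lp)

random.seed(5)
ap, lp, aq, lq = 3.0, 1.5, 2.0, 1.0
mc = statistics.fmean(log_pdf(x, ap, lp) - log_pdf(x, aq, lq)
                      for x in (random.gammavariate(ap, 1 / lp) for _ in range(200000)))
assert abs(mc - kl_gamma(ap, lp, aq, lq)) < 0.01
```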
Laplace transform
The Laplace transform of the gamma PDF, which equals the moment-generating function of the gamma distribution evaluated at −s, is
E[e^(−sX)] = (1 + θs)^(−α) = λ^α / (λ + s)^α
(where X is a random variable with that distribution).
Related distributions
General
Let X_1, X_2, ..., X_n be n independent and identically distributed random variables following an exponential distribution with rate parameter λ. Then Σ_i X_i ~ Gamma(n, λ), where n is the shape parameter and λ is the rate, and the sample mean X̄ ~ Gamma(n, nλ).
If X ~ Gamma(1, λ) (in the shape–rate parametrization), then X has an exponential distribution with rate parameter λ. In the shape-scale parametrization, X ~ Gamma(1, θ) has an exponential distribution with rate parameter 1/θ.
If X ~ Gamma(ν/2, 2) (in the shape–scale parametrization), then X is identical to χ²(ν), the chi-squared distribution with ν degrees of freedom. Conversely, if Q ~ χ²(ν) and c is a positive constant, then cQ ~ Gamma(ν/2, 2c).
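The chi-squared identity is exact at the level of densities and can be checked directly (deterministic sketch):

```python
import math

def gamma_pdf(x, alpha, theta):
    return x ** (alpha - 1) * math.exp(-x / theta) / (math.gamma(alpha) * theta ** alpha)

def chi2_pdf(x, nu):
    # chi-squared density with nu degrees of freedom
    return x ** (nu / 2 - 1) * math.exp(-x / 2) / (2 ** (nu / 2) * math.gamma(nu / 2))

nu, x = 5, 3.7
assert abs(gamma_pdf(x, nu / 2, 2.0) - chi2_pdf(x, nu)) < 1e-12
```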
In a suitable parameterization, the gamma distribution is also known as the Schulz-Zimm distribution, which is most prominently used to model polymer chain lengths.
If k is an integer, the gamma distribution is an Erlang distribution and is the probability distribution of the waiting time until the k-th "arrival" in a one-dimensional Poisson process with intensity 1/θ. If
X ~ Gamma(k, θ) and Y ~ Poisson(x/θ),
then
P(X > x) = P(Y < k).
If X has a Maxwell–Boltzmann distribution with parameter a, then X² ~ Gamma(3/2, 2a²).
If X ~ Gamma(α, θ), then ln X follows an exponential-gamma (abbreviated exp-gamma) distribution. It is sometimes referred to as the log-gamma distribution. Formulas for its mean and variance are in the section #Logarithmic expectation and variance.
If X ~ Gamma(α, θ), then √X follows a generalized gamma distribution.
More generally, if X ~ Gamma(α, θ), then X^q for q > 0 follows a generalized gamma distribution.
If X ~ Gamma(α, θ) with shape α and scale θ, then 1/X ~ Inv-Gamma(α, 1/θ) (see Inverse-gamma distribution for derivation).
Parametrization 1: If X_1 ~ Gamma(α_1, θ_1) and X_2 ~ Gamma(α_2, θ_2) are independent, then (X_1/(α_1 θ_1)) / (X_2/(α_2 θ_2)) ~ F(2α_1, 2α_2).
Parametrization 2: If X_1 ~ Gamma(α_1, λ_1) and X_2 ~ Gamma(α_2, λ_2) are independent, then (λ_1 X_1/α_1) / (λ_2 X_2/α_2) ~ F(2α_1, 2α_2).
If X ~ Gamma(α, θ) and Y ~ Gamma(β, θ) are independently distributed, then X/(X + Y) has a beta distribution with parameters α and β, and X/(X + Y) is independent of X + Y, which is Gamma(α + β, θ)-distributed.
If X_n ~ Beta(α, n), then n X_n converges in distribution to Gamma(α, 1) defined under parametrization 2 (i.e., with rate 1) as n → ∞.
If X_i ~ Gamma(α_i, θ) are independently distributed, then the vector (X_1/S, ..., X_K/S), where S = X_1 + ⋯ + X_K, follows a Dirichlet distribution with parameters α_1, ..., α_K.
For large α the gamma distribution converges to a normal distribution with mean μ = αθ and variance σ² = αθ².
The gamma distribution is the conjugate prior for the precision of the normal distribution with known mean.
The matrix gamma distribution and the Wishart distribution are multivariate generalizations of the gamma distribution (samples are positive-definite matrices rather than positive real numbers).
The gamma distribution is a special case of the generalized gamma distribution, the generalized integer gamma distribution, and the generalized inverse Gaussian distribution.
Among the discrete distributions, the negative binomial distribution is sometimes considered the discrete analog of the gamma distribution.
Tweedie distributions – the gamma distribution is a member of the family of Tweedie exponential dispersion models.
Modified half-normal distribution – the gamma distribution is a member of the family of modified half-normal distributions, whose density can be expressed using the Fox–Wright Psi function.
For the shape-scale parameterization Gamma(α, θ), if the scale parameter θ is itself distributed according to an inverse-gamma distribution, then the marginal distribution of X is a beta prime distribution.
Compound gamma
If the shape parameter of the gamma distribution is known, but the inverse-scale parameter is unknown, then a gamma distribution for the inverse scale forms a conjugate prior. The compound distribution, which results from integrating out the inverse scale, has a closed-form solution known as the compound gamma distribution.
If, instead, the shape parameter is known but the mean is unknown, with the prior of the mean being given by another gamma distribution, then it results in K-distribution.
Weibull and stable count
The gamma distribution can be expressed as the product distribution of a Weibull distribution and a variant form of the stable count distribution.
Its shape parameter α can be regarded as the inverse of Lévy's stability parameter in the stable count distribution: the decomposition involves a standard stable count distribution of shape α and a standard Weibull distribution of shape α.
Statistical inference
Parameter estimation
Maximum likelihood estimation
The likelihood function for N iid observations (x_1, ..., x_N) is
L(α, θ) = ∏_{i=1}^N f(x_i; α, θ),
from which we calculate the log-likelihood function
ℓ(α, θ) = (α − 1) Σ_i ln x_i − Σ_i x_i/θ − Nα ln θ − N ln Γ(α).
Finding the maximum with respect to θ by taking the derivative and setting it equal to zero yields the maximum likelihood estimator of the θ parameter, which equals the sample mean x̄ divided by the shape parameter α:
θ̂ = x̄/α.
Substituting this into the log-likelihood function gives
ℓ(α) = (α − 1) Σ_i ln x_i − Nα − Nα ln(x̄/α) − N ln Γ(α).
We need at least two samples (N ≥ 2), because for N = 1 the function ℓ(α) increases without bounds as α → ∞. For N ≥ 2, it can be verified that ℓ(α) is strictly concave, by using inequality properties of the polygamma function. Finding the maximum with respect to α by taking the derivative and setting it equal to zero yields
ln α − ψ(α) = ln x̄ − (1/N) Σ_i ln x_i,
where ψ is the digamma function and (1/N) Σ_i ln x_i is the sample mean of ln x. There is no closed-form solution for α. The function is numerically very well behaved, so if a numerical solution is desired, it can be found using, for example, Newton's method. An initial value of α can be found either using the method of moments, or using the approximation
If we let
s = ln x̄ − (1/N) Σ_i ln x_i,
then α̂ is approximately
α̂ ≈ (3 − s + √((s − 3)² + 24s)) / (12s),
which is within 1.5% of the correct value. An explicit form for the Newton–Raphson update of this initial guess is:
α ← α − (ln α − ψ(α) − s) / (1/α − ψ₁(α)).
At the maximum-likelihood estimate (α̂, θ̂), the expected values for x and ln x agree with the empirical averages:
E[X] = α̂ θ̂ = x̄ and E[ln X] = ψ(α̂) + ln θ̂ = (1/N) Σ_i ln x_i.
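Putting the pieces together (the closed-form θ̂ given α, the initial guess for α̂, and the Newton update) gives a compact fitting routine. This is a sketch using only the stdlib; the digamma/trigamma helpers are finite-difference approximations of my own.

```python
import math
import random

def digamma(a, h=1e-6):
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

def trigamma(a, h=1e-4):
    return (math.lgamma(a + h) - 2 * math.lgamma(a) + math.lgamma(a - h)) / h ** 2

def fit_gamma_mle(xs, iterations=20):
    mean = sum(xs) / len(xs)
    mean_log = sum(math.log(x) for x in xs) / len(xs)
    s = math.log(mean) - mean_log
    a = (3 - s + math.sqrt((s - 3) ** 2 + 24 * s)) / (12 * s)   # initial guess
    for _ in range(iterations):                                 # Newton-Raphson
        a -= (math.log(a) - digamma(a) - s) / (1 / a - trigamma(a))
    return a, mean / a                                          # (shape, scale)

random.seed(6)
data = [random.gammavariate(4.0, 3.0) for _ in range(50000)]
a_hat, theta_hat = fit_gamma_mle(data)
assert abs(a_hat - 4.0) < 0.15 and abs(theta_hat - 3.0) < 0.15
```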
Caveat for small shape parameter
For data x_1, ..., x_N represented in a floating point format that underflows to 0 for values smaller than ε, the logarithms that are needed for the maximum-likelihood estimate will cause failure if there are any underflows. If we assume the data were generated by a gamma distribution with cdf F(x; α, θ), then the probability that there is at least one underflow is
1 − (1 − F(ε; α, θ))^N.
This probability approaches 1 for small α and large N. A workaround is to instead have the data in logarithmic format.
In order to test an implementation of a maximum-likelihood estimator that takes logarithmic data as input, it is useful to be able to generate non-underflowing logarithms of random gamma variates, when α < 1. Following the implementation in scipy.stats.loggamma, this can be done as follows: sample Y ~ Gamma(α + 1, θ) and U ~ Uniform(0, 1] independently. Then the required logarithmic sample is ln Z = ln Y + (ln U)/α, so that Z = Y U^(1/α) ~ Gamma(α, θ).
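A sketch of that sampler (modeled on the trick described above, not copied from SciPy): for tiny α the returned log-sample is finite even though the sample itself would underflow.

```python
import math
import random

def log_gamma_sample(alpha, theta=1.0, rng=random):
    # ln(Z) with Z ~ Gamma(alpha, theta), computed without forming Z:
    # Z = Y * U^(1/alpha) with Y ~ Gamma(alpha+1, theta), U ~ Uniform(0, 1]
    y = rng.gammavariate(alpha + 1.0, theta)
    u = 1.0 - rng.random()                  # uniform on (0, 1]
    return math.log(y) + math.log(u) / alpha

random.seed(7)
# for a moderate alpha, exponentiating recovers ordinary gamma samples
samples = [math.exp(log_gamma_sample(0.5)) for _ in range(200000)]
mean = sum(samples) / len(samples)
assert abs(mean - 0.5) < 0.02               # E[Gamma(0.5, 1)] = 0.5
```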
Closed-form estimators
There exist consistent closed-form estimators of α and θ that are derived from the likelihood of the generalized gamma distribution.
The estimate for the shape α is
α̂ = N Σ_i x_i / (N Σ_i x_i ln x_i − Σ_i ln x_i Σ_i x_i)
and the estimate for the scale θ is
θ̂ = (N Σ_i x_i ln x_i − Σ_i ln x_i Σ_i x_i) / N².
Using the sample mean of x, the sample mean of ln x, and the sample mean of the product x·ln x simplifies the expressions to:
θ̂ = mean(x ln x) − mean(x)·mean(ln x) and α̂ = mean(x)/θ̂.
If the rate parameterization is used, the estimate of the rate is λ̂ = 1/θ̂.
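In the sample-mean form, the estimators are two lines of code (sketch, stdlib only):

```python
import math
import random
import statistics

random.seed(8)
xs = [random.gammavariate(2.5, 1.8) for _ in range(200000)]

mean_x = statistics.fmean(xs)
mean_logx = statistics.fmean(math.log(x) for x in xs)
mean_xlogx = statistics.fmean(x * math.log(x) for x in xs)

theta_hat = mean_xlogx - mean_x * mean_logx   # estimate of the scale
alpha_hat = mean_x / theta_hat                # estimate of the shape

assert abs(theta_hat - 1.8) < 0.1 and abs(alpha_hat - 2.5) < 0.15
```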
These estimators are not strictly maximum likelihood estimators, but are instead referred to as mixed type log-moment estimators. However, they have efficiency similar to that of the maximum likelihood estimators. Although they are consistent, they have a small bias; bias-corrected variants of the estimators for both the scale and the shape have been derived.
Bayesian minimum mean squared error
With known α and unknown θ, the posterior density function for θ (using the standard scale-invariant prior p(θ) ∝ 1/θ) is
P(θ | x_1, ..., x_N) ∝ θ^(−Nα−1) e^(−y/θ).
Denoting
y ≡ Σ_i x_i,
integration with respect to θ can be carried out using a change of variables, revealing that 1/θ is gamma-distributed with shape Nα and rate y.
The moments can be computed by taking ratios of these integrals, which shows that the mean ± standard deviation estimate of the posterior distribution for θ is
y/(Nα − 1) ± y/((Nα − 1)√(Nα − 2)).
Bayesian inference
Conjugate prior
In Bayesian inference, the gamma distribution is the conjugate prior to many likelihood distributions: the Poisson, exponential, normal (with known mean), Pareto, gamma with known shape α, inverse gamma with known shape parameter, and Gompertz with known scale parameter.
The gamma distribution's conjugate prior is:
p(α, λ | p, q, r, s) = (1/Z) · p^(α−1) e^(−λq) λ^(αr) / Γ(α)^s,
where Z is the normalizing constant with no closed-form solution.
The posterior distribution can be found by updating the parameters as follows:
p′ = p · ∏_i x_i, q′ = q + Σ_i x_i, r′ = r + n, s′ = s + n,
where n is the number of observations, and x_i is the i-th observation.
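As a simpler, fully closed-form example of the conjugacies listed at the start of this section, a Gamma(a, rate b) prior on a Poisson rate updates to Gamma(a + Σx_i, b + n). A minimal sketch; the parameter names are mine:

```python
def update_poisson_gamma(a, b, counts):
    """Posterior hyperparameters for a Gamma(a, rate=b) prior on a Poisson rate,
    after observing the given counts."""
    return a + sum(counts), b + len(counts)

a_post, b_post = update_poisson_gamma(2.0, 1.0, [3, 5, 4])
assert (a_post, b_post) == (14.0, 4.0)
# posterior mean of the rate = a_post / b_post = 3.5
```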
Occurrence and applications
Consider a sequence of events, with the waiting time for each event being an exponential distribution with rate λ. Then the waiting time for the n-th event to occur is gamma-distributed with integer shape α = n. This construction of the gamma distribution allows it to model a wide variety of phenomena where several sub-events, each taking time with exponential distribution, must happen in sequence for a major event to occur. Examples include the waiting time of cell-division events, the number of compensatory mutations for a given mutation, the waiting time until a repair is necessary for a hydraulic system, and so on.
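The construction is direct to simulate (sketch, stdlib only): the waiting time for the n-th event is a sum of n exponential waits.

```python
import random
import statistics

random.seed(9)
lam, n = 2.0, 5
waits = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(100000)]

# Gamma(n, rate lam): mean = n/lam = 2.5, variance = n/lam^2 = 1.25
assert abs(statistics.fmean(waits) - n / lam) < 0.05
assert abs(statistics.pvariance(waits) - n / lam ** 2) < 0.05
```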
In biophysics, the dwell time between steps of a molecular motor like ATP synthase is nearly exponential at constant ATP concentration, revealing that each step of the motor takes a single ATP hydrolysis. If there were n ATP hydrolysis events, then it would be a gamma distribution with degree n.
The gamma distribution has been used to model the size of insurance claims and rainfalls. This means that aggregate insurance claims and the amount of rainfall accumulated in a reservoir are modelled by a gamma process – much like the exponential distribution generates a Poisson process.
The gamma distribution is also used to model errors in multi-level Poisson regression models because a mixture of Poisson distributions with gamma-distributed rates has a known closed form distribution, called negative binomial.
In wireless communication, the gamma distribution is used to model the multi-path fading of signal power; see also Rayleigh distribution and Rician distribution.
In oncology, the age distribution of cancer incidence often follows the gamma distribution, wherein the shape and scale parameters predict, respectively, the number of driver events and the time interval between them.
In neuroscience, the gamma distribution is often used to describe the distribution of inter-spike intervals.
In bacterial gene expression where protein production can occur in bursts, the copy number of a given protein often follows the gamma distribution, where the shape and scale parameters are, respectively, the mean number of bursts per cell cycle and the mean number of protein molecules produced per burst.
In genomics, the gamma distribution was applied in peak calling step (i.e., in recognition of signal) in ChIP-chip and ChIP-seq data analysis.
In Bayesian statistics, the gamma distribution is widely used as a conjugate prior. It is the conjugate prior for the precision (i.e. inverse of the variance) of a normal distribution. It is also the conjugate prior for the exponential distribution.
In phylogenetics, the gamma distribution is the most commonly used approach to model among-sites rate variation when maximum likelihood, Bayesian, or distance matrix methods are used to estimate phylogenetic trees. Phylogenetic analyses that use the gamma distribution to model rate variation estimate a single parameter from the data, because they limit consideration to distributions where the rate equals the shape. This parameterization means that the mean of this distribution is 1 and the variance is 1/α. Maximum likelihood and Bayesian methods typically use a discrete approximation to the continuous gamma distribution.
Random variate generation
Given the scaling property above, it is enough to generate gamma variables with , as we can later convert to any value of with a simple division.
Suppose we wish to generate random variables from , where n is a non-negative integer and . Using the fact that a distribution is the same as an distribution, and noting the method of generating exponential variables, we conclude that if is uniformly distributed on (0, 1], then is distributed (i.e. inverse transform sampling). Now, using the "-addition" property of gamma distribution, we expand this result:
where are all uniformly distributed on (0, 1] and independent. All that is left now is to generate a variable distributed as for and apply the "-addition" property once more. This is the most difficult part.
Random generation of gamma variates is discussed in detail by Devroye, noting that none are uniformly fast for all shape parameters. For small values of the shape parameter, the algorithms are often not valid. For arbitrary values of the shape parameter, one can apply the Ahrens and Dieter modified acceptance–rejection method Algorithm GD (shape k ≥ 1), or the transformation method when 0 < k < 1. Also see Cheng and Feast Algorithm GKM 3 or Marsaglia's squeeze method.
The following is a version of the Ahrens-Dieter acceptance–rejection method:
Generate U, V and W as iid uniform (0, 1] variates.
If U ≤ e/(e + δ) then ξ = V^(1/δ) and η = Wξ^(δ−1). Otherwise, ξ = 1 − ln V and η = We^(−ξ).
If η > ξ^(δ−1)e^(−ξ) then go to step 1.
ξ is distributed as Γ(δ, 1).
A summary of this is

θ(ξ − ln(U₁U₂⋯U_⌊k⌋)) ~ Γ(k, θ)

where ⌊k⌋ is the integer part of k, ξ is generated via the algorithm above with δ = {k} (the fractional part of k) and the Uᵢ are all uniformly distributed on (0, 1] and independent.
While the above approach is technically correct, Devroye notes that it is linear in the value of k and generally is not a good choice. Instead, he recommends using either rejection-based or table-based methods, depending on context.
For example, Marsaglia's simple transformation-rejection method relying on one normal variate X and one uniform variate U:
Set d = α − 1/3 and c = 1/√(9d).
Set v = (1 + cX)³.
If v > 0 and ln U < X²/2 + d − dv + d ln v return dv, else go back to step 2.
With α ≥ 1, this generates a gamma-distributed random number in time that is approximately constant with α. The acceptance rate does depend on α, with an acceptance rate of 0.95, 0.98, and 0.99 for α = 1, 2, and 4. For α < 1, one can use γ_α = γ_(1+α)·U^(1/α) to boost a Γ(1 + α, 1) variate to be usable with this method.
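A Python sketch of the Marsaglia–Tsang transformation-rejection method as commonly published (omitting the optional squeeze test, and using the U^(1/α) boost for α < 1); function and variable names are illustrative:

```python
import math
import random

def marsaglia_tsang(alpha, rng=random):
    """Sample Gamma(alpha, 1) by Marsaglia and Tsang's method (alpha >= 1),
    with the U**(1/alpha) boost handling 0 < alpha < 1."""
    if alpha < 1.0:
        # If Y ~ Gamma(alpha + 1, 1) and U is uniform on (0, 1),
        # then Y * U**(1/alpha) ~ Gamma(alpha, 1).
        return marsaglia_tsang(alpha + 1.0, rng) * rng.random() ** (1.0 / alpha)
    d = alpha - 1.0 / 3.0
    c = 1.0 / math.sqrt(9.0 * d)
    while True:
        x = rng.gauss(0.0, 1.0)
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = 1.0 - rng.random()  # in (0, 1], safe to take log of
        # Acceptance test in log form
        if math.log(u) < 0.5 * x * x + d - d * v + d * math.log(v):
            return d * v

random.seed(7)
mean = sum(marsaglia_tsang(2.0) for _ in range(100_000)) / 100_000
mean_small = sum(marsaglia_tsang(0.5) for _ in range(100_000)) / 100_000
print(mean, mean_small)  # near 2.0 and 0.5, the respective Gamma means
```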
In MATLAB, gamma random numbers can be generated using the function gamrnd(), which uses the shape and scale (α, θ) representation.
References
External links
ModelAssist (2017) Uses of the gamma distribution in risk modeling, including applied examples in Excel .
Engineering Statistics Handbook
Continuous distributions
Factorial and binomial topics
Conjugate prior distributions
Exponential family distributions
Infinitely divisible probability distributions
Survival analysis
Gamma and related functions | Gamma distribution | Mathematics | 4,681 |
17,879,473 | https://en.wikipedia.org/wiki/Yuwen%20Zhang | Yuwen Zhang is a Chinese American professor of mechanical engineering who is well known for his contributions to phase change heat transfer. He is presently a Curators' Distinguished Professor and Huber and Helen Croft Chair in Engineering in the Department of Mechanical and Aerospace Engineering at the University of Missouri in Columbia, Missouri.
Early life and education
Yuwen Zhang was born in 1964 in Xiaoyi, Shanxi, China and spent his early life there until 1981 when he graduated from Chengguan High School and was admitted to college. He earned his B.E. degree in thermal turbomachinery, M.E. and D.Eng. degrees in engineering thermophysics from Xi'an Jiaotong University, in 1985, 1988 and 1991, respectively. He received a Ph.D. degree in mechanical engineering from the University of Connecticut in 1998.
Career
He taught at Xi'an Jiaotong University from 1991 to 1994 and was a research associate at Wright State University (1994-1995) and University of Connecticut (1995-1996). He was a research scientist at University of Connecticut (1999-2000) and a senior engineer at Thermoflow, Inc. (2000) before joining the Department of Mechanical Engineering at New Mexico State University as an assistant professor in 2001. He joined the faculty at the Department of Mechanical and Aerospace Engineering at the University of Missouri (MU) in 2003 as an associate professor and became a full professor in 2009. He was awarded a James C. Dowell Professorship in 2012 and served as the Department Chair from 2013 to 2017. He was named a Curators' Distinguished Professor in 2020 and a Huber and Helen Croft Chair in Engineering in 2021.
Technical contributions
Yuwen Zhang's research area is in the field of heat and mass transfer with applications in nanomanufacturing, thermal management, and energy storage and conversion. He has published over 300 journal papers and more than 180 conference publications at national and international conferences.
He has developed pioneering models for latent heat thermal energy storage systems, as well as multiscale, multiphysics models of additive manufacturing (AM), including selective laser sintering (SLS) and laser chemical vapor deposition/infiltration (LCVD/LCVI). He was the first to develop fundamental models of fluid flow and heat transfer in oscillating heat pipes, heat transfer devices used in the thermal management of electronic devices and energy systems. He carried out theoretical studies of femtosecond laser interaction with metal and biological materials from molecular scales to system levels, and solved inverse heat transfer problems for the determination of the heating condition and/or temperature-dependent macro- and microscale thermophysical properties under uncertainty. He also investigated the mechanism of heat transfer enhancement in nanofluids, which are stable colloidal suspensions of solid nanomaterials with sizes typically on the order of 1-100 nm in a base fluid, via molecular dynamics (MD) simulations.
Thermal management and temperature uniformity improvement of Li-ion batteries using external and internal cooling methods were also systematically studied, utilizing pin-fin heat sinks and metal/non-metal foams, as well as electrolyte flow inside microchannels embedded in the porous electrodes as a novel internal cooling technique. Moreover, he has pioneered the application of AI and machine learning to the efficient and accurate solution of multiphase heat and mass transfer and inverse heat conduction problems.
Honors and awards
Fellow, American Society of Thermal and Fluids Engineers (ASTFE), 2024
Huber and Helen Croft Chair in Engineering, University of Missouri, 2021
Curators' Distinguished Professor, University of Missouri, 2020
Coulter Award, University of Missouri Coulter Translational Partnership Program, 2018
Fellow, American Association for the Advancement of Sciences (AAAS), 2015
James C. Dowell Professorship, University of Missouri, 2012
Certificate of Appreciation for Service as K-15 Committee Chair, ASME Heat Transfer Division, 2014
Certificate of Distinguished Service, American Institute of Aeronautics and Astronautics (AIAA), 2011
Missouri Honor Senior Faculty Research Award, College of Engineering at the University of Missouri, 2010
Chancellor's Award for Outstanding Research and Creative Activity, University of Missouri, 2010
Fellow of American Society of Mechanical Engineers (ASME), 2007
Associate Fellow of American Institute of Aeronautics and Astronautics (AIAA), 2008
Faculty Fellow Award, College of Engineering at University of Missouri, 2007
Computational Research Award, Department of Mechanical Engineering, New Mexico State University, 2003
Young Investigator Award, Office of Naval Research (one of 26 awarded nationally in all fields), 2002
References
External links
Biographical information at the University of Missouri
University of Missouri faculty
Educators from Columbia, Missouri
People from Columbia, Missouri
University of Connecticut alumni
Xi'an Jiaotong University alumni
New Mexico State University faculty
Fluid dynamicists
Thermodynamicists
Fellows of the American Society of Mechanical Engineers
Fellows of the American Association for the Advancement of Science
American mechanical engineers
Chinese emigrants to the United States
Chinese mechanical engineers
1965 births
Living people | Yuwen Zhang | Physics,Chemistry | 1,020 |
692,929 | https://en.wikipedia.org/wiki/3%2C4-Methylenedioxyamphetamine | 3,4-Methylenedioxyamphetamine (MDA), sometimes referred to as sass, is an empathogen-entactogen, stimulant, and psychedelic drug of the amphetamine family that is encountered mainly as a recreational drug. In its pharmacology, MDA is a serotonin–norepinephrine–dopamine releasing agent (SNDRA). In most countries, the drug is a controlled substance and its possession and sale are illegal.
MDA is rarely sought as a recreational drug compared to other amphetamines; however, it remains widely used due to it being a primary metabolite, the product of hepatic N-dealkylation, of MDMA. It is also a common adulterant of illicitly produced MDMA.
Uses
Medical
MDA currently has no accepted medical use.
Recreational
MDA is bought, sold, and used as a recreational drug due to its enhancement of mood and empathy. A recreational dose of MDA is sometimes cited as being between 100 and 160 mg. It produces MDMA-like effects, including entactogen and psychedelic effects.
Side effects
Side effects of MDA include sympathomimetic effects like increased heart rate and blood pressure as well as increased cortisol and prolactin levels.
Overdose
Symptoms of acute toxicity may include agitation, sweating, increased blood pressure and heart rate, dramatic increase in body temperature, convulsions, and death. Death is usually caused by cardiac effects and subsequent hemorrhaging in the brain (stroke).
Pharmacology
Pharmacodynamics
MDA is a substrate of the serotonin, norepinephrine, dopamine, and vesicular monoamine transporters, and in relation to this, acts as a reuptake inhibitor and releasing agent of serotonin, norepinephrine, and dopamine (that is, it is an SNDRA). It is also an agonist of the serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptors and shows affinity for the α2A-, α2B-, and α2C-adrenergic receptors and serotonin 5-HT1A and 5-HT7 receptors.
The (S)-optical isomer of MDA is more potent than the (R)-optical isomer as a psychostimulant, possessing greater affinity for the three monoamine transporters.
In terms of the subjective and behavioral effects of MDA, it is thought that serotonin release is required for its empathogenic effects, dopamine release is required for its euphoriant (rewarding and addictive) effects, dopamine and norepinephrine release is required for its psychostimulant effects, and direct agonism of the serotonin 5-HT2A receptor is required for its mild psychedelic effects.
In addition to its actions as a monoamine releasing agent, MDA is a potent high-efficacy partial agonist or full agonist of the rodent TAAR1. Conversely, MDA is much weaker in terms of potency as an agonist of the human TAAR1. Moreover, MDA acts as a very weak partial agonist or antagonist of the human TAAR1 rather than as an efficacious agonist. TAAR1 activation is thought to auto-inhibit and constrain the effects of amphetamines that act as TAAR1 agonists, for instance MDMA in rodents.
Pharmacokinetics
The pharmacokinetics of MDA have been studied. Its duration of action has been reported to be about 6 to 8 hours. The duration of MDA is longer than that of MDMA, about 8 hours for MDA versus 6 hours for MDMA. The elimination half-life of MDA is 10.9 hours. Differences in the duration of MDA versus MDMA may be due to pharmacodynamics rather than pharmacokinetics.
Chemistry
MDA is a substituted methylenedioxylated phenethylamine and amphetamine derivative. In relation to other phenethylamines and amphetamines, it is the 3,4-methylenedioxy, α-methyl derivative of β-phenylethylamine, the 3,4-methylenedioxy derivative of amphetamine, and the N-desmethyl derivative of MDMA.
Synonyms
In addition to 3,4-methylenedioxyamphetamine, MDA is also known by other chemical synonyms such as the following:
α-Methyl-3,4-methylenedioxy-β-phenylethylamine
1-(3,4-Methylenedioxyphenyl)-2-propanamine
1-(1,3-Benzodioxol-5-yl)-2-propanamine
Synthesis
MDA is typically synthesized from essential oils such as safrole or piperonal. Common approaches from these precursors include:
Reaction of safrole's alkene functional group with a halogen containing mineral acid followed by amine alkylation.
Wacker oxidation of safrole to yield 3,4-methylenedioxyphenylpropan-2-one (MDP2P) followed by reductive amination or via reduction of its oxime.
Henry reaction of piperonal with nitroethane followed by nitro compound reduction.
Darzens reaction on heliotropin was also done by J. Elks, et al. This gives MDP2P, which was then subjected to a Leuckart reaction.
The "two dogs" or "dopeboy" clandestine method, starting with helional as a precursor. An oxime is first formed using hydroxylamine; a Beckmann rearrangement with nickel acetate then gives the amide, a Hofmann rearrangement converts the amide to the freebase amine of MDA, and the product is purified by acid–base extraction.
Detection in body fluids
MDA may be quantitated in blood, plasma or urine to monitor for use, confirm a diagnosis of poisoning or assist in the forensic investigation of a traffic or other criminal violation or a sudden death. Some drug abuse screening programs rely on hair, saliva, or sweat as specimens. Most commercial amphetamine immunoassay screening tests cross-react significantly with MDA and major metabolites of MDMA, but chromatographic techniques can easily distinguish and separately measure each of these substances. The concentrations of MDA in the blood or urine of a person who has taken only MDMA are, in general, less than 10% those of the parent drug.
Derivatives
MDA constitutes part of the core structure of the β-adrenergic receptor agonist protokylol.
History
MDA was first synthesized by Carl Mannich and W. Jacobsohn in 1910. It was first ingested in July 1930 by Gordon Alles who later licensed the drug to Smith, Kline & French. MDA was first used in animal tests in 1939, and human trials began in 1941 in the exploration of possible therapies for Parkinson's disease. From 1949 to 1957, more than five hundred human subjects were given MDA in an investigation of its potential use as an antidepressant and/or anorectic by Smith, Kline & French. The United States Army also experimented with the drug, code named EA-1298, while working to develop a truth drug or incapacitating agent. Harold Blauer died in January 1953 after being intravenously injected, without his knowledge or consent, with 450 mg of the drug as part of Project MKUltra. MDA was patented as an ataractic by Smith, Kline & French in 1960, and as an anorectic under the trade name "Amphedoxamine" in 1961. MDA began to appear on the recreational drug scene around 1963 to 1964. It was then inexpensive and readily available as a research chemical from several scientific supply houses. Several researchers, including Claudio Naranjo and Richard Yensen, have explored MDA in the field of psychotherapy.
The International Nonproprietary Name (INN) tenamfetamine was recommended by the World Health Organization (WHO) in 1986. It was recommended in the same published list in which the INN of 2,5-dimethoxy-4-bromoamphetamine (DOB), brolamfetamine, was recommended. These events suggest that MDA and DOB were under development as potential pharmaceutical drugs at the time.
Society and culture
Name
When MDA was under development as a potential pharmaceutical drug, it was given the International Nonproprietary Name (INN) of tenamfetamine.
Legal status
Australia
MDA is schedule 9 prohibited substance under the Poisons Standards. A schedule 9 substance is listed as a "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of Commonwealth and/or State or Territory Health Authorities."
United States
MDA is a Schedule I controlled substance in the US.
Research
In 2010, the ability of MDA to invoke mystical experiences and alter vision in healthy volunteers was studied. The study concluded that MDA is a "potential tool to investigate mystical experiences and visual perception".
A 2019 double-blind study administered both MDA and MDMA to healthy volunteers. The study found that MDA shared many properties with MDMA including entactogenic and stimulant effects, but generally lasted longer and produced greater increases in psychedelic-like effects like complex imagery, synesthesia, and spiritual experiences.
Adverse effects
MDMA can produce serotonergic neurotoxic effects in rodents, which might in part be due to its transformation into MDA followed by subsequent metabolism. In addition, MDA activates a response of the neuroglia, though this subsides after use.
See also
MDMA
2,3-Methylenedioxyamphetamine
4,4'-Methylenedianiline
Malondialdehyde
References
External links
Erowid MDA Vault
MDA entry in PiHKAL
MDA entry in PiHKAL • info
5-HT2A agonists
5-HT2B agonists
5-HT2C agonists
Benzodioxoles
Entactogens and empathogens
Euphoriants
Human drug metabolites
Human pathological metabolites
Monoaminergic neurotoxins
Psychedelic drugs
Recreational drug metabolites
Serotonin-norepinephrine-dopamine releasing agents
Serotonin receptor agonists
Stimulants
Substituted amphetamines
TAAR1 agonists
TAAR1 antagonists
VMAT inhibitors | 3,4-Methylenedioxyamphetamine | Chemistry | 2,291 |
73,468,693 | https://en.wikipedia.org/wiki/K2-415 | K2-415 is an M5 red dwarf star located 72 light-years from Earth. K2-415 has a mass that is 16% of the mass of the Sun.
Planetary system
The star has one known planet orbiting it: K2-415b.
References
Cancer (constellation)
M-type main-sequence stars
Planetary systems with one confirmed planet
J09084885+1151411
5557
323687123 | K2-415 | Astronomy | 92 |
46,363,781 | https://en.wikipedia.org/wiki/Extract%2C%20load%2C%20transform | Extract, load, transform (ELT) is an alternative to extract, transform, load (ETL) used with data lake implementations. In contrast to ETL, in ELT models the data is not transformed on entry to the data lake, but stored in its original raw format. This enables faster loading times. However, ELT requires sufficient processing power within the data processing engine to carry out the transformation on demand, to return the results in a timely manner. Since the data is not processed on entry to the data lake, the query and schema do not need to be defined a priori (although often the schema will be available during load since many data sources are extracts from databases or similar structured data systems and hence have an associated schema). ELT is a data pipeline model.
Benefits
Some of the benefits of an ELT process include speed and the ability to handle both structured and unstructured data.
Cloud data lake components
Common storage options
AWS
Simple Storage Service (S3)
Amazon RDS
Azure
Azure Blob Storage
GCP
Google Storage (GCS)
Querying
AWS
Redshift Spectrum
Athena
EMR (Presto)
Azure
Azure Data Lake
GCP
BigQuery
References
External links
Dull, Tamara, "The Data Lake Debate: Pro is Up First", smartdatacollective.com, March 20, 2015.
ELT: Extract, Load, and Transform A Complete Guide | Astera Software
Data warehousing | Extract, load, transform | Technology | 299 |
3,409,584 | https://en.wikipedia.org/wiki/XMDR | The Extended Metadata Registry (XMDR) is a project proposing and testing a set of extensions to the ISO/IEC 11179 metadata registry specifications that deal with the development of improved standards and technology for storing and retrieving the semantics of data elements, terminologies, and concept structures in metadata registries.
External links
XMDR web site
See also
metadata
Metadata registry
ISO/IEC 11179
XML
Metadata registry | XMDR | Technology | 87 |
187,916 | https://en.wikipedia.org/wiki/Man%20in%20the%20Moon | In many cultures, several pareidolic images of a human face, head or body are recognized in the disc of the full moon; they are generally known as the Man in the Moon. The images are based on the appearance of the dark areas (known as lunar maria) and the lighter-colored highlands (and some lowlands) of the lunar surface.
Origin
There are various explanations for how the Man in the Moon came to be.
A longstanding European tradition holds that the man was banished to the Moon for some crime. Jewish lore says that the image of Jacob is engraved on the Moon. Another held that he is the man caught gathering sticks on the Sabbath and sentenced by God to death by stoning in the Book of Numbers XV.32–36. Some Germanic cultures thought he was a woodcutter found working on the Sabbath. There is a Roman legend that he is a sheep-thief.
One medieval Christian tradition claims that he is Cain, the Wanderer, forever doomed to circle the Earth. Dante's Inferno alludes to this:
For now doth Cain with fork of thorns confine
On either hemisphere, touching the wave
Beneath the towers of Seville. Yesternight
The moon was round.
This is mentioned again in his Paradise:
But tell, I pray thee, whence the gloomy spots
Upon this body, which below on earth
Give rise to talk of Cain in fabling quaint?
John Lyly says in the prologue to his Endymion (1591), "There liveth none under the sunne, that knows what to make of the man in the moone."
In Norse mythology, Máni is the male personification of the Moon who crosses the sky in a horse-drawn carriage. He is continually pursued by the Great Wolf Hati who catches him at Ragnarök. Máni simply means "Moon".
In Chinese mythology, the goddess Chang'e is stranded upon the Moon after consuming a double dose of an immortality potion. In some versions of the myth, she is accompanied by Yu Tu, a Moon rabbit. Another mythology tells the story of Wu Gang, a man on the Moon who is trying to cut down a tree that always regrows.
In Haida mythology, the figure represents a boy gathering sticks. The boy's father had told him the Moon's light would brighten the night, allowing the chore to be completed. Not wanting to gather sticks, the boy complained and ridiculed the Moon. As punishment for his disrespect, the boy was taken from Earth and trapped on the Moon.
In Japanese mythology, it is said that a tribe of human-like spiritual beings live on the Moon. This is especially explored in The Tale of the Bamboo Cutter.
In Vietnamese mythology, the Man in the Moon is named Cuội. He was originally a woodcutter on Earth who owned a magical banyan. One day, when his wife ignorantly watered the tree with unclean water and caused it to uproot itself to fly away, Cuội grabbed its roots and was taken to the Moon. There, he eternally accompanied the Moon Lady and the Jade Rabbit. The trio has become the personifications of the Tết Trung Thu, when they descend to the mortal world and give out cellophane lanterns, mooncakes and gifts to children.
In Latvian legends, two maidens went naked from the sauna with carrying poles to the well. While collecting water, one of the women noted how beautiful the moon is. The other was unimpressed, saying her bottom was prettier and proceeded to moon the moon. As a punishment, either Dievs or Mēness (Moon deity) put the woman along with a carrying pole on the moon, with her bottom now visible to everyone.
Traditions
There is a traditional European belief that the Man in the Moon enjoyed drinking, especially claret. An old ballad runs (original spelling):
Our man in the moon drinks clarret,
With powder-beef, turnep, and carret.
If he doth so, why should not you
Drink until the sky looks blew?
In the English Middle Ages and Renaissance, the Moon was held to be the god of drunkards, and at least three London taverns were named "The Man in the Moone".
The man in the Moon is named in an early dated English nursery rhyme:
The man in the moon came tumbling down
And asked his way to Norwich;
He went by the south and burnt his mouth
With supping cold pease porridge.
Examples and occurrence globally
One tradition sees a figure of a man carrying a wide burden on his back. He is sometimes seen as accompanied by a small dog. Various cultures recognise other examples of lunar pareidolia, such as the Moon rabbit.
In the Northern Hemisphere, a common Western perception of the face has it that the figure's eyes are Mare Imbrium and Mare Serenitatis, its nose is Sinus Aestuum, and its open mouth is Mare Nubium and Mare Cognitum. This particular human face can also be seen in tropical regions on both sides of the equator. However, the Moon orientation associated with the face is observed less frequently—and eventually not at all—as one moves toward the South Pole.
Conventionalized illustrations of the Man in the Moon seen in Western art often show a very simple face in the full moon, or a human profile in the crescent moon, corresponding to no actual markings. Some depict a man with a face turned away from the viewer on the ground, for example when viewed from North America, with Jesus Christ's crown shown as the lighter ring around Mare Imbrium. Another common one is a cowled Death's head looking down at Earth, with the black lava rock 'hood' around the white dust bone of the skull, and also forming the eye sockets.
"The Man in the Moon" can also refer to a mythological character said to live on or in the Moon, but who is not necessarily represented by the markings on the face of the Moon. An example is Yue-Laou, from Chinese tradition; another is Aiken Drum from Scotland.
The Man in the Moone by Francis Godwin, published in 1638, is one of the earliest novels thought of as containing several traits prototypical of science fiction.
Scientific explanation
The Man in the Moon is made up of various lunar maria (which ones depend on the pareidolic image seen). These vast, flat spots on the Moon are called "maria" or "seas" because, for a long time, astronomers believed they were large bodies of water. They are large areas formed by lava that covered up old craters and then cooled, becoming smooth, basalt rock.
The near side of the Moon, with the maria that make up the man, always faces Earth due to tidal locking, or synchronous orbit. This locking is thought to have occurred because of gravitational forces acting in part on the Moon's oblong shape; the Moon's rotation has slowed to the point where it rotates exactly once on each trip around the Earth, which causes the same side of the Moon to always face toward Earth.
Gallery
See also
The Moon is made of green cheese
Moon rabbit
References
Further reading
External links
Man in the Moon lore
Moon Illusions
The Man in the Moon and other weird things
The Man in the Moon
Moon in culture
Moon myths
Mythological characters
Pareidolia | Man in the Moon | Astronomy | 1,507 |
77,739,301 | https://en.wikipedia.org/wiki/Cipher%20device | A cipher device was a term used by the US military in the first half of the 20th century to describe a manually operated cipher equipment that converted the plaintext into ciphertext or vice versa. A similar term, cipher machine, was used to describe the cipher equipment that required external power for operation. Cipher box or crypto box is a physical cryptographic device used to encrypt and decrypt messages between plaintext (unencrypted) and ciphertext (encrypted or secret) forms. The ciphertext is suitable for transmission over a channel, such as radio, that might be observed by an adversary the communicating parties wish to conceal the plaintext from.
See also
Cryptography
References
Sources
Cryptography | Cipher device | Mathematics,Engineering | 148 |
816,012 | https://en.wikipedia.org/wiki/Psychoneuroimmunology | Psychoneuroimmunology (PNI), also referred to as psychoendoneuroimmunology (PENI) or psychoneuroendocrinoimmunology (PNEI), is the study of the interaction between psychological processes and the nervous and immune systems of the human body. It is a subfield of psychosomatic medicine. PNI takes an interdisciplinary approach, incorporating psychology, neuroscience, immunology, physiology, genetics, pharmacology, molecular biology, psychiatry, behavioral medicine, infectious diseases, endocrinology, and rheumatology.
The main interests of PNI are the interactions between the nervous and immune systems and the relationships between mental processes and health. PNI studies, among other things, the physiological functioning of the neuroimmune system in health and disease; disorders of the neuroimmune system (autoimmune diseases; hypersensitivities; immune deficiency); and the physical, chemical and physiological characteristics of the components of the neuroimmune system in vitro, in situ, and in vivo.
History
Interest in the relationship between psychiatric syndromes or symptoms and immune function has been a consistent theme since the beginning of modern medicine.
Claude Bernard, a French physiologist of the Muséum national d'Histoire naturelle (National Museum of Natural History in English), formulated the concept of the milieu interieur in the mid-1800s. In 1865, Bernard described the perturbation of this internal state: "... there are protective functions of organic elements holding living materials in reserve and maintaining without interruption humidity, heat and other conditions indispensable to vital activity. Sickness and death are only a dislocation or perturbation of that mechanism" (Bernard, 1865). Walter Cannon, a professor of physiology at Harvard University coined the commonly used term, homeostasis, in his book The Wisdom of the Body, 1932, from the Greek word homoios, meaning similar, and stasis, meaning position. In his work with animals, Cannon observed that any change of emotional state in the beast, such as anxiety, distress, or rage, was accompanied by total cessation of movements of the stomach (Bodily Changes in Pain, Hunger, Fear and Rage, 1915). These studies looked into the relationship between the effects of emotions and perceptions on the autonomic nervous system, namely the sympathetic and parasympathetic responses that initiated the recognition of the freeze, fight or flight response. His findings were published from time to time in professional journals, then summed up in book form in The Mechanical Factors of Digestion, published in 1911.
Hans Selye, a student at Johns Hopkins University and McGill University and a researcher at the Université de Montréal, experimented with animals by putting them under different adverse physical and mental conditions and noted that under these difficult conditions the body consistently adapted to heal and recover. Several years of such experimentation formed the empirical foundation of Selye's concept of the General Adaptation Syndrome. This syndrome consists of an enlargement of the adrenal gland; atrophy of the thymus, spleen, and other lymphoid tissue; and gastric ulcerations.
Selye describes three stages of adaptation, including an initial brief alarm reaction, followed by a prolonged period of resistance, and a terminal stage of exhaustion and death. This foundational work led to a rich line of research on the biological functioning of glucocorticoids.
Mid-20th century studies of psychiatric patients reported immune alterations in psychotic individuals, including lower numbers of lymphocytes and poorer antibody response to pertussis vaccination, compared with nonpsychiatric control subjects. In 1964, George F. Solomon, from the University of California in Los Angeles, and his research team coined the term "psychoimmunology" and published a landmark paper: "Emotions, immunity, and disease: a speculative theoretical integration."
Origins
In 1975, Robert Ader and Nicholas Cohen, at the University of Rochester, advanced PNI with their demonstration of classic conditioning of immune function, and they subsequently coined the term "psychoneuroimmunology". Ader was investigating how long conditioned responses (in the sense of Pavlov's conditioning of dogs to drool when they heard a bell ring) might last in laboratory rats. To condition the rats, he used a combination of saccharin-laced water (the conditioned stimulus) and the drug Cytoxan, which unconditionally induces nausea and taste aversion and suppression of immune function. Ader was surprised to discover that after conditioning, just feeding the rats saccharin-laced water was associated with the death of some animals and he proposed that they had been immunosuppressed after receiving the conditioned stimulus. Ader (a psychologist) and Cohen (an immunologist) directly tested this hypothesis by deliberately immunizing conditioned and unconditioned animals, exposing these and other control groups to the conditioned taste stimulus, and then measuring the amount of antibody produced. The highly reproducible results revealed that conditioned rats exposed to the conditioned stimulus were indeed immunosuppressed. In other words, a signal via the nervous system (taste) was affecting immune function. This was one of the first scientific experiments that demonstrated that the nervous system can affect the immune system.
In the 1970s, Hugo Besedovsky, Adriana del Rey and Ernst Sorkin, working in Switzerland, reported multi-directional immune-neuro-endocrine interactions, showing that not only can the brain influence immune processes, but the immune response itself can also affect the brain and neuroendocrine mechanisms. They found that immune responses to innocuous antigens trigger an increase in the activity of hypothalamic neurons, as well as hormonal and autonomic nerve responses that are relevant for immunoregulation and are integrated at the level of the brain. On these bases, they proposed that the immune system acts as a sensorial receptor organ that, besides its peripheral effects, can communicate its state of activity to the brain and associated neuro-endocrine structures. These investigators also identified products from immune cells, later characterized as cytokines, that mediate this immune-brain communication.
In 1981, David L. Felten, then working at the Indiana University School of Medicine, and his colleague JM Williams, discovered a network of nerves leading to blood vessels as well as cells of the immune system. The researchers also found nerves in the thymus and spleen terminating near clusters of lymphocytes, macrophages, and mast cells, all of which help control immune function. This discovery provided one of the first indications of how neuro-immune interaction occurs.
Ader, Cohen, and Felten went on to edit the groundbreaking book Psychoneuroimmunology in 1981, which laid out the underlying premise that the brain and immune system represent a single, integrated system of defense.
In 1985, research by neuropharmacologist Candace Pert, of the National Institutes of Health at Georgetown University, revealed that neuropeptide-specific receptors are present on the cell walls of both the brain and the immune system. The discovery that neuropeptides and neurotransmitters act directly upon the immune system shows their close association with emotions and suggests mechanisms through which emotions, from the limbic system, and immunology are deeply interdependent. Showing that the immune and endocrine systems are modulated not only by the brain but also by the central nervous system itself affected the understanding of emotions, as well as disease.
Contemporary advances in psychiatry, immunology, neurology, and other integrated disciplines of medicine have fostered enormous growth for PNI. The mechanisms underlying behaviorally induced alterations of immune function, and immune alterations inducing behavioral changes, are likely to have clinical and therapeutic implications that will not be fully appreciated until more is known about the extent of these interrelationships in normal and pathophysiological states.
The immune-brain loop
PNI research looks for the exact mechanisms by which specific neuroimmune effects are achieved. Evidence for nervous-immunological interactions exists at multiple biological levels.
The immune system and the brain communicate through signaling pathways. The brain and the immune system are the two major adaptive systems of the body. Two major pathways are involved in this cross-talk: the hypothalamic-pituitary-adrenal axis (HPA axis), and the sympathetic nervous system (SNS), via the sympathetic-adrenal-medullary axis (SAM axis). The activation of the SNS during an immune response may serve to localize the inflammatory response.
The body's primary stress management system is the HPA axis. The HPA axis responds to physical and mental challenge to maintain homeostasis in part by controlling the body's cortisol level. Dysregulation of the HPA axis is implicated in numerous stress-related diseases, with evidence from meta-analyses indicating that different types/duration of stressors and unique personal variables can shape the HPA response. HPA axis activity and cytokines are intrinsically intertwined: inflammatory cytokines stimulate adrenocorticotropic hormone (ACTH) and cortisol secretion, while, in turn, glucocorticoids suppress the synthesis of proinflammatory cytokines.
Molecules called pro-inflammatory cytokines, which include interleukin-1 (IL-1), interleukin-2 (IL-2), interleukin-6 (IL-6), interleukin-12 (IL-12), interferon-gamma (IFN-gamma) and tumor necrosis factor alpha (TNF-alpha), can affect brain growth as well as neuronal function. Circulating immune cells such as macrophages, as well as glial cells (microglia and astrocytes), secrete these molecules. Cytokine regulation of hypothalamic function is an active area of research for the treatment of anxiety-related disorders.
Cytokines mediate and control immune and inflammatory responses. Complex interactions exist between cytokines, inflammation and the adaptive responses in maintaining homeostasis. Like the stress response, the inflammatory reaction is crucial for survival. Systemic inflammatory reaction results in stimulation of four major programs:
the acute-phase reaction
sickness behavior
the pain program
the stress response
These are mediated by the HPA axis and the SNS. Common human diseases such as allergy, autoimmunity, chronic infections and sepsis are characterized by a dysregulation of the pro-inflammatory versus anti-inflammatory and T helper 1 (Th1) versus T helper 2 (Th2) cytokine balance.
Recent studies show that pro-inflammatory cytokine processes take place during depression, mania and bipolar disorder, in addition to autoimmune hypersensitivity and chronic infections.
Chronic secretion of stress hormones, glucocorticoids (GCs) and catecholamines (CAs), as a result of disease, may reduce the effect of neurotransmitters, including serotonin, norepinephrine and dopamine, or other receptors in the brain, thereby leading to the dysregulation of neurohormones. Under stimulation, norepinephrine is released from the sympathetic nerve terminals in organs, and the target immune cells express adrenoreceptors. Through stimulation of these receptors, locally released norepinephrine, or circulating catecholamines such as epinephrine, affect lymphocyte traffic, circulation, and proliferation, and modulate cytokine production and the functional activity of different lymphoid cells.
Glucocorticoids also inhibit the further secretion of corticotropin-releasing hormone from the hypothalamus and ACTH from the pituitary (negative feedback). Under certain conditions stress hormones may facilitate inflammation through induction of signaling pathways and through activation of the corticotropin-releasing hormone.
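The negative-feedback loop just described (CRH drives ACTH, ACTH drives cortisol, and cortisol inhibits further CRH/ACTH release) can be illustrated with a toy discrete-time model; all rate constants here are illustrative assumptions, not physiological values:

```python
# Toy discrete-time model of the HPA-axis negative-feedback loop described
# in the text. All rate constants are illustrative assumptions, not
# physiological values.

def simulate_hpa(steps=200, stress=1.0, feedback=0.8):
    """Return the cortisol trace for a constant stress input."""
    crh = acth = cortisol = 0.0
    trace = []
    for _ in range(steps):
        # Stress drives CRH release; circulating cortisol suppresses it.
        crh += 0.1 * (stress - feedback * cortisol - crh)
        acth += 0.1 * (crh - acth)            # CRH stimulates ACTH
        cortisol += 0.1 * (acth - cortisol)   # ACTH stimulates cortisol
        trace.append(cortisol)
    return trace

trace = simulate_hpa()
# With feedback, cortisol settles near stress / (1 + feedback) ~= 0.56;
# with feedback = 0 it would climb all the way to the stress level.
print(round(trace[-1], 2))
```

The qualitative point of the model matches the text: without the cortisol-to-hypothalamus/pituitary feedback term, the simulated hormone level keeps rising toward the full stress drive instead of settling at a lower regulated value.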
These abnormalities, and the failure of the adaptive systems to resolve inflammation, affect the well-being of the individual, including behavioral parameters, quality of life and sleep, as well as indices of metabolic and cardiovascular health. They can develop into a "systemic anti-inflammatory feedback" and/or "hyperactivity" of local pro-inflammatory factors, which may contribute to the pathogenesis of disease.
This systemic or neuro-inflammation and neuroimmune activation have been shown to play a role in the etiology of a variety of neurodegenerative disorders such as Parkinson's and Alzheimer's disease, multiple sclerosis, pain, and AIDS-associated dementia. However, cytokines and chemokines also modulate central nervous system (CNS) function in the absence of overt immunological, physiological, or psychological challenges.
Psychoneuroimmunological effects
There are now sufficient data to conclude that immune modulation by psychosocial stressors and/or interventions can lead to actual health changes. Although changes related to infectious disease and wound healing have provided the strongest evidence to date, the clinical importance of immunological dysregulation is highlighted by increased risks across diverse conditions and diseases. For example, stressors can produce profound health consequences. In one epidemiological study, all-cause mortality increased in the month following a severe stressor – the death of a spouse. Theorists propose that stressful events trigger cognitive and affective responses which, in turn, induce sympathetic nervous system and endocrine changes, and these ultimately impair immune function. Potential health consequences are broad, but include rates of infection, HIV progression, cancer incidence and progression, and high rates of infant mortality.
Understanding stress and immune function
Stress is thought to affect immune function through emotional and/or behavioral manifestations such as anxiety, fear, tension, anger and sadness, and through physiological changes such as changes in heart rate, blood pressure, and sweating. Researchers have suggested that these changes are beneficial if they are of limited duration, but when stress is chronic, the system is unable to maintain equilibrium or homeostasis: the body remains in a state of arousal, in which digestion is slower to reactivate or does not reactivate properly, often resulting in indigestion, and blood pressure stays at higher levels.
In one of the earlier PNI studies, which was published in 1960, subjects were led to believe that they had accidentally caused serious injury to a companion through misuse of explosives. Since then decades of research resulted in two large meta-analyses, which showed consistent immune dysregulation in healthy people who are experiencing stress.
In the first meta-analysis by Herbert and Cohen in 1993, they examined 38 studies of stressful events and immune function in healthy adults. They included studies of acute laboratory stressors (e.g. a speech task), short-term naturalistic stressors (e.g. medical examinations), and long-term naturalistic stressors (e.g. divorce, bereavement, caregiving, unemployment). They found consistent stress-related increases in numbers of total white blood cells, as well as decreases in the numbers of helper T cells, suppressor T cells, and cytotoxic T cells, B cells, and natural killer (NK) cells. They also reported stress-related decreases in NK and T cell function, and T cell proliferative responses to phytohaemagglutinin (PHA) and concanavalin A (Con A). These effects were consistent for short-term and long-term naturalistic stressors, but not laboratory stressors.
In the second meta-analysis by Zorrilla et al. in 2001, they replicated Herbert and Cohen's meta-analysis. Using the same study selection procedures, they analyzed 75 studies of stressors and human immunity. Naturalistic stressors were associated with increases in the number of circulating neutrophils, decreases in the number and percentages of total T cells and helper T cells, and decreases in percentages of natural killer (NK) cells and cytotoxic T lymphocytes. They also replicated Herbert and Cohen's finding of stress-related decreases in NK cell cytotoxicity (NKCC) and T cell mitogen proliferation to phytohaemagglutinin (PHA) and concanavalin A (Con A).
In a study reported by the American Psychological Association, researchers applied electrical shocks to rats and observed that interleukin-1 was released directly into the brain. Interleukin-1 is the same cytokine released when a macrophage ingests a bacterium; the signal then travels up the vagus nerve, creating a state of heightened immune activity and behavioral changes.
More recently, there has been increasing interest in the links between interpersonal stressors and immune function. For example, marital conflict, loneliness, caring for a person with a chronic medical condition, and other forms of interpersonal stress dysregulate immune function.
Communication between the brain and immune system
Stimulation of brain sites alters immunity (stressed animals have altered immune systems).
Damage to brain hemispheres alters immunity (hemispheric lateralization effects).
Immune cells produce cytokines that act on the CNS.
Immune cells respond to signals from the CNS.
Communication between neuroendocrine and immune system
Glucocorticoids and catecholamines influence immune cells.
The hypothalamic-pituitary-adrenal (HPA) axis releases the hormones needed to support the immune system.
Activity of the immune system is correlated with neurochemical/neuroendocrine activity of brain cells.
Connections between glucocorticoids and immune system
Anti-inflammatory hormones that enhance the organism's response to a stressor.
Prevent the overreaction of the body's own defense system.
Overactivation of glucocorticoid receptors can lead to health risks.
Regulators of the immune system.
Affect cell growth, proliferation and differentiation.
Cause immunosuppression, which can lead to an extended amount of time spent fighting off infections.
High basal levels of cortisol are associated with a higher risk of infection.
Suppress cell adhesion, antigen presentation, chemotaxis and cytotoxicity.
Increase apoptosis.
Corticotropin-releasing hormone (CRH)
Release of corticotropin-releasing hormone (CRH) from the hypothalamus is influenced by stress.
CRH is a major regulator of the HPA axis/stress axis.
CRH regulates the secretion of adrenocorticotropic hormone (ACTH).
CRH is widely distributed in the brain and periphery.
CRH also regulates the actions of the autonomic nervous system (ANS) and the immune system.
Furthermore, stressors that enhance the release of CRH suppress the function of the immune system; conversely, stressors that depress CRH release potentiate immunity.
This effect is centrally mediated, since peripheral administration of a CRH antagonist does not affect the immunosuppression.
The HPA axis/stress axis responds consistently to stressors that are new, unpredictable, and that offer low perceived control.
As cortisol reaches an appropriate level in response to the stressor, it downregulates the activity of the hippocampus, hypothalamus, and pituitary gland, which results in less production of cortisol.
Relationships between prefrontal cortex activation and cellular senescence
Psychological stress is regulated by the prefrontal cortex (PFC)
The PFC modulates vagal activity
Prefrontally modulated and vagally mediated cholinergic input to the spleen reduces inflammatory responses
Pharmaceutical advances
Glutamate agonists, cytokine inhibitors, vanilloid-receptor agonists, catecholamine modulators, ion-channel blockers, anticonvulsants, GABA agonists (including opioids and cannabinoids), COX inhibitors, acetylcholine modulators, melatonin analogs (such as ramelteon), adenosine receptor antagonists and several miscellaneous drugs (including biologics like Passiflora edulis) are being studied for their psychoneuroimmunological effects.
For example, SSRIs, SNRIs and tricyclic antidepressants acting on serotonin, norepinephrine, dopamine and cannabinoid receptors have been shown to be immunomodulatory and anti-inflammatory against pro-inflammatory cytokine processes, specifically on the regulation of IFN-gamma and IL-10, as well as TNF-alpha and IL-6 through a psychoneuroimmunological process. Antidepressants have also been shown to suppress TH1 upregulation.
Tricyclics and dual serotonergic-noradrenergic reuptake inhibition by SNRIs (or SSRI-NRI combinations) have also shown additional analgesic properties. According to recent evidence, antidepressants also seem to exert beneficial effects in experimental autoimmune neuritis in rats by decreasing interferon-beta (IFN-beta) release, and to augment NK activity in depressed patients.
These studies suggest that antidepressants warrant investigation for use in both psychiatric and non-psychiatric illness, and that a psychoneuroimmunological approach may be required for optimal pharmacotherapy in many diseases. Future antidepressants may be made to specifically target the immune system by either blocking the actions of pro-inflammatory cytokines or increasing the production of anti-inflammatory cytokines.
The endocannabinoid system appears to play a significant role in the mechanism of action of clinically effective and potential antidepressants and may serve as a target for drug design and discovery. The endocannabinoid-induced modulation of stress-related behaviors appears to be mediated, at least in part, through the regulation of the serotoninergic system, by which cannabinoid CB1 receptors modulate the excitability of dorsal raphe serotonin neurons. Data suggest that the endocannabinoid system in cortical and subcortical structures is differentially altered in an animal model of depression and that the effects of chronic, unpredictable stress (CUS) on CB1 receptor binding site density are attenuated by antidepressant treatment while those on endocannabinoid content are not.
The increase in amygdalar CB1 receptor binding following imipramine treatment is consistent with prior studies which collectively demonstrate that several treatments which are beneficial to depression, such as electroconvulsive shock and tricyclic antidepressant treatment, increase CB1 receptor activity in subcortical limbic structures, such as the hippocampus, amygdala and hypothalamus. Preclinical studies have demonstrated that the CB1 receptor is required for the behavioral effects of noradrenergic-based antidepressants but is dispensable for the behavioral effects of serotonergic-based antidepressants.
Extrapolating from the observations that positive emotional experiences boost the immune system, Roberts speculates that intensely positive emotional experiences—sometimes brought about during mystical experiences occasioned by psychedelic medicines—may boost the immune system powerfully. Research on salivary IgA supports this hypothesis, but experimental testing has not been done.
See also
Branches of medicine
Biological psychiatry
Psychoneuroendocrinology
Neuroanatomy
Neurobiology
Neurochemistry
Neurophysics
Neuroanatomy
Locus ceruleus
Pedunculopontine nucleus
Raphe nucleus
Reticular activating system
Suprachiasmatic nucleus
Related topics
Allostatic load
Fight-or-flight response
Healing environments
Immuno-psychiatry
Neural top down control of physiology
PANDAS
Post-traumatic stress disorder
Cholinergic anti-inflammatory pathway
Ecoimmunology
References
Further reading
Berczi and Szentivanyi (2003). NeuroImmune Biology. Elsevier. (Written for the highly technical reader.)
Goodkin, Karl, and Adriaan P. Visser (eds). Psychoneuroimmunology: Stress, Mental Disorders, and Health. American Psychiatric Press, 2000. (Technical.)
Maqueda, A. "Psychosomatic Medicine, Psychoneuroimmunology and Psychedelics". Multidisciplinary Association for Psychedelic Studies, Vol. XXI, No. 1.
Ransohoff, Richard, et al. (eds). Universes in Delicate Balance: Chemokines and the Nervous System. Elsevier, 2002.
Ader, Robert, David L. Felten, and Nicholas Cohen. Psychoneuroimmunology. 4th edition, 2 volumes. Academic Press, 2006.
Hafner, Mateja and Ihan, Alojz (2014). AWAKENING: Psyche in search of the lost Eros - psychoneuroimmunology. Alpha Center d.o.o., Institute for Preventive Medicine.
External links
Psychoneuroimmunology Research Society
Home page of Robert Ader - University of Rochester
Cousins Center for Psychoneuroimmunology
Biochemical Aspects of Anxiety
Peruvian Institute of Psychoneuroimmunology
Institute for preventive medicine, Ljubljana, Slovenia
Health realization
Ionic transfer
Ionic transfer is the transfer of ions from one liquid phase to another. This is related to phase transfer catalysts, which are a special type of liquid-liquid extraction used in synthetic chemistry.
For instance, nitrate anions can be transferred between water and nitrobenzene. One way to observe this is to use a cyclic voltammetry experiment in which the liquid-liquid interface is the working electrode. This can be done by placing a secondary electrode in each phase and, close to the interface, a reference electrode in each phase. One phase is attached to a potentiostat which is set to zero volts, while the other potentiostat is driven with a triangular wave. This experiment is known as a polarised interface between two immiscible electrolyte solutions (ITIES) experiment.
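As a sketch of the driving signal described above, the triangular potential wave applied by the second potentiostat might be generated like this (the potential window, scan rate, and sampling interval are illustrative assumptions, not values from the text):

```python
# Generate the triangular potential sweep used to drive one phase in a
# cyclic voltammetry experiment at a liquid-liquid (ITIES) interface.
# All numeric parameters below are illustrative assumptions.

def triangular_sweep(v_min=-0.4, v_max=0.4, scan_rate=0.05, dt=0.1, cycles=2):
    """Return (time, potential) samples in seconds and volts."""
    n = round((v_max - v_min) / (scan_rate * dt))  # samples per half-cycle
    step = (v_max - v_min) / n
    points = []
    t = 0.0
    for _ in range(cycles):
        for k in range(n):                 # forward (positive-going) sweep
            points.append((t, v_min + k * step))
            t += dt
        for k in range(n):                 # reverse (negative-going) sweep
            points.append((t, v_max - k * step))
            t += dt
    return points

wave = triangular_sweep()
print(len(wave))  # → 640 samples: 2 cycles x 2 half-cycles x 160 samples
```

In a real instrument the potentiostat sweeps continuously; the sampled form above is simply a convenient way to visualize or simulate the applied waveform.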
See also
Diffusion potential
References
Polyphosphazene
Polyphosphazenes include a wide range of hybrid inorganic-organic polymers with a number of different skeletal architectures with the backbone P-N-P-N-P-N-. In nearly all of these materials two organic side groups are attached to each phosphorus center. Linear polymers have the formula (N=PR1R2)n, where R1 and R2 are organic (see graphic). Other architectures are cyclolinear and cyclomatrix polymers in which small phosphazene rings are connected together by organic chain units. Other architectures are available, such as block copolymer, star, dendritic, or comb-type structures. More than 700 different polyphosphazenes are known, with different side groups (R) and different molecular architectures. Many of these polymers were first synthesized and studied in the research group of Harry R. Allcock.
Synthesis
The method of synthesis depends on the type of polyphosphazene. The most widely used method for linear polymers is based on a two-step process. In the first step, hexachlorocyclotriphosphazene, (NPCl2)3, is heated in a sealed system at 250 °C to convert it to a long-chain linear polymer with typically 15,000 or more repeating units. In the second step the chlorine atoms linked to phosphorus in the polymer are replaced by organic groups through reactions with alkoxides, aryloxides, amines or organometallic reagents. Because many different reagents can participate in this macromolecular substitution reaction, and because two or more reagents may be used, a large number of different polymers can be produced. Variations to this process are possible using poly(dichlorophosphazene) made by condensation reactions.
Another synthetic process uses Cl3PNSiMe3 as a precursor:
n Cl3PNSiMe3 → [Cl2PN]n + n ClSiMe3
Because the process is a living cationic polymerization, block copolymers or comb, star, or dendritic architectures are possible. Other synthetic methods include the condensation reactions of organic-substituted phosphoranimines.
Cyclomatrix type polymers made by linking small molecule phosphazene rings together employ difunctional organic reagents to replace the chlorine atoms in (NPCl2)3, or the introduction of allyl or vinyl substituents, which are then polymerized by free-radical methods. Such polymers may be useful as coatings or thermosetting resins, often prized for their thermal stability.
Properties and uses
The linear high polymers have the geometry shown in the picture. More than 700 different macromolecules of this type are known, corresponding to different side groups or combinations of different side groups. In these polymers the properties are defined by the high flexibility of the backbone. Other potentially attractive properties include radiation resistance, high refractive index, ultraviolet and visible transparency, and fire resistance. The side groups exert an equal or even greater influence on the properties, since they impart properties such as hydrophobicity, hydrophilicity, color, useful biological properties such as bioerodibility, or ion transport properties to the polymers. Representative examples of these polymers are shown below.
Thermoplastics
The first stable thermoplastic poly(organophosphazenes), isolated in the mid 1960s by Allcock, Kugel, and Valan, were macromolecules with trifluoroethoxy, phenoxy, methoxy, ethoxy, or various amino side groups. Of these early species, poly[bis(trifluoroethoxy)phosphazene], [NP(OCH2CF3)2]n, has proved to be the subject of intense research due to its crystallinity, high hydrophobicity, biological compatibility, fire resistance, general radiation stability, and ease of fabrication into films, microfibers and nanofibers. It has also been a substrate for various surface reactions to immobilize biological agents. The polymers with phenoxy or amino side groups have also been studied in detail.
Phosphazene elastomers
The first large-scale commercial uses for linear polyphosphazenes were in the field of high technology elastomers, with a typical example containing a combination of trifluoroethoxy and longer chain fluoroalkoxy groups. The mixture of two different side groups eliminates the crystallinity found in single-substituent polymers and allows the inherent flexibility and elasticity to become manifest. Glass transition temperatures as low as -60 °C are attainable, and properties such as oil-resistance and hydrophobicity are responsible for their utility in land vehicles and aerospace components. They have also been used in biostable biomedical devices.
Other side groups, such as non-fluorinated alkoxy or oligo-alkyl ether units, yield hydrophilic or hydrophobic elastomers with glass transitions over a broad range from -100 °C to 100 °C. Polymers with two different aryloxy side groups have also been developed as elastomers for fire-resistance as well as thermal and sound insulation applications.
Polymer electrolytes
Linear polyphosphazenes with oligo-ethyleneoxy side chains are gums that are good solvents for salts such as lithium triflate. These solutions function as electrolytes for lithium ion transport, and they were incorporated into fire-resistant rechargeable lithium-ion polymer battery. The same polymers are also of interest as the electrolyte in dye-sensitized solar cells. Other polyphosphazenes with sulfonated aryloxy side groups are proton conductors of interest for use in the membranes of proton exchange membrane fuel cells.
Hydrogels
Water-soluble poly(organophosphazenes) with oligo-ethyleneoxy side chains can be cross-linked by gamma-radiation. The cross-linked polymers absorb water to form hydrogels, which are responsive to temperature changes, expanding to a limit defined by the cross-link density below a critical solution temperature, but contracting above that temperature. This is the basis of controlled permeability membranes. Other polymers with both oligo-ethyleneoxy and carboxyphenoxy side groups expand in the presence of monovalent cations but contract in the presence of di- or tri-valent cations, which form ionic cross-links. Phosphazene hydrogels have been utilized for controlled drug release and other medical applications.
Bioerodible polyphosphazenes
The ease with which properties can be controlled and fine-tuned by the linkage of different side groups to polyphosphazene chains has prompted major efforts to address biomedical materials challenges using these polymers. Different polymers have been studied as macromolecular drug carriers, as membranes for the controlled delivery of drugs, as biostable elastomers, and especially as tailored bioerodible materials for the regeneration of living bone. An advantage for this last application is that poly(dichlorophosphazene) reacts with amino acid ethyl esters (such as ethyl glycinate or the corresponding ethyl esters of numerous other amino acids) through the amino terminus to form polyphosphazenes with amino acid ester side groups. These polymers hydrolyze slowly to a near-neutral, pH-buffered solution of the amino acid, ethanol, phosphate, and ammonium ion. The speed of hydrolysis depends on the amino acid ester, with half-lives that vary from weeks to months depending on the structure of the amino acid ester. Nanofibers and porous constructs of these polymers assist osteoblast replication and accelerate the repair of bone in animal model studies.
Commercial aspects
No applications are commercialized for polyphosphazenes. The cyclic trimer hexachlorophosphazene ((NPCl2)3) is commercially available. It is the starting point for most commercial developments. High performance elastomers known as PN-F or Eypel-F have been manufactured for seals, O-rings, and dental devices. An aryloxy-substituted polymer has also been developed as a fire resistant expanded foam for thermal and sound insulation. The patent literature contains many references to cyclomatrix polymers derived from cyclic trimeric phosphazenes incorporated into cross-linked resins for fire resistant circuit boards and related applications.
References
Further information
Mem (computing)
In computing, mem is a measurement unit for the number of memory accesses used or needed by a process, function, instruction set, algorithm or data structure. Mem has applications in computational complexity theory, computing efficiency, combinatorial optimization, supercomputing, computational cost (algorithmic efficiency) and other computational metrics.
Example usage, when discussing processing time of a search tree node, for finding 10 × 10 Latin squares: "A typical node of the search tree probably requires about 75 mems (memory accesses) for processing, to check validity. Therefore the total running time on a modern computer would be roughly the time needed to perform mems." (Donald Knuth, 2011, The Art of Computer Programming, Volume 4A, p. 6).
Reducing mems as a speed and efficiency enhancement is not a linear benefit, as it may be traded off against increases in the cost of ordinary operations.
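Counting mems in the Knuth style can be sketched by instrumenting a data structure so that every element read is tallied; the one-mem-per-read granularity and the machine speed in the final comment are illustrative assumptions, not a standard:

```python
# Counting "mems" (memory accesses) in the spirit of Knuth's cost measure:
# wrap an array so every element read is tallied. The one-mem-per-read
# granularity and the assumed machine speed below are illustrative.

class CountingArray:
    def __init__(self, data):
        self.data = list(data)
        self.mems = 0

    def __getitem__(self, i):
        self.mems += 1  # one mem per element read
        return self.data[i]

    def __len__(self):
        return len(self.data)

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        v = arr[mid]  # exactly one mem per probe
        if v == target:
            return mid
        if v < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

arr = CountingArray(range(1024))
idx = binary_search(arr, 1000)
print(idx, arr.mems)  # → 1000 10: ten probes, so ten mems
# At an assumed 10**9 mems per second, this search would cost ~10 ns.
```

Dividing a total mem count by an assumed mems-per-second rate is exactly the kind of back-of-the-envelope running-time estimate Knuth makes in the quoted passage.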
PFOR compression
This optimization technique is also called PForDelta.
Although lossless compression methods like Rice, Golomb and PFOR are most often associated with signal processing codecs, the ability to optimize binary integers also adds relevance to reducing mems-versus-operations tradeoffs. (See Golomb coding for details.)
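The Rice coding mentioned above (the special case of Golomb coding with a power-of-two parameter M = 2**k) can be sketched as follows; this is a minimal illustration with an arbitrary choice of k = 4, not the PFOR algorithm itself:

```python
# Minimal sketch of Rice coding (Golomb coding with M = 2**k): each
# non-negative integer is encoded as a unary-coded quotient followed by
# a k-bit binary remainder. The parameter k = 4 below is illustrative.

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")  # unary quotient, terminator, remainder

def rice_decode(bits, k):
    q = 0
    while bits[q] == "1":  # count leading 1s (the unary quotient)
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

k = 4
for n in (0, 7, 19, 100):
    assert rice_decode(rice_encode(n, k), k) == n  # round-trip check
print(rice_encode(19, k))  # → 100011 (quotient 1, remainder 0011)
```

Small integers get short codes, which is why such schemes pay off when most values are small and memory traffic is the bottleneck.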
See also
CAS latency
Clock signal
Clock rate
Computer performance
Instructions per second
Memoization
References
Breaking the Wall of the Quantum Computing Hype - MemComputing, Inc.
Dutch Furniture Awards
The Dutch Furniture Awards is a former annual furniture design competition in the Netherlands, organized from 1985 to 1998. This was an initiative of the Jaarbeurs Utrecht and the Vereniging van Vakbeurs Meubel (VVM).
Overview
This design prize was awarded annually. In 1985 it started with three prizes for furniture designs: the Award for the best Dutch furniture design, the Style prize, and the Furniture of the year. In the following year a fourth prize was introduced, the Prize for Young Designers. In later years, the Style prize was replaced by a prize for industrial product quality.
In addition to the main prize, one or more honorable mentions were also awarded in each category each year. With some regularity, the main prize in a category was not awarded at all if the jury felt that the product quality in that particular category had not been sufficient that year.
The entries for the Dutch Furniture Awards were exhibited annually, for a long time at the annual International Furniture Fair at the Jaarbeurs in Utrecht. In 1997 the exhibition took place at the Kunsthal Rotterdam and at the Woonbeurs in the Prins Bernhardhoeve in Zuidlaren. In 1998 the ceremony took place in the Promerskazerne in Naarden. The last presentation, in 1999, again took place at the Jaarbeurs, during the Interdecor home exhibition in Utrecht.
The jury
The jury usually consisted of three people per category with a well-known designer and a furniture manufacturer, regularly supplemented by past prize winners. Known permanent judges were Sem Aardewerk, Willem van Ast, Gerard van den Berg, Jan des Bouvrie, Rob Eckhardt, Ton Haas and Jan Pesman.
Other jury members included Thijs Asselbergs in 1985, and Karel Boonzaaijer
Award winners 1985-1999
See also
Dutch Design
Dutch Design Awards
Dutch design week
Rotterdam Design Award
References
Tapai
Tapai (also tapay or tape) is a traditional fermented preparation of rice or other starchy foods, and is found throughout much of Southeast Asia, especially in Austronesian cultures, and parts of East Asia. It refers to both the alcoholic paste and the alcoholic beverage derived from it. It has a sweet or sour taste
and can be eaten as is, used as an ingredient in traditional recipes, or fermented further to make rice wine (which in some cultures is also called tapai). Tapai is traditionally made with white rice or glutinous rice, but can also be made from a variety of carbohydrate sources, including cassava and potatoes. Fermentation is performed by a variety of moulds, including Aspergillus oryzae, Rhizopus oryzae, Amylomyces rouxii or Mucor species, and yeasts including Saccharomyces cerevisiae, Saccharomycopsis fibuliger, Endomycopsis burtonii and others, along with bacteria.
Etymology
Tapai is derived from Proto-Malayo-Polynesian *tapay ("fermented [food]"), which in turn is derived from Proto-Austronesian *tapaJ ("fermented [food]"). Derived cognates have come to refer to a wide variety of fermented foods throughout Austronesia, including yeasted bread and rice wine. Proto-Malayo-Polynesian *tapay-an also refers to large earthen jars originally used for this fermentation process. Cognates in modern Austronesian languages include tapayan (Tagalog), tapayan (Maguindanaon), tepayan (Iban), and tempayan (Javanese and Malay).
Starter culture
Tapai is made by inoculating a carbohydrate source with the required microorganisms in a starter culture. This culture has different names in different regions, shown in the table below. The culture can be captured from the wild by mixing rice flour with ground spices (including garlic, pepper, chili and cinnamon), cane sugar or coconut water, slices of ginger or ginger extract, and water to make a dough. The dough is pressed into round cakes, about 3 cm across and 1 cm thick, and left to incubate for two to three days on trays lined and covered with banana leaves. They are then dried and stored, ready for their next use.
Preparation
Traditional
Traditionally, cooked white rice or glutinous rice is fermented in tapayan jars. Depending on the length of fermentation and the processes used, tapai yields a wide range of end products. These include slightly fermented dough used for rice cakes (Filipino galapong); dried fermented cakes (Indonesian brem cakes); fermented cooked rice (Filipino buro, tapay, inuruban, binubudan, binuboran; Indonesian/Malaysian tapai or tape); fermented rice with shrimp (Filipino buro, balaobalao, balobalo, tag-ilo); fermented rice with fish (Filipino buro); or various rice wines (Filipino tapuy, tapey, bubod, basi, pangasi; Indonesian brem wine).
Modern
Fermented rice gruel/paste
In modern times, in addition to rice, different types of carbohydrates such as cassava or sweet potatoes can also be used. The general process is to wash and cook the target food, cool to about 30 °C, mix in some powdered starter culture, and rest in covered jars for one to two days. With cassava and sweet potato, the tubers are washed and peeled before cooking, then layered in baskets with starter culture sprinkled over each layer. The finished gruel will taste sweet with a hint of alcohol, and can be consumed as is, or left for several days more to become more sour.
Rice wine
Uses in cuisine
Indonesia
Tapai and its variants are usually consumed as they are: as sweet, mildly alcoholic snacks to accompany tea in the afternoon. Sweet fermented tapai is, however, often used as an ingredient in certain dishes. Sundanese cassava peuyeum is the main ingredient for colenak, a roasted fermented cassava tapai served with kinca, a sweet syrup made of grated coconut and liquid palm sugar. Colenak is a Sundanese portmanteau of dicocol enak, which translates to "tasty dip". Tapai uli is a roasted block of bland-tasting ketan or pulut (glutinous rice) served with sweet tapai ketan or tapai pulut. The peuyeum goreng or tapai goreng, known in Javanese as rondho royal, is deep-fried battered cassava tapai, another example of Indonesian gorengan (assorted fritters).
In beverages, tapai, both cassava or glutinous rice, might be added into sweet iced concoction desserts, such as es campur and es doger.
Philippines
In the Philippines, there are various tapay-derived dishes and drinks. They were originally referred to by the term tinapay (literally "done through tapay"), as recorded by Antonio Pigafetta, but the term tinapay is now restricted to "bread" in modern Filipino languages. The most common use of fermented rice is in galapong, a traditional Filipino viscous rice dough made by soaking (and usually fermenting) uncooked glutinous rice overnight and then grinding it into a paste. It is used as a base for various kakanin rice cakes (notably puto and bibingka). Fermented gruel-type tapay are also common, with various ethnic groups having their own versions, like Tagalog and Kapampangan buro, the Ifugao binuburan, and the Maranao and Maguindanao tapay. These are usually traditionally fermented with or paired with fish or shrimp (similar to Japanese narezushi), as in burong isda, balao-balao, or tinapayan. Rice wines derived from tapay include the basi of Ilocos and the tapuy of Banaue and Mountain Province. Tapuy is itself the end product of binuburan allowed to ferment fully.
See also
References
External links
Dominic Anfiteatro's page on Asian cultured foods
Fermented foods
Bruneian cuisine
East Timorese cuisine
Indonesian snack foods
Malaysian snack foods
Filipino cuisine
Cassava dishes

Gaisi Takeuti

Gaisi Takeuti (1926–2017) was a Japanese mathematician, known for his work in proof theory.
After graduating from Tokyo University, he went to Princeton to study under Kurt Gödel.
He later became a professor at the University of Illinois at Urbana–Champaign. Takeuti was president (2003–2009) of the Kurt Gödel Society, having worked on the book Memoirs of a Proof Theorist: Gödel and Other Logicians. His goal was to prove the consistency of the real numbers. To this end, Takeuti's conjecture speculates that a sequent formalisation of second-order logic has cut-elimination. He is also known for his work on ordinal diagrams with Akiko Kino.
Publications
2013 Dover reprint
Notes
External links
Presidents of the Kurt Gödel Society
Takeuti Symposium (contains relevant birthdate information)
1926 births
2017 deaths
Japanese logicians
20th-century Japanese philosophers
21st-century Japanese philosophers
Proof theorists
University of Tokyo alumni
University of Illinois Urbana-Champaign faculty

C8H3ClO3

The molecular formula C8H3ClO3 (molar mass: 182.56 g/mol, exact mass: 181.9771 u) may refer to:
3-Chlorophthalic anhydride
4-Chlorophthalic anhydride
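The quoted molar mass is just the sum of standard atomic weights over the formula. A quick Python check (the atomic-weight values are rounded IUPAC figures, assumed here; the function name is ours):

```python
# Standard atomic weights in g/mol (rounded IUPAC values; treat as assumptions).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over an {element: count} mapping, in g/mol."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

mass = molar_mass({"C": 8, "H": 3, "Cl": 1, "O": 3})   # C8H3ClO3
```

The sum 8×12.011 + 3×1.008 + 35.45 + 3×15.999 comes to about 182.56 g/mol, matching the figure above.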
Molecular formulas

Variable-star designation

In astronomy, a variable-star designation is a unique identifier given to variable stars. It extends the Bayer designation format, with an identifying label (as described below) preceding the Latin genitive of the name of the constellation in which the star lies. The identifying label can be one or two Latin letters or a V plus a number (e.g. V399). Examples are R Coronae Borealis, YZ Ceti, V603 Aquilae. (See List of constellations for a list of constellations and the genitive forms of their names.)
Naming
The current naming system is:
Stars with existing Greek letter Bayer designations are not given new designations.
Otherwise, start with the letter R and go through Z.
Continue with RR–RZ, then use SS–SZ, TT–TZ and so on until ZZ.
After ZZ return to the beginning of the Latin alphabet and use AA–AZ, BB–BZ, CC–CZ, and so on, until reaching QZ, but omitting the letter J in either first or second position.
Abandon the Latin letters after all 334 combinations of letters and start naming stars with V335, V336, and so on.
The second letter is never nearer the beginning of the alphabet than the first, e.g., no star can be BA, CA, CB, DA and so on.
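The rules above amount to a fixed enumeration of 334 letter labels followed by V-numbers. A minimal Python sketch (the function name is ours) that returns the nth label for a constellation:

```python
def variable_star_label(n: int) -> str:
    """Return the nth variable-star label for a constellation (n >= 1),
    following the scheme described above."""
    letters = "ABCDEFGHIKLMNOPQRSTUVWXYZ"      # the Latin alphabet with J omitted
    labels = list("RSTUVWXYZ")                  # single letters R..Z
    for first in "RSTUVWXYZ":                   # RR..RZ, SS..SZ, ..., ZZ
        labels += [first + second for second in letters[letters.index(first):]]
    for first in letters[:letters.index("R")]:  # AA..AZ, BB..BZ, ..., QZ
        labels += [first + second for second in letters[letters.index(first):]]
    assert len(labels) == 334                   # all 334 letter combinations
    return labels[n - 1] if n <= 334 else f"V{n}"
```

For example, the 10th variable discovered in a constellation gets RR, the 334th gets QZ, and the 335th gets V335.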
History
In the early 19th century few variable stars were known, so it seemed reasonable to use the letters of the Latin script. Because very few constellations contained stars with uppercase Latin-letter Bayer designations later than Q, the letter R was chosen as a starting point so as to avoid confusion with letters used for spectral types or with the (now rarely used) Latin-letter Bayer designations. Although Lacaille had used the uppercase letters R–Z in a few cases, for example X Puppis (HR 2548), these designations were either dropped or accepted as variable star designations. The star T Puppis was accepted by Argelander as a variable star and is included in the General Catalogue of Variable Stars with that designation, but is now classed as non-variable.
This variable star naming convention was developed by Friedrich W. Argelander. There is a widespread belief according to which Argelander chose the letter R for German rot or French rouge, both meaning "red", because many variable stars known at that time appear red. However, Argelander's own statement disproves this.
By 1836, even the letter S had only been used in one constellation, Serpens. With the advent of photography the number of variables piled up quickly, and variable star names soon fell into the Bayer-trap of reaching the end of the alphabet while still having stars to name. After two subsequent supplementary double-lettering systems hit similar limits, numbers were finally introduced.
As with all categories of astronomical objects, names are assigned by the International Astronomical Union (IAU). The IAU delegates the task to the Sternberg Astronomical Institute and the Institute of Astronomy of the Russian Academy of Sciences in Moscow, Russia. Sternberg publishes the General Catalogue of Variable Stars (GCVS), which is amended approximately once every two years by the publication of a new Name-List of Variable Stars. For example, in December 2011, the 80th Name-List of Variable Stars, Part II, was released, containing designations for 2,161 recently discovered variable stars, which brought the total number in the GCVS to 45,678 variable stars. Among the newly designated objects were V0654 Aurigae, V1367 Centauri, and BU Coronae Borealis.
Footnotes
See also
star catalogue
star designation
References
Further reading
designation
Stellar astronomy

National Data Repository

A National Data Repository (NDR) is a data bank that seeks to preserve and promote a country's natural resources data, particularly data related to the petroleum exploration and production (E&P) sector.
A National Data Repository is normally established by an entity that governs, controls and supports the exchange, capture, transfer and distribution of E&P information, with the ultimate aim of providing the State with the tools and information to assure the growth, governability, control, independence and sovereignty of the industry.
The two fundamental reasons for a country to establish an NDR are to preserve data generated inside the country by the industry, and to promote investments in the country by utilizing data to reduce the exploration, production, and transportation business risks.
Countries take different approaches towards preserving and promoting their natural resources data. The approach varies according to a country's natural resources policies, level of openness, and its attitude towards foreign investment.
Data types
NDRs store a vast array of data related to a country's natural resources. This includes wells, well log data, well reports, core samples, seismic surveys, post-stack seismic, field data/tapes, seismic (acquisition/processing) reports, production data, geological maps and reports, license data and geological models.
Funding models
Some NDRs are financed entirely by a country's government. Others are industry-funded. Still others are hybrid systems, funded partly by industry and partly by government.
NDRs typically charge fees for data requests and for data loading. The cost differs significantly between countries. In some cases an annual membership is charged to oil companies to store and access the data in the NDR.
Standards body
Energistics is the global energy standards resource center for the upstream oil and gas industry.
Energistics National Data Repository Work Group:
Global regulators of upstream oil and natural gas information, including seismic, drilling, production and reservoir data, formed the National Data Repository (NDR) Work Group in 2008 to collaborate on the development of data management standards and to assist emerging nations with hydrocarbon reserves to better collect, maintain and deliver oil and gas data to the public and to the industry.
Ten countries, led by the Netherlands, Norway and the United Kingdom, formed NDR to share best practices and to formalize the development and deployment of data management standards for regulatory agencies. The other countries involved in the NDR Work Group's formation are Australia, Canada, India, Kenya, New Zealand, South Africa and the United States.
Annual NDR Conference: Approximately every 18 months Energistics organizes a National Data Repository Conference. The purpose is to provide government and regulatory agencies from around the world an opportunity to attend a series of workshops dedicated to developing data exchange standards, improving communications with the oil and gas industry and learning data management techniques for natural resources information.
Society of Exploration Geophysicists and The International Oil and Gas Producers Association
The SEG is the custodian of the SEG standards, which are used for the exchange, retention and release of seismic data. They are commonly used by National Data Repositories, with SEG-D and SEG-Y being the field and processed data exchange formats respectively.
NDRs around the world
See also
Energistics
Norwegian Petroleum Directorate
Oil and Gas Authority
Oil and gas industry in the United Kingdom
Petroleum exploration in Guyana
Professional Petroleum Data Management Association (PPDM)
Notes
External links
Energistics: National Data Repository Work Group
National Data Repositories: the case for open data in the oil and gas industry
Society of Exploration Geophysicists
Data management
Open standards
Hydrocarbons
Geophysics organizations

Tricarbon

Tricarbon (systematically named 1λ2,3λ2-propadiene and catena-tricarbon) is an inorganic compound with the chemical formula C3 (also written [C(μ-C)C]). It is a colourless gas that only persists in dilution or solution as an adduct. It is one of the simplest unsaturated carbenes. Tricarbon can be found in interstellar space and can be produced in the laboratory by a process called laser ablation.
Natural occurrence
Tricarbon is a small carbon cluster first spectroscopically observed in the early 20th century in the tail of a comet by William Huggins and subsequently identified in stellar atmospheres. Small carbon clusters like tricarbon and dicarbon are regarded as soot precursors and are implicated in the formation of certain industrial diamonds and in the formation of fullerenes.
C3 has also been identified as a transient species in various combustion reactions.
Properties
Chemical properties
The chemical properties of C3 were investigated in the 1960s by Professor Emeritus Philip S. Skell of Pennsylvania State University, who showed that certain reactions of carbon vapor indicated its generation, such as the reaction with isobutylene to produce 1,1,1',1'-tetramethyl-bis-ethanoallene.
Physical properties
The ground-state molecular geometry of tricarbon has been identified as linear via its characteristic symmetric and antisymmetric stretching and bending vibrational modes; its bond lengths of 129 to 130 picometres correspond to those of alkenes. The ionization potential has been determined experimentally to be 11 to 13.5 electronvolts. In contrast to the linear neutral molecule, the tricarbon cation is bent.
Nomenclature
The systematic names 1λ2,3λ2-propadiene, and μ-carbidodicarbon, valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively.
In appropriate contexts, tricarbon can be viewed as propadiene with four hydrogen atoms removed, or as propane with eight hydrogen atoms removed; as such, propadienediylidene or propanetetraylidene, respectively, may be used as context-specific systematic names, according to substitutive nomenclature. By default, these names pay no regard to the radicality of the tricarbon molecule. In even more specific contexts, these can also name the non-radical singlet state, whereas the diradical state is named propadienediylylidene or propanediyldiylidene, and the tetraradical state is named propadienetetrayl or propanetetraylylidene.
See also
Hydrocarbons
Alkenes
List of molecules in interstellar space
Cyclopropatriene
References
Further reading
Astrochemistry
Allotropes of carbon
Homonuclear triatomic molecules

Polyconic projection class

Polyconic can refer either to a class of map projections or to a specific projection known less ambiguously as the American polyconic projection. Polyconic as a class refers to those projections whose parallels are all non-concentric circular arcs, except for a straight equator, and the centers of these circles lie along a central axis. This description applies to projections in equatorial aspect.
Polyconic projections
Some of the projections that fall into the polyconic class are:
American polyconic projection—each parallel becomes a circular arc having true scale, the same scale as the central meridian
Latitudinally equal-differential polyconic projection
Rectangular polyconic projection
Van der Grinten projection—projects entire earth into one circle; all meridians and parallels are arcs of circles.
Nicolosi globular projection—typically used to project a hemisphere into a circle; all meridians and parallels are arcs of circles.
A series of polyconic projections, each in a circle, was also presented by Hans Mauer in 1922, who also presented an equal-area polyconic in 1935. Another series by Georgiy Aleksandrovich Ginzburg appeared starting in 1949.
Most polyconic projections, when used to map the entire sphere, produce an "apple-shaped" map of the world.
There are many "apple-shaped" projections, almost all of them obscure.
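For concreteness, the spherical forward equations of the American polyconic can be sketched in Python (formulas as given in Snyder's "Map Projections: A Working Manual"; the function name is ours). Each parallel φ maps to a circular arc of radius cot φ centred on the central axis:

```python
from math import sin, cos, tan, radians

def american_polyconic(lat, lon, lat0=0.0, lon0=0.0):
    """Forward American polyconic projection on a unit sphere.
    Each parallel maps to a circular arc of radius cot(lat);
    the equator maps to a straight line."""
    phi, lam, phi0 = radians(lat), radians(lon - lon0), radians(lat0)
    if phi == 0.0:                 # equator: straight line at true scale
        return lam, -phi0
    e = lam * sin(phi)             # angle swept along the parallel's arc
    cot = 1.0 / tan(phi)           # arc radius = cot(phi)
    return cot * sin(e), (phi - phi0) + cot * (1.0 - cos(e))
```

A quick consistency check: points on the 30° parallel all lie at distance cot 30° from the arc's centre at (0, φ − φ0 + cot φ), which is what makes the parallels the non-concentric circular arcs described above.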
See also
List of map projections
References
External links
Table of examples and properties of all common projections, from radicalcartography.net
Map projections

Monkland Railways

The Monkland Railways was a railway company formed in 1848 by the merger of three "coal railways" that had been built to serve coal and iron pits around Airdrie in Central Scotland, and connect them to canals for onward transport of the minerals. The newly formed company had a network stretching from Kirkintilloch to Causewayend, near Linlithgow. These coal railways had had mixed fortunes; the discovery of blackband ironstone and the development of the iron smelting industry around Coatbridge had led to phenomenal success, but hoped-for mineral discoveries in the moorland around Slamannan had been disappointing. The pioneering nature of the railways left them with a legacy of obsolete track and locomotives, and new, more modern, railways were being built around them.
The new company responded with connections to other lines, and to Bo'ness Harbour, and built new lines to Bathgate, but it was taken over by the Edinburgh and Glasgow Railway in 1865. Much of the network was dependent on proximity to pits and ironworks and as those became worked out or declined, the traffic on the network declined too, but the Coatbridge - Airdrie - Bathgate line remained open for passengers until 1956. The section east of Airdrie then closed, except for minor freight movements, but it was reopened in 2010, forming a through passenger route between Glasgow and Edinburgh via Airdrie and Bathgate. Part of the Bo'ness extension line was re-opened as the Bo'ness and Kinneil Railway, a heritage line. The remainder of the system has closed.
The North Monkland Railway was an independent line built to serve pits and quarries to the north of Airdrie beyond the reach of the Monkland Railways system. It opened in 1878 and was taken over in 1888, but it closed in the 1960s.
Origins: the coal railways
Monkland and Kirkintilloch Railway
In 1826 the Monkland and Kirkintilloch Railway (M&KR) opened, with the primary purpose of carrying coal from the Monklands collieries, south of Airdrie to Kirkintilloch, from where it could continue to market in Glasgow and Edinburgh over the Forth and Clyde Canal. As a pioneering railway, it adopted a track gauge of 4 ft 6 in, and at first operated as a toll line, allowing independent hauliers to move wagons, using horse traction. It later acquired steam locomotives and ran trains itself. At first it was successful, and when the iron smelting industry became a huge success within the railway's area, it became even more successful.
Ballochney Railway
As coal extraction developed, pits were opened further north and east than the M&KR reached, and the Ballochney Railway was constructed to serve some of them, running from Kipps, near Coatbridge, to pits around Arbuckle and Clarkston, and a quarry. It opened in 1828. The area it reached was on high ground, and two rope-worked inclines were necessary to gain altitude.
Garnkirk and Glasgow Railway
The Garnkirk and Glasgow Railway was opened in 1831, connecting the Monklands directly to Glasgow without the need for transshipment to a canal.
Wishaw and Coltness Railway
The Wishaw and Coltness Railway opened from 1833, connecting iron pits and works further east to Whifflet (then spelt Whifflat) for access to the Coatbridge ironworks.
Slamannan Railway
There was a large area of undeveloped moorland between Airdrie and the banks of the Forth, and a railway was promoted to develop the region. There were optimistic ideas of serving new collieries in the area, as well as the advantage of connecting Monklands to Edinburgh more directly. The Slamannan Railway opened in 1840 between Arbuckle and Causewayend, a wharf on the Union Canal; it had a rope worked incline down to the wharf. Onward transport to Edinburgh involved transshipment to canal barges.
Main line railways
The M&KR and the Ballochney companies enjoyed huge commercial success as the iron smelting industry boomed around Coatbridge, and as successful new mineral extraction started around Airdrie, although the Slamannan company's sought-for new mineral business barely materialised. The coal railways collectively worked in a loose collaboration.
At the same time new intercity railways were being promoted, and suddenly the coal railways' disadvantages seemed dominant. Their near monopoly of mineral traffic in very small areas now seemed to exclude them from areas where new business was being developed, a limitation emphasised by their terminating points at canal basins, which required transshipment for traffic to reach its destination. Their primitive track on stone block sleepers and their distinct track gauge of 4 ft 6 in also necessitated transshipment where they connected with the new standard gauge lines. Their obsolete locomotives, horse haulage by independent hauliers in some parts, the rope-worked inclines and the antiquated operating methods were all considerable disadvantages.
In 1842 the Edinburgh and Glasgow Railway (E&GR) opened its main line (to Haymarket at first) on the standard gauge of 4 ft 8½ in with modern locomotives. At this time the Caledonian Railway was promoting a new trunk line from Carlisle to Glasgow and Edinburgh; it obtained its authorising act of Parliament in 1845 and opened in 1847/1848. It sought acquisition of the Wishaw and Coltness Railway and the Garnkirk and Glasgow Railway to get access to Glasgow, and it concluded a lease of those lines. Suddenly those lines were out of the group of mutually friendly coal railways, and soon they were simply part of the Caledonian Railway.
The three other coal railways (M&KR, Ballochney and Slamannan) decided that their interests lay in collaboration, and they formed a joint working arrangement from 29 March 1845; in effect the three companies worked as one.
In 1844 the M&KR had built a short spur to transshipment sidings with the E&GR at Garngaber, a little east of the present-day Lenzie station. The inconvenience of the transshipment emphasised the disadvantage of the now non-standard track gauge, and it was decided to change the track gauge to standard gauge. They got Parliamentary authority and made the change on 26 July and 27 July 1847.
Operating costs were high: from 1845 to 1848 the operating ratio for the three railways that formed the Monkland Railways averaged 55%. Giving evidence at the hearing of the Monklands Amalgamation Bill in 1848, George Knight, secretary and general manager of the three railways, explained that:
The Monklands complex consisted of 36 miles of railway proper and 12 miles of sidings, and connected with another 48 miles of private railways built by the various extractive and industrial interests. Although a through journey of 25 miles was possible on the system—from the eastern end of the Slamannan to the Kirkintilloch canal basin—30% of all traffic travelled less than a mile, and half of it less than 2½ miles. Hence locomotives were involved in a ceaseless pattern of stopping and shunting, and averaged only 24 miles per day against the 90 miles normal on the Edinburgh & Glasgow. The sidings were expensive to work, and even private sidings required main line points which had to be renewed every three or four years ... these numerous points also meant the employment of a large number of men to supervise them. Traders could also benefit from using the company's waggons, and were not charged for their use on sidings and private lines. [The waggons] averaged only 5¼ miles per day against 23 miles on the Edinburgh & Glasgow.
Formal merger
In 1846 it became clear that the E&GR directors favoured a purchase of the coal railways, giving it immediate access to the collieries and ironworks, and gaining possession of the territory against newly promoted lines. Such a sale appeared at first to please everyone, but Lancashire shareholders in the E&GR felt that the terms of such a takeover were too favourable to the small Scottish lines, and a major row broke out in the E&GR: the scheme was dropped. In this period, numerous other railways were promoted and alliances seemed to be formed and abandoned quickly, but the only large newcomers were the E&GR and the Caledonian Railway.
Having been rebuffed by the E&GR, the Monkland companies decided upon a formal merger, and obtained the necessary sanction by an act of Parliament (11 & 12 Vict. c. cxxxiv) on 14 August 1848. The new Monkland Railways Company was formed with a nominal share capital of £329,880, the sum of the capital of the three former companies; the shares were converted as follows:
Monkland and Kirkintilloch Railway £25 shares converted to £22 16s 0d in Monkland Railways shares
Ballochney Railway £25 shares converted to £40 10s 10d in Monkland Railways shares
Slamannan Railway £50 shares converted to £22 15s 10d in Monkland Railways shares.
With revenue of about £100,000 annually it was a profitable concern.
New lines
Slamannan Junction Railway
The Slamannan Railway terminated at Causewayend, a wharf on the Union Canal. This was close to the new E&GR main line, and a connection seemed desirable. An independent company, the Slamannan Junction Railway, was formed to build the link; the submission to Parliament for an act of Parliament was supported financially by the E&GR and the Monkland joint companies together. In fact its shareholders sold the company to the E&GR immediately after obtaining the enabling act of Parliament, the Slamannan Junction Railway Act 1844 (7 & 8 Vict. c. lxx), and the E&GR built the line from Bo'ness Junction (later renamed Manuel High Level) on the E&GR main line to Causewayend. The short line was completed by January 1847, but remained dormant until the Monkland lines altered their line to standard gauge, in August 1847.
Bo'ness
The harbour at Borrowstounness (Bo'ness) was also not far from Causewayend, and a connection to it was desirable, enabling export and coastwise mineral trade. In addition there were ironstone pits and blast furnaces at Kinneil. The nominally independent Slamannan and Borrowstounness Railway (S&BR) had been promoted by the Slamannan company to connect to Bo'ness Harbour, with a link to the E&GR west of Bo'ness Junction (later Manuel) so aligned as to allow through running from the Polmont direction to Bo'ness. The unbuilt line was absorbed into the Monkland Railways at the time of formation of that company, but the subscribed capital of £105,000 was to be kept separate. The Slamannan and Borrowstounness Railway Act 1846 (9 & 10 Vict. c. cvii) of 26 June 1846 specified that the Union Canal was to be crossed by a drawbridge or swing bridge, and that screens were to be provided to avoid frightening horses drawing barges on the canal. In fact the E&GR made considerable difficulties over the construction of the new bridge to pass the S&BR line under their own main line, and construction was delayed until 1848. With a resumption of friendly relations, it now appeared that some construction could be avoided if Slamannan to Bo'ness trains used the Slamannan Junction line to Bo'ness Junction on the E&GR and then the proposed Bo'ness Junction connection towards Bo'ness, so that trains would join and then immediately leave the E&GR main line.
In 1850, as construction was progressing, it was belatedly realised that the configuration of the junctions on the E&GR main line was such that a through movement would be impossible; trains would have to shunt back on the E&GR main line. In addition the E&GR made stipulations about the composition of the Monkland wagon wheels which were impracticable to comply with. Accordingly, the Monkland Railways decided (in May 1850) to complete the originally intended through line from Causewayend after all. The E&GR took umbrage at this and put further difficulties in the way of the underbridge construction, and disputation dragged on until May 1851. The Monkland Railways now obtained a fresh act of Parliament (14 & 15 Vict. c. lxii) authorising some deviations of the new line, and the substitution of a fixed bridge over the Union Canal.
The approach to Bo'ness Harbour itself was to be along the foreshore there, and the company was obliged to build a promenade on the sea side of the railway line there. John Wilson, the proprietor of important iron works at Kinneil obtained permission to run some mineral trains there while the line was still under construction, and the first trains ran from Arden on 17 March 1851, but opening from the E&GR line at Bo'ness Junction (Manuel) took place in early August 1851, with the undesirable backshunt on the E&GR main line now apparently permitted. Full opening of the through line took place on 22 December 1851.
Passenger traffic started, after some difficulties in obtaining approval, on 10 June 1856.
Bathgate
The Bathgate Chemical Works was established in 1851, in open country a mile or so south of the town. James Young, an industrial chemist, had developed an industrial process of manufacturing paraffin from torbanite, a type of oil shale. He had obtained a patent for the process in October 1850, and the torbanite had been discovered on the Torbanehill estate, about halfway between Bathgate and Whitburn. Young joined in partnership with Edward William Binney and Edward Meldrum and the Bathgate works started operations in February 1851. It was located alongside the Wilsontown, Morningside and Coltness Railway (WM&CR) on its branch to Bathgate.
The chemical works, the torbanite fields, and the coal deposits in the area generally were attractive as a source of revenue for the Monkland Railways, and they obtained an act of Parliament (16 & 17 Vict. c. xc) in July 1853, giving powers to construct a railway from Blackstone (often spelt Blackston) on the Slamannan line just east of Avonbridge to the WM&CR line near Boghead. Boghead is immediately south of Bathgate, and the new line would pass through the torbanite fields, but skirt past Bathgate and join the WM&CR facing away from the town, but towards the Works. In addition, a branch from the WM&CR to Armadale Toll and to Cowdenhead (about a mile west of Armadale town, later Woodend Junction, to collieries) was authorised.
A train of coal wagons passed along the Bathgate branch on 11 June 1855, apparently while the line was still in the possession of the contractors. The company applied for authority to run passenger trains to Bathgate; this was repeatedly refused: there were no platforms nor a turntable at Bathgate, nor any signalling there or at Blackstone. The Board of Trade Inspector visited the line in 1856 to review the proposals for passenger operation; he reported that there was no turntable at Bathgate, but that one had been ordered. He continued:
The Bathgate and Bo'ness [routes] form a junction at Blackstone; from thence the traffic of the two branches will be conducted separately along the single line common to both, as far as Avon Bridge, a distance of three-quarters of a mile, then they will be united in one train, and proceed to Glasgow. To prevent any danger along the portion of line common to the two branches, the Bathgate train, both in going and returning, will have the precedence: the signal man at Blackstone will have instructions not to turn off the signal of the Boness branch until the Bathgate train has passed on its way to Avon-Bridge; of the train proceeding to Bathgate and Boness, the latter will follow the Bathgate train at an interval not less than five minutes.
The turntable was provided, and Monkland Railways passenger operation to Bathgate started on 7 July 1856. The Bathgate station was at the end of Cochrane Street, and later became Bathgate Lower station.
Calderbank
The 1853 act also gave authority for a branch from Colliertree, near Rawyards, southwards to Brownsburn, where the Calderbank Iron Works would join it with an internal private railway. The Monkland Railways portion was to be 1 mile 32 chains (2.3 km). The mineral line was opened on 1 October 1855. (Some contemporary maps misleadingly refer to the Clarkston line at Rawyards as "the Brownsburn Branch".)
Closing the gap
The Monkland Iron and Steel Company had extensive mineral workings in the Armadale area at Cowdenhead, now connected to the extension from Bathgate, and their iron works was at Calderbank, near Airdrie. There was immediately a considerable traffic from the mines to the works, and it made a long detour, starting eastwards from Armadale, away from the direction of Calderbank, and then round via Slamannan. The company observed that the gap of ten miles could be closed relatively cheaply, and a direct line would also connect worthwhile coalfields on the way, as well as the important paper works at Caldercruix. An act of Parliament (20 & 21 Vict. c. lxxviii) was obtained for the purpose in July 1857 in the teeth of considerable opposition from rival promoters and others.
The act authorised a large number of branch connections and other lines, and these were constructed in priority order, with the central part of the through connection delayed.
First was a short westwards extension from Cowdenhead to Standhill Junction, and from there turning back to Craigmill (otherwise known as the Woodend Branch), opened on 1 November 1858, to serve the Coltness Iron Company's mineral workings there. Similarly, a short eastwards extension was made from a junction to the Clarkston Wester Monkland branch back to Stepends, with a short branch there for Wilson & Co of Summerlee Iron Works. Wilson built an internal network with a zigzag to gain height on Annies Hill. A further branch turned back from Barblues to Meadowhead Pit. The pit was close to the Ballochney workings, but the location was referred to then as Planes, later spelt Plains. These extensions were completed by early February 1860. However the Stepends branch was short-lived: it closed in 1878.
That left two sections. The first was the gap from Barblues (sometimes spelt Barbleus, near Stepends) to Standhill Junction (near Blackridge), where the junction was with the uncompleted Shotts Iron Works line (below). That section was completed by 27 April 1861, when a trial mineral train passed over the line; full opening to mineral trains came about 10 May 1861. This enabled through running from Coatbridge to Bathgate, but over the Ballochney inclines and running north of Airdrie.
The second gap was the line south of Airdrie, from Sunnyside Junction to Brownieside Junction, avoiding the rope worked inclines. This may have opened, also for mineral traffic only, in early August 1861.
Passenger working between Coatbridge and Bathgate started on 11 August 1862; however there was no direct route to Glasgow yet, except over the former Garnkirk railway Caledonian section.
The New Line is sometimes referred to as the Bathgate and Coatbridge Railway, but it was never independent of the Monkland Railways. However an independent Bathgate, Airdrie and Coatbridge Railway had been proposed in 1856.
Shotts iron works
The important iron works at Shotts was connected to the Wilsontown, Morningside and Coltness Railway, but the works owner evidently wanted an alternative carrier, and approached the Monklands company to propose a branch line southwards from the "new line". This was agreed to, and an act of Parliament (23 & 24 Vict. c. clxxviii) giving authority for the line was obtained in August 1860. The line opened by 5 February 1862. A short branch off the branch to West Benhar was built in 1864.
Absorbed by the E&GR
The Monkland Railways Company was absorbed by the Edinburgh and Glasgow Railway by the Edinburgh and Glasgow and Monkland Railways Amalgamation Act 1865 (28 & 29 Vict. c. ccxvii), dated 5 July 1865, on 31 July 1865. The following day, that company was itself absorbed by the North British Railway.
The larger company used the acquisition to consolidate its dominance of mineral traffic in the Monklands coalfield and in connection with the iron works in the area. The Monklands section it had acquired was profitable, although its operating costs were very high, and it was concentrated in mining areas generally remote from the large population centres. However the best of the mineral deposits had been worked out, and the focus of the extractive industries had shifted into Caledonian Railway territory.
The North British Railway set about rectifying the lack of good connection to Glasgow, and in 1871 the Coatbridge to Glasgow line was opened, from Whifflet. For the time being the Glasgow terminal was inconveniently located at College, later High Street, but the growth of daily travel to work by suburban train motivated the NBR to work towards a better network in the city. The Airdrie terminal of the Ballochney Railway (Hallcraig Street) was closed to passengers in 1870.
North Monkland Railway
Coal extraction continued to flourish in the second half of the nineteenth century, and new pits opened throughout the Monklands area. Many of these were remote from the network of the Monklands section of the North British Railway, and many private mineral branch lines and tramways were built to close the gaps. Quarrying was also an important activity.
A new railway was promoted to reach some of the pits and quarries north of the Ballochney and Slamannan lines, and the North Monkland Railway got an authorising act of Parliament (35 & 36 Vict. c. xci) on 18 July 1872. The line was opened on 18 February 1878, and carried goods and mineral traffic only. It ran from Kipps via Nettlehole and Greengairs, to join the Slamannan line at Southfield Row, an existing colliery spur south of Longriggend.
It connected into numerous collieries on the route, and many short mineral lines were built off the main line to connect the pits.
The company sold itself to the North British Railway with effect from 31 July 1888, the £10 shares being bought out at £6 each.
The twentieth century
The Monkland Railways were now just a network of branches of the North British Railway, concentrating on serving collieries and ironworks, and the communities that built up around them. The through Bathgate - Airdrie - Coatbridge line became an important secondary line for passengers and freight.
However many of the more remote localities were dependent on the mineral activity they served, and after World War I there was some geological exhaustion as well as competition from cheap foreign imports. This intensified after World War II, by which time the North British Railway had formed a constituent of the London and North Eastern Railway in 1923, and then been nationalised into the Scottish Region of British Railways in 1948. Now many of the pits and ironworks were declining substantially or closing, and the mineral branches closed with them.
The Rosehall branch had already closed in 1930, and the Slamannan line, passing through remote and thinly populated territory, closed in 1949. The Cairnhill line closed in the 1950s.
The communities of Airdrie and Coatbridge continued to flourish, enhanced by other economic activity associated with the West of Scotland, but the through line from Airdrie to Bathgate closed to passenger traffic in 1956.
A limited goods service continued on the line until 1 February 1982, but the line then closed completely, except for the short section from Airdrie to Moffat Mills, which remained open for goods traffic; however, this was sporadic.
The Benhar mines, the branch network based on the Westcraigs to Shotts Iron Works branch, closed in 1963, and the North Monkland section closed the following year, together with the Bathgate to Blackston Junction line. The original line to Kirkintilloch closed in 1965 except for a short section to Leckethall Siding, which continued until 1982. The Ballochney section closed in 1966.
Reopening
When the Airdrie to Bathgate section closed to goods traffic, a short stub was left from Airdrie to Moffat Mills. Although officially "open", it was in fact dormant for many years. As suburban passenger travel in Greater Glasgow experienced a revival, a short extension along this line to a Drumgelloch station, on the eastern margin of Airdrie, was electrified and opened in 1989.
The line onward from Drumgelloch to Bathgate was reopened on 12 December 2010 as an electrified railway with a frequent passenger service between Edinburgh and Glasgow. This proved remarkably successful. Difficult weather prevented immediate opening of all the intermediate stations, and Armadale opened on 4 March 2011, followed by a new Drumgelloch station, further east than the earlier one and close to the former Clarkston station site, on 6 March 2011.
Current operations
The largest section of the Monkland Railways network now in operation is the line between Coatbridge and Bathgate; it carries (2015) a well-patronised fifteen-minute-interval passenger service linking Helensburgh and Milngavie with Edinburgh.
The north-south line between Gartsherrie and Whifflet carries freight, and the Gartsherrie to Garnqueen section carries a passenger service to Cumbernauld, the remnant of the earlier anomaly where Caledonian express trains used this North British Railway section.
The remainder of the network is closed. The Ballochney inclines in the Airdrie area are still easy to identify, and the moorland area of the Slamannan line is relatively undeveloped, except nearer Airdrie where extensive open-cast mining has obliterated any remaining trace of the railway.
References
Sources
North British Railway
Mining railways
Early Scottish railway companies
Pre-grouping British railway companies
Closed railway lines in Scotland
Beeching closures in Scotland
Railway companies established in 1848
Railway companies disestablished in 1865
Standard gauge railways in Scotland
British companies disestablished in 1865
British companies established in 1848
Coal in Scotland | Monkland Railways | Engineering | 5,461 |
15,459,948 | https://en.wikipedia.org/wiki/Movile%20Cave | Movile Cave () is a cave near Mangalia, Constanța County, Romania discovered in 1986 by Cristian Lascu a few kilometers from the Black Sea coast. It is notable for its unique groundwater ecosystem abundant in hydrogen sulfide and carbon dioxide, but low in oxygen. Life in the cave has been separated from the outside for the past 5.5 million years and it is based completely on chemosynthesis rather than photosynthesis.
Similar caves where life partly or fully depends on chemosynthesis have been found in Ein-Nur Cave and Ayalon Cave (Israel), Frasassi Caves (Italy), Melissotrypa Cave (Elassona municipality, Greece), Tashan Cave (Iran), caves in the Sharo-Argun Valley in the Caucasus Mountains, Lower Kane Cave and Cesspool Cave (Wyoming and Alleghany County, VA, USA), and Villa Luz Cave (Mexico).
Description
Movile Cave is a network of limestone galleries, portions of which are partially or fully submerged by hydrothermal waters. The temperature of the air and water is a constant 21°C (70°F) and the relative humidity is about 100%. Access to the cave is limited to a few researchers per year, to minimize external impact on the delicate ecosystem.
Chemical environment
The air in the cave is very different from the outer atmosphere. The level of oxygen is only a third to half of that found in open air (7–10% O2 in the cave atmosphere, compared to 21% O2 in air), while carbon dioxide is about one hundred times more concentrated (2–3.5% CO2 in the cave atmosphere, versus 0.04% CO2 in air). The air also contains 1–2% methane (CH4), and both the air and waters of the cave contain high concentrations of hydrogen sulfide (H2S) and ammonia (NH3). The water in the lake contains dissolved oxygen only in the first centimeter at most, and in some places only the first millimeter. Deeper down, the lake water becomes completely anoxic.
Biology
The cave is known to contain 57 animal species, among them leeches, spiders, pseudoscorpions, woodlice, a centipede (Cryptops speleorex), a water scorpion (Nepa anophthalma), and a snail. Of these, 37 are endemic.
The food chain is based on chemosynthesis by methane- and sulfur-oxidizing bacteria, which in turn release nutrients for other bacteria, and fungi. This forms microbial mats on the cave walls and the surface of lakes and ponds, which are grazed on by some of the animals. The grazers are then preyed on by predatory species. Nepa anophthalma is the only known cave-adapted water scorpion in the world. While animals have lived in the cave for 5.5 million years, not all of them arrived simultaneously. One of the most recent animals recorded is the cave's only species of snail, Heleobia dobrogica, which has inhabited the cave for slightly more than 2 million years.
Access
The cave is closed to the general public and only a few researchers are permitted inside each year, in order to minimize disturbance to the fragile ecosystem.
See also
, proposed worldwide biome supporting similar ecosystems
Hydrothermal vent microbial communities
Subterranean fauna
Troglofauna, small animals living in caves
Stygofauna, fauna living in groundwater and aquifers
References
General references
Jean Balthazar: Grenzen unseres Wissens [The Limits of Our Knowledge]. Orbis Verlag, Munich 2003, p. 268.
Inline citations
External links
The Movile Cave Project in the Internet Archive
La Grotte de Movile (fr.) in the Internet Archive
Life in Hell – Survivors of Darkness by Mona Lisa Production, France
Caves of Romania
Geography of Constanța County
Ecosystems
Endemism
Limestone caves | Movile Cave | Biology | 816 |
75,433,719 | https://en.wikipedia.org/wiki/Dronabinol/acetazolamide | Dronabinol/acetazolamide (investigational name IHL-42X) is a combination therapy under investigation for sleep apnea. It is developed by Incannex Healthcare.
References
Combination drugs | Dronabinol/acetazolamide | Chemistry | 45 |
149,082 | https://en.wikipedia.org/wiki/L%20game | The L game is a simple abstract strategy board game invented by Edward de Bono. It was introduced in his book The Five-Day Course in Thinking (1967).
Description
The L game is a two-player game played on a board of 4×4 squares. Each player has a 3×2 L-shaped tetromino, and there are two 1×1 neutral pieces.
Rules
On each turn, a player must first move their L piece, and then may optionally move either one of the neutral pieces. The game is won by leaving the opponent unable to move their L piece to a new position.
Pieces may not overlap, cover other pieces, or extend off the board. To move the L piece, a player picks it up and places it back on empty squares anywhere on the board. It may be rotated or even flipped over in doing so; the only requirement is that it end in a different position from the one it started in—thus covering at least one square it did not previously cover. To move a neutral piece, a player simply picks it up and places it in an empty square anywhere on the board.
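The move rule above can be checked combinatorially: the L tetromino has 8 distinct orientations (4 rotations × 2 reflections, since the piece is asymmetric), and each orientation's 2×3 bounding box fits on the 4×4 board in 6 positions, giving 48 possible placements for each player's piece. A minimal Python sketch illustrating this enumeration (the function names are illustrative, not from any published implementation):

```python
from itertools import product

BOARD = 4
# Base L tetromino: a column of three squares with a one-square foot (3x2 box)
BASE = {(0, 0), (1, 0), (2, 0), (2, 1)}

def normalize(cells):
    """Shift a set of cells so its top-left corner is at (0, 0)."""
    mr = min(r for r, _ in cells)
    mc = min(c for _, c in cells)
    return frozenset((r - mr, c - mc) for r, c in cells)

def orientations(cells):
    """All distinct rotations and reflections of a shape."""
    seen = set()
    for flip in (False, True):
        cur = {(r, -c) for r, c in cells} if flip else set(cells)
        for _ in range(4):
            seen.add(normalize(cur))
            cur = {(c, -r) for r, c in cur}  # rotate 90 degrees
    return seen

def placements():
    """All distinct placements of the L piece on the 4x4 board."""
    result = set()
    for shape in orientations(BASE):
        h = 1 + max(r for r, _ in shape)
        w = 1 + max(c for _, c in shape)
        for dr, dc in product(range(BOARD - h + 1), range(BOARD - w + 1)):
            result.add(frozenset((r + dr, c + dc) for r, c in shape))
    return result

print(len(orientations(BASE)))  # 8 orientations
print(len(placements()))        # 48 placements
```

A legal L move is then simply a change from one of these 48 placements to another that does not intersect the opponent's piece or the neutral pieces.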
Strategy
One basic strategy is to use a neutral piece and one's own piece to block a 3×3 square in one corner, and use a neutral piece to prevent the opponent's L piece from swapping to a mirror-image position. Another basic strategy is to move an L piece to block a half of the board, and use the neutral pieces to prevent the opponent's possible alternate positions.
These positions can often be achieved once a neutral piece is left in one of the eight killer spaces on the perimeter of the board. The killer spaces are the spaces on the perimeter, but not in a corner. On the next move, one either makes the previously placed killer a part of one's square, or uses it to block a perimeter position, and makes a square or half-board block with one's own L and a moved neutral piece.
Analysis
In a game with two perfect players, neither will ever win or lose. The L game is small enough to be completely solvable. There are 2296 distinct valid arrangements of the pieces, not counting rotations or mirror images as new arrangements, and treating the two neutral pieces as identical. Any arrangement can be reached during the game, with it being any player's turn. Each player has lost in 15 of these arrangements, if it is that player's turn. The losing arrangements involve the losing player's L piece touching a corner. Each player will also soon lose to a perfect player in an additional 14 arrangements. A player will be able to at least force a draw (by playing forever without losing) from the remaining 2267 positions.
Even if neither player plays perfectly, defensive play can continue indefinitely if the players are too cautious to move a neutral piece to the killer positions. If both players are at this level, a sudden-death variant of the rules permits one to move both neutral pieces after moving. A player who can look three moves ahead can defeat defensive play using the standard rules.
See also
Tetromino
Reviews
Games & Puzzles #30
References
Other sources
External links
L game on Edward de Bono's official site (archived)
Interactive web-based L game written in JavaScript
Board games introduced in 1968
Abstract strategy games
Mathematical games
Solved games | L game | Mathematics | 684 |
8,748 | https://en.wikipedia.org/wiki/N%2CN-Dimethyltryptamine | N,N-Dimethyltryptamine (DMT or N,N-DMT) is a substituted tryptamine that occurs in many plants and animals, including humans, and which is both a derivative and a structural analog of tryptamine. DMT is used as a psychedelic drug and prepared by various cultures for ritual purposes as an entheogen.
DMT has a rapid onset, intense effects, and a relatively short duration of action. For those reasons, DMT was known as the "businessman's trip" during the 1960s in the United States, as a user could access the full depth of a psychedelic experience in considerably less time than with other substances such as LSD or psilocybin mushrooms. DMT can be inhaled, ingested, or injected and its effects depend on the dose, as well as the mode of administration. When inhaled or injected, the effects last about five to fifteen minutes. Effects can last three hours or more when orally ingested along with a monoamine oxidase inhibitor (MAOI), such as the ayahuasca brew of many native Amazonian tribes. DMT can produce vivid "projections" of mystical experiences involving euphoria and dynamic pseudohallucinations of geometric forms.
DMT is a functional analog and structural analog of other psychedelic tryptamines such as O-acetylpsilocin (4-AcO-DMT), psilocybin (4-PO-DMT), psilocin (4-HO-DMT), NB-DMT, O-methylbufotenin (5-MeO-DMT), and bufotenin (5-HO-DMT). Parts of the structure of DMT occur within some important biomolecules like serotonin and melatonin, making them structural analogs of DMT.
Human consumption
DMT is produced in many species of plants often in conjunction with its close chemical relatives 5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT) and bufotenin (5-OH-DMT). DMT-containing plants are commonly used in indigenous Amazonian shamanic practices. It is usually one of the main active constituents of the drink ayahuasca; however, ayahuasca is sometimes brewed with plants that do not produce DMT. It occurs as the primary psychoactive alkaloid in several plants including Mimosa tenuiflora, Diplopterys cabrerana, and Psychotria viridis. DMT is found as a minor alkaloid in snuff made from Virola bark resin in which 5-MeO-DMT is the main active alkaloid. DMT is also found as a minor alkaloid in bark, pods, and beans of Anadenanthera peregrina and Anadenanthera colubrina used to make Yopo and Vilca snuff, in which bufotenin is the main active alkaloid. Psilocin and psilocybin, the main psychoactive compounds in psilocybin mushrooms, are structurally similar to DMT.
The psychotropic effects of DMT were first studied scientifically by the Hungarian chemist and psychologist Stephen Szára, who performed research with volunteers in the mid-1950s. Szára, who later worked for the United States National Institutes of Health, had turned his attention to DMT after his order for LSD from the Swiss company Sandoz Laboratories was rejected on the grounds that the powerful psychotropic could be dangerous in the hands of a communist country.
DMT is generally not active orally unless it is combined with a monoamine oxidase inhibitor such as a reversible inhibitor of monoamine oxidase A (RIMA), for example, harmaline. Without a MAOI, the body quickly metabolizes orally administered DMT, and it therefore has no hallucinogenic effect unless the dose exceeds the body's monoamine oxidase's metabolic capacity. Other means of consumption such as vaporizing, injecting, or insufflating the drug can produce powerful hallucinations for a short time (usually less than half an hour), as the DMT reaches the brain before it can be metabolized by the body's natural monoamine oxidase. Taking an MAOI prior to vaporizing or injecting DMT prolongs and enhances the effects.
Clinical use research
Existing research on clinical use of DMT mostly focuses on its effects when exogenously administered as a drug. Although the scientific consensus is that DMT is a naturally occurring molecule in humans, the effects of endogenous DMT in humans (and more broadly in mammals) is still not well understood.
Dimethyltryptamine (DMT), an endogenous ligand of sigma-1 receptors (Sig-1Rs), acts against systemic hypoxia. Research demonstrates DMT reduces the number of apoptotic and ferroptotic cells in mammalian forebrain and supports astrocyte survival in an ischemic environment. According to these data, DMT may be considered as adjuvant pharmacological therapy in the management of acute cerebral ischemia.
DMT is studied as a potential treatment for Parkinson's disease in a Phase 1/2 clinical trial.
SPL026 (DMT fumarate) is currently undergoing phase II clinical trials investigating its use alongside supportive psychotherapy as a potential treatment for major depressive disorder. Additionally, a safety study is underway to investigate the effects of combining SSRIs with SPL026.
Neuropharmacology
Recently, researchers discovered that N,N-dimethyltryptamine is a potent psychoplastogen, a compound capable of promoting rapid and sustained neuroplasticity that may have wide-ranging therapeutic benefit.
Quantities of dimethyltryptamine and O-methylbufotenin were found present in the cerebrospinal fluid of humans in a psychiatric study.
Effects
Subjective psychedelic experiences
Subjective experiences of DMT include profound time-dilatory, visual, auditory, tactile, and proprioceptive distortions and hallucinations, and other experiences that, by most firsthand accounts, defy verbal or visual description. Examples include perceiving hyperbolic geometry or seeing Escher-like impossible objects.
Several scientific experimental studies have tried to measure subjective experiences of altered states of consciousness induced by drugs under highly controlled and safe conditions.
Rick Strassman and his colleagues conducted a five-year-long DMT study at the University of New Mexico in the 1990s. The results provided insight about the quality of subjective psychedelic experiences. In this study participants received the DMT dosage via intravenous injection and the findings suggested that different psychedelic experiences can occur, depending on the level of dosage. Lower doses (0.01 and 0.05 mg/kg) produced some aesthetic and emotional responses, but not hallucinogenic experiences (e.g., 0.05 mg/kg had mild mood elevating and calming properties). In contrast, responses produced by higher doses (0.2 and 0.4 mg/kg) researchers labeled as "hallucinogenic" that elicited "intensely colored, rapidly moving display of visual images, formed, abstract or both". Comparing to other sensory modalities, the most affected was the visual. Participants reported visual hallucinations, fewer auditory hallucinations and specific physical sensations progressing to a sense of bodily dissociation, as well as experiences of euphoria, calm, fear, and anxiety. These dose-dependent effects match well with anonymously posted "trip reports" online, where users report "breakthroughs" above certain doses.
Strassman also highlighted the importance of the context where the drug has been taken. He claimed that DMT has no beneficial effects of itself, rather the context when and where people take it plays an important role.
It appears that DMT can induce a state or feeling wherein the person believes to "communicate with other intelligent lifeforms" (see "machine elves"). High doses of DMT produce a state that involves a sense of "another intelligence" that people sometimes describe as "super-intelligent", but "emotionally detached".
A 1995 study by Adolf Dittrich and Daniel Lamparter found that the DMT-induced altered state of consciousness (ASC) is strongly influenced by habitual rather than situative factors. In the study, researchers used three dimensions of the APZ questionnaire to examine ASC. The first dimension, oceanic boundlessness (OB), refers to dissolution of ego boundaries and is mostly associated with positive emotions. The second dimension, anxious ego-dissolution (AED), represents a disordering of thoughts and decreases in autonomy and self-control. Last, visionary restructuralization (VR) refers to auditory/visual illusions and hallucinations. Results showed strong effects within the first and third dimensions for all conditions, especially with DMT, and suggested strong intrastability of elicited reactions independently of the condition for the OB and VR scales.
Reported encounters with external entities
Entities perceived during DMT inebriation have been represented in diverse forms of psychedelic art. The term machine elf was coined by ethnobotanist Terence McKenna for the entities he encountered in DMT "hyperspace", also using terms like fractal elves, or self-transforming machine elves. McKenna first encountered the "machine elves" after smoking DMT in Berkeley in 1965. His subsequent speculations regarding the hyperdimensional space in which they were encountered have inspired a great many artists and musicians, and the meaning of DMT entities has been a subject of considerable debate among participants in a networked cultural underground, enthused by McKenna's effusive accounts of DMT hyperspace. Cliff Pickover has also written about the "machine elf" experience, in the book Sex, Drugs, Einstein, & Elves. Strassman noted similarities between self-reports of his DMT study participants' encounters with these "entities", and mythological descriptions of figures such as Ḥayyot haq-Qodesh in ancient religions, including both angels and demons. Strassman also argues for a similarity in his study participants' descriptions of mechanized wheels, gears and machinery in these encounters, with those described in visions of encounters with the Living Creatures and Ophanim of the Hebrew Bible, noting they may stem from a common neuropsychopharmacological experience.
Strassman argues that the more positive of the "external entities" encountered in DMT experiences should be understood as analogous to certain forms of angels:
Strassman's experimental participants also note that some other entities can subjectively resemble creatures more like insects and aliens. As a result, Strassman writes these experiences among his experimental participants "also left me feeling confused and concerned about where the spirit molecule was leading us. It was at this point that I began to wonder if I was getting in over my head with this research."
Hallucinations of strange creatures had been reported by Stephen Szara in a 1958 study in psychotic patients, in which he described how one of his subjects under the influence of DMT had experienced "strange creatures, dwarves or something" at the beginning of a DMT trip.
Other researchers of the entities seemingly encountered by DMT users describe them as "entities" or "beings" in humanoid as well as animal form, with descriptions of "little people" being common (non-human gnomes, elves, imps, etc.). Strassman and others have speculated that this form of hallucination may be the cause of alien abduction and extraterrestrial encounter experiences, which may occur through endogenously-occurring DMT.
Likening them to descriptions of rattling and chattering auditory phenomena described in encounters with the Hayyoth in the Book of Ezekiel, Rick Strassman notes that participants in his studies, when reporting encounters with the alleged entities, have also described loud auditory hallucinations, such as one subject reporting typically "the elves laughing or talking at high volume, chattering, twittering".
Near-death experience
A 2018 study found significant relationships between a DMT experience and a near-death experience (NDE). A 2019 large-scale study pointed that ketamine, Salvia divinorum, and DMT (and other classical psychedelic substances) may be linked to near-death experiences due to the semantic similarity of reports associated with the use of psychoactive compounds and NDE narratives, but the study concluded that with the current data it is neither possible to corroborate nor refute the hypothesis that the release of an endogenous ketamine-like neuroprotective agent underlies NDE phenomenology.
Physiological response
According to a dose-response study in human subjects, dimethyltryptamine administered intravenously slightly elevated blood pressure, heart rate, pupil diameter, and rectal temperature, in addition to elevating blood concentrations of beta-endorphin, corticotropin, cortisol, and prolactin; growth hormone blood levels rose equally in response to all doses of DMT, and melatonin levels were unaffected.
Conjecture regarding endogenous production and effects
In the 1950s, the endogenous production of psychoactive agents was considered a potential explanation for the hallucinatory symptoms of some psychiatric diseases; this is known as the transmethylation hypothesis. Several speculative and as yet untested hypotheses suggest that endogenous DMT is produced in the human brain and is involved in certain psychological and neurological states. DMT occurs naturally in small amounts in rat brains, human cerebrospinal fluid, and other tissues of humans and other mammals. Further, mRNA for INMT, the enzyme necessary for the production of DMT, is expressed in the human cerebral cortex, choroid plexus, and pineal gland, suggesting an endogenous role in the human brain. In 2011, Nicholas Cozzi of the University of Wisconsin School of Medicine and Public Health, and three other researchers, concluded that INMT, an enzyme associated with the biosynthesis of DMT and endogenous hallucinogens, is present in the non-human primate (rhesus macaque) pineal gland, retinal ganglion neurons, and spinal cord. Neurobiologist Andrew Gallimore (2013) suggested that while DMT might not have a modern neural function, it may have been an ancestral neuromodulator once secreted in psychedelic concentrations during REM sleep, a function now lost.
Adverse effects
Acute adverse psychological reaction
DMT may trigger psychological reactions, known colloquially as a "bad trip", such as intense fear, paranoia, anxiety, panic attacks, and substance-induced psychosis, particularly in predisposed individuals.
Addiction and dependence liability
DMT, like other serotonergic psychedelics, is considered to be non-addictive with low abuse potential. A study examining substance use disorders under DSM-IV criteria reported that hallucinogens almost never produced dependence, unlike psychoactive drugs of other classes such as stimulants and depressants. At present, no studies have reported a drug withdrawal syndrome upon termination of DMT use, and the dependence potential of DMT and the risk of sustained psychological disturbance may be minimal when it is used infrequently; however, the physiological dependence potential of DMT and ayahuasca has not yet been documented convincingly.
Tolerance
Unlike other classical psychedelics, tolerance does not seem to develop to the subjective effects of DMT. Studies report that DMT did not exhibit tolerance upon repeated administration in twice-daily sessions, separated by 5 hours, for 5 consecutive days; field reports suggest a refractory period of only 15 to 30 minutes, while plasma levels of DMT were nearly undetectable 30 minutes after intravenous administration. Another study of four closely spaced DMT infusion sessions at 30-minute intervals also suggests no tolerance buildup to the psychological effects of the compound, although heart rate responses and neuroendocrine effects were diminished with repeated administration. Similarly to DMT by itself, tolerance does not appear to develop to ayahuasca. A fully hallucinogenic dose of DMT did not demonstrate cross-tolerance in human subjects who were highly tolerant to LSD; researchers suggest that DMT exhibits unique pharmacological properties compared to other classical psychedelics.
Long-term use
There have been no serious adverse effects reported from long-term use of DMT, apart from acute cardiovascular events. Repeated and single administrations of DMT produce marked changes in the cardiovascular system, with increases in systolic and diastolic blood pressure; although the changes were not statistically significant, a robust trend towards significance was observed for systolic blood pressure at high doses.
Drug interactions
DMT is inactive when ingested orally due to metabolism by MAO, and DMT-containing drinks such as ayahuasca have been found to contain MAOIs, in particular, harmine and harmaline. Life-threatening reactions such as serotonin syndrome (SS) may occur when MAOIs are combined with certain serotonergic medications such as SSRI antidepressants. Serotonin syndrome has also been reported with tricyclic antidepressants, opioids, analgesics, and antimigraine drugs; caution is advised when an individual has recently used dextromethorphan (DXM), MDMA, ginseng, or St. John's wort.
Chronic use of SSRIs, TCAs, and MAOIs diminishes the subjective effects of psychedelics, due to presumed SSRI-induced 5-HT2A receptor downregulation and MAOI-induced 5-HT2A receptor desensitization. The interactions between psychedelics and antipsychotics or anticonvulsants are not well documented; however, reports indicate that co-use of psychedelics with mood stabilizers such as lithium may provoke seizures and dissociative effects in individuals with bipolar disorder.
Routes of administration
Inhalation
A standard dose for vaporized DMT is 20–60 milligrams, depending highly on the efficiency of vaporization as well as body weight and personal variation. In general, this is inhaled in a few successive breaths, but lower doses can be used if the user can inhale it in fewer breaths (ideally one). The effects last for a short period of time, usually 5 to 15 minutes, dependent on the dose. The onset after inhalation is very fast (less than 45 seconds) and peak effects are reached within a minute. In the 1960s, DMT was known as a "businessman's trip" in the US because of the relatively short duration (and rapid onset) of action when inhaled. DMT can be inhaled using a bong, typically when sandwiched between layers of plant matter, using a specially designed pipe, or by using an e-cigarette once it has been dissolved in propylene glycol and/or vegetable glycerin. Some users have also started using vaporizers meant for cannabis extracts ("wax pens") for ease of temperature control when vaporizing crystals. A DMT-infused smoking blend is called Changa, and is typically used in pipes or other utensils meant for smoking dried plant matter.
Intravenous injection
In a study conducted from 1990 through 1995, University of New Mexico psychiatrist Rick Strassman found that some volunteers injected with high doses of DMT reported experiences with perceived alien entities. Usually, the reported entities were experienced as the inhabitants of a perceived independent reality that the subjects reported visiting while under the influence of DMT.
In 2023, a study investigated a novel method of DMT administration involving a bolus injection paired with a constant-rate infusion, with the goal of extending the DMT experience.
Oral
DMT is broken down by the enzyme monoamine oxidase through a process called deamination, and is quickly inactivated orally unless combined with a monoamine oxidase inhibitor (MAOI). The traditional South American beverage ayahuasca is derived by boiling Banisteriopsis caapi with leaves of one or more plants containing DMT, such as Psychotria viridis, Psychotria carthagenensis, or Diplopterys cabrerana. Banisteriopsis caapi contains harmala alkaloids, highly active reversible inhibitors of monoamine oxidase A (RIMAs), rendering the DMT orally active by protecting it from deamination. A variety of different recipes are used to make the brew, depending on the purpose of the ayahuasca session or the local availability of ingredients. Two common sources of DMT in the western US are reed canary grass (Phalaris arundinacea) and Harding grass (Phalaris aquatica). These invasive grasses contain low levels of DMT and other alkaloids, but also contain gramine, which is toxic and difficult to separate. In addition, Jurema (Mimosa tenuiflora) shows evidence of DMT content: the pink layer in the inner rootbark of this small tree contains a high concentration of N,N-DMT.
Taken orally with an RIMA, DMT produces a long-lasting (over three hours), slow, deep metaphysical experience similar to that of psilocybin mushrooms, but more intense.
The intensity of orally administered DMT depends on the type and dose of MAOI administered alongside it. When ingested with 120 mg of harmine (a RIMA and member of the harmala alkaloids), 20 mg of DMT was reported to have psychoactive effects by author and ethnobotanist Jonathan Ott. Ott reported that to produce a visionary state, the threshold oral dose was 30 mg DMT alongside 120 mg harmine. This is not necessarily indicative of a standard dose, as dose-dependent effects may vary due to individual variations in drug metabolism.
History
Naturally occurring substances (of both vegetable and animal origin) containing DMT have been used in South America since pre-Columbian times.
DMT was first synthesized in 1931 by Canadian chemist Richard Helmuth Fredrick Manske. Its discovery as a natural product is generally credited to Brazilian chemist and microbiologist Oswaldo Gonçalves de Lima, who isolated an alkaloid he named nigerina (nigerine) from the root bark of Mimosa tenuiflora in 1946. However, in a careful review of the case, Jonathan Ott shows that the empirical formula for nigerine determined by Gonçalves de Lima, which notably contains an atom of oxygen, can match only a partial, "impure" or "contaminated" form of DMT. It was only in 1959, when Gonçalves de Lima provided American chemists a sample of Mimosa tenuiflora roots, that DMT was unequivocally identified in this plant material. Less ambiguous is the case of isolation and formal identification of DMT in 1955 in seeds and pods of Anadenanthera peregrina by a team of American chemists led by Evan Horning (1916–1993). Since 1955, DMT has been found in a number of organisms: in at least fifty plant species belonging to ten families, and in at least four animal species, including one gorgonian and three mammalian species (including humans).
The hallucinogenic properties of DMT were not scientifically investigated until 1956, by Hungarian chemist and psychiatrist Stephen Szara. In his paper Dimethyltryptamin: Its Metabolism in Man; the Relation of its Psychotic Effect to the Serotonin Metabolism, Szara employed synthetic DMT, prepared by the method of Speeter and Anthony, which was administered to 20 volunteers by intramuscular injection. Urine samples were collected from these volunteers for the identification of DMT metabolites. This work is considered the converging link between the chemical structure of DMT and its cultural consumption as a psychoactive drug and religious sacrament.
Another historical milestone is the discovery of DMT in plants frequently used by Amazonian natives as additives to the vine Banisteriopsis caapi to make ayahuasca decoctions. In 1957, American chemists Francis Hochstein and Anita Paradies identified DMT in an "aqueous extract" of leaves of a plant they named Prestonia amazonicum [sic] and described as "commonly mixed" with B. caapi. The lack of a proper botanical identification of Prestonia amazonica in this study led American ethnobotanist Richard Evans Schultes (1915–2001) and other scientists to raise serious doubts about the claimed plant identity. The mistake likely led the writer William Burroughs to regard the DMT he experimented with in Tangier in 1961 as "Prestonia". Better evidence was produced in 1965 by French pharmacologist Jacques Poisson, who isolated DMT as the sole alkaloid from leaves, provided and used by Aguaruna Indians, identified as having come from the vine Diplopterys cabrerana (then known as Banisteriopsis rusbyana). Published in 1970, the first identification of DMT in the plant Psychotria viridis, another common additive of ayahuasca, was made by a team of American researchers led by pharmacologist Ara der Marderosian. Not only did they detect DMT in leaves of P. viridis obtained from Kaxinawá indigenous people, but they also were the first to identify it in a sample of an ayahuasca decoction, prepared by the same indigenous people.
Chemistry
Appearance and form
DMT is commonly handled and stored as a hemifumarate, as other DMT acid salts are extremely hygroscopic and will not readily crystallize. Its freebase form, although less stable than DMT hemifumarate, is favored by recreational users choosing to vaporize the chemical as it has a lower boiling point.
DMT is a lipophilic compound, with an experimental log P of 2.57.
Synthesis
Biosynthesis
Dimethyltryptamine is an indole alkaloid derived from the shikimate pathway. Its biosynthesis is relatively simple and summarized in the adjacent picture. In plants, the parent amino acid L-tryptophan is produced endogenously, whereas in animals L-tryptophan is an essential amino acid obtained from the diet. Regardless of the source of L-tryptophan, the biosynthesis begins with its decarboxylation by an aromatic amino acid decarboxylase (AADC) enzyme (step 1). The resulting decarboxylated tryptophan analog is tryptamine. Tryptamine then undergoes a transmethylation (step 2): the enzyme indolethylamine-N-methyltransferase (INMT) catalyzes the transfer of a methyl group from the cofactor S-adenosylmethionine (SAM), via nucleophilic attack, to tryptamine. This reaction transforms SAM into S-adenosylhomocysteine (SAH) and gives the intermediate product N-methyltryptamine (NMT). NMT is in turn transmethylated by the same process (step 3) to form the end product N,N-dimethyltryptamine. Tryptamine transmethylation is regulated by two products of the reaction: SAH and DMT, which were shown ex vivo to be among the most potent inhibitors of rabbit INMT activity.
This transmethylation mechanism has been repeatedly and consistently demonstrated by radiolabeling of the SAM methyl group with carbon-14 ((14C-CH3)SAM).
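The three enzymatic steps described above can be rendered as a small, purely illustrative data model (the names are those used in the text; this is a sketch, not chemistry software):

```python
# Illustrative data model of the three-step DMT biosynthesis described above.
# Each step: (substrate, enzyme, cofactor, product); the cofactor is None
# for the decarboxylation step.
PATHWAY = [
    ("L-tryptophan", "AADC", None, "tryptamine"),                     # step 1: decarboxylation
    ("tryptamine", "INMT", "SAM", "N-methyltryptamine"),              # step 2: transmethylation
    ("N-methyltryptamine", "INMT", "SAM", "N,N-dimethyltryptamine"),  # step 3: transmethylation
]

def biosynthesize(start: str) -> str:
    """Walk the pathway in order, applying each step whose substrate matches."""
    current = start
    for substrate, _enzyme, _cofactor, product in PATHWAY:
        if substrate == current:
            current = product
    return current

print(biosynthesize("L-tryptophan"))  # N,N-dimethyltryptamine
```

Because the steps are ordered, starting from tryptamine (the dietary-independent intermediate) reaches the same end product via steps 2 and 3 only.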
Laboratory synthesis
DMT can be synthesized through several possible pathways from different starting materials. One common route is the reaction of indole with oxalyl chloride, followed by reaction with dimethylamine and reduction of the carbonyl functionalities with lithium aluminium hydride to form DMT. The second commonly encountered route is the N,N-dimethylation of tryptamine using formaldehyde, followed by reduction with sodium cyanoborohydride or sodium triacetoxyborohydride. Sodium borohydride can be used but requires a larger excess of reagents and lower temperatures due to its higher selectivity for carbonyl groups as opposed to imines. Procedures using sodium cyanoborohydride and sodium triacetoxyborohydride (presumably created in situ from cyanoborohydride, though this may not be the case due to the presence of water or methanol) also result in the creation of cyanated tryptamine and beta-carboline byproducts of unknown toxicity, while using sodium borohydride in the absence of acid does not. Bufotenine, a plant extract, can also be synthesized into DMT.
Alternatively, an excess of methyl iodide or methyl p-toluenesulfonate and sodium carbonate can be used to over-methylate tryptamine, resulting in the creation of a quaternary ammonium salt, which is then dequaternized (demethylated) in ethanolamine to yield DMT. The same two-step procedure is used to synthesize other N,N-dimethylated compounds, such as 5-MeO-DMT.
Clandestine manufacture
In a clandestine setting, DMT is not typically synthesized due to the lack of availability of the starting materials, namely tryptamine and oxalyl chloride. Instead, it is more often extracted from plant sources using a nonpolar hydrocarbon solvent such as naphtha or heptane, and a base such as sodium hydroxide.
Alternatively, an acid–base extraction is sometimes used instead.
A variety of plants contain DMT at sufficient levels for being viable sources, but specific plants such as Mimosa tenuiflora, Acacia acuminata and Acacia confusa are most often used.
The chemicals involved in the extraction are commonly available. The plant material may be illegal to procure in some countries. The end product (DMT) is illegal in most countries.
Evidence in mammals
In a study published in Science in 1961, Julius Axelrod found an N-methyltransferase enzyme capable of mediating biotransformation of tryptamine into DMT in a rabbit's lung. This finding initiated a still ongoing scientific interest in endogenous DMT production in humans and other mammals. From then on, two major complementary lines of evidence have been investigated: localization and further characterization of the N-methyltransferase enzyme, and analytical studies looking for endogenously produced DMT in body fluids and tissues.
In 2013, researchers reported DMT in the pineal gland microdialysate of rodents.
A study published in 2014 reported the biosynthesis of N,N-dimethyltryptamine (DMT) in the human melanoma cell line SK-Mel-147 including details on its metabolism by peroxidases.
It is assumed that more than half of the amount of DMT produced by the acidophilic cells of the pineal gland is secreted before and during death, the amount being 2.5–3.4 mg/kg. However, this claim by Strassman has been criticized by David Nichols who notes that DMT does not appear to be produced in any meaningful amount by the pineal gland. Removal or calcification of the pineal gland does not induce any of the symptoms caused by removal of DMT. The symptoms presented are consistent solely with reduction in melatonin, which is the pineal gland's known function. Nichols instead suggests that dynorphin and other endorphins are responsible for the reported euphoria experienced by patients during a near-death experience.
In 2014, researchers demonstrated the immunomodulatory potential of DMT and 5-MeO-DMT through the Sigma-1 receptor of human immune cells. This immunomodulatory activity may contribute to significant anti-inflammatory effects and tissue regeneration.
Endogenous DMT
N,N-Dimethyltryptamine (DMT), a psychedelic compound identified endogenously in mammals, is biosynthesized by aromatic L-amino acid decarboxylase (AADC) and indolethylamine-N-methyltransferase (INMT). Studies have investigated brain expression of INMT transcript in rats and humans, coexpression of INMT and AADC mRNA in rat brain and periphery, and brain concentrations of DMT in rats. INMT transcripts were identified in the cerebral cortex, pineal gland, and choroid plexus of both rats and humans via in situ hybridization. Notably, INMT mRNA was colocalized with AADC transcript in rat brain tissues, in contrast to rat peripheral tissues where there existed little overlapping expression of INMT with AADC transcripts. Additionally, extracellular concentrations of DMT in the cerebral cortex of normal behaving rats, with or without the pineal gland, were similar to those of canonical monoamine neurotransmitters including serotonin. A significant increase of DMT levels in the rat visual cortex was observed following induction of experimental cardiac arrest, a finding independent of an intact pineal gland. These results show for the first time that the rat brain is capable of synthesizing and releasing DMT at concentrations comparable to known monoamine neurotransmitters and raise the possibility that this phenomenon may occur similarly in human brains.
The first claimed detection of endogenous DMT in mammals was published in June 1965: German researchers F. Franzen and H. Gross report to have evidenced and quantified DMT, along with its structural analog bufotenin (5-HO-DMT), in human blood and urine. In an article published four months later, the method used in their study was strongly criticized, and the credibility of their results challenged.
Few of the analytical methods used prior to 2001 to measure levels of endogenously formed DMT had enough sensitivity and selectivity to produce reliable results. Gas chromatography, preferably coupled to mass spectrometry (GC-MS), is considered a minimum requirement. A study published in 2005 implemented the most sensitive and selective method yet used to measure endogenous DMT: liquid chromatography–tandem mass spectrometry with electrospray ionization (LC-ESI-MS/MS) allows for limits of detection (LODs) 12- to 200-fold lower than those attained by the best methods employed in the 1970s. The data summarized in the table below are from studies conforming to the abovementioned requirements (abbreviations used: CSF = cerebrospinal fluid; LOD = limit of detection; n = number of samples; ng/L and ng/kg = nanograms (10⁻⁹ g) per litre, and nanograms per kilogram, respectively):
A 2013 study found DMT in microdialysate obtained from a rat's pineal gland, providing evidence of endogenous DMT in the mammalian brain. In 2019 experiments showed that the rat brain is capable of synthesizing and releasing DMT. These results raise the possibility that this phenomenon may occur similarly in human brains.
Detection in human body fluids
DMT may be measured in blood, plasma or urine using chromatographic techniques as a diagnostic tool in clinical poisoning situations or to aid in the medicolegal investigation of suspicious deaths. In general, blood or plasma DMT levels in recreational users of the drug are in the 10–30 μg/L range during the first several hours post-ingestion. Less than 0.1% of an oral dose is eliminated unchanged in the 24-hour urine of humans.
INMT
Before techniques of molecular biology were used to localize indolethylamine N-methyltransferase (INMT), characterization and localization proceeded in tandem: samples of the biological material in which INMT is hypothesized to be active are subjected to an enzyme assay. Those enzyme assays are performed either with a radiolabeled methyl donor like (14C-CH3)SAM, to which known amounts of unlabeled substrates like tryptamine are added, or with the addition of a radiolabeled substrate like (14C)NMT to demonstrate in vivo formation. As qualitative determination of the radioactively tagged product of the enzymatic reaction is sufficient to characterize INMT existence and activity (or lack thereof), analytical methods used in INMT assays are not required to be as sensitive as those needed to directly detect and quantify the minute amounts of endogenously formed DMT. The essentially qualitative method of thin layer chromatography (TLC) was thus used in the vast majority of studies. Robust evidence that INMT can catalyze transmethylation of tryptamine into NMT and DMT was provided in the early 1970s by reverse isotope dilution analysis coupled to mass spectrometry, for rabbit and human lung.
Selectivity rather than sensitivity proved to be a challenge for some TLC methods with the discovery in 1974–1975 that incubating rat blood cells or brain tissue with (14C-CH3)SAM and NMT as substrate mostly yields tetrahydro-β-carboline derivatives, and only negligible amounts of DMT in brain tissue. It was simultaneously realized that the TLC methods used thus far in almost all published studies on INMT and DMT biosynthesis were incapable of resolving DMT from those tetrahydro-β-carbolines. These findings dealt a blow to all previous claims of evidence of INMT activity and DMT biosynthesis in avian and mammalian brain, including in vivo, as they all relied upon the problematic TLC methods: their validity was doubted in replication studies that made use of improved TLC methods and failed to evidence DMT-producing INMT activity in rat and human brain tissues. Published in 1978, the last study attempting to evidence in vivo INMT activity and DMT production in brain (rat) with TLC methods found biotransformation of radiolabeled tryptamine into DMT to be real but "insignificant". The capability of the method used in this latter study to resolve DMT from tetrahydro-β-carbolines was later questioned.
To localize INMT, a qualitative leap was accomplished with the use of modern techniques of molecular biology and immunohistochemistry. In humans, a gene encoding INMT was determined to be located on chromosome 7. Northern blot analyses revealed INMT messenger RNA (mRNA) to be highly expressed in rabbit lung, and in human thyroid, adrenal gland, and lung. Intermediate levels of expression were found in human heart, skeletal muscle, trachea, stomach, small intestine, pancreas, testis, prostate, placenta, lymph node, and spinal cord. Low to very low levels of expression were noted in rabbit brain, and in human thymus, liver, spleen, kidney, colon, ovary, and bone marrow. INMT mRNA expression was absent in human peripheral blood leukocytes, whole brain, and in tissue from seven specific brain regions (thalamus, subthalamic nucleus, caudate nucleus, hippocampus, amygdala, substantia nigra, and corpus callosum). Immunohistochemistry showed INMT to be present in large amounts in glandular epithelial cells of small and large intestines. In 2011, immunohistochemistry revealed the presence of INMT in primate nervous tissue including retina, spinal cord motor neurons, and pineal gland. A 2020 study using in-situ hybridization, a far more accurate tool than northern blot analysis, found mRNA coding for INMT expressed in the human cerebral cortex, choroid plexus, and pineal gland.
Pharmacology
Pharmacodynamics
DMT binds non-selectively with affinities below 0.6 μmol/L to the following serotonin receptors: 5-HT1A, 5-HT1B, 5-HT1D, 5-HT2A, 5-HT2B, 5-HT2C, 5-HT6, and 5-HT7. An agonist action has been determined at 5-HT1A, 5-HT2A and 5-HT2C. Its efficacies at other serotonin receptors remain to be determined. Of special interest is the determination of its efficacy at the human 5-HT2B receptor, as two in vitro assays evidenced DMT's high affinity for this receptor: 0.108 μmol/L and 0.184 μmol/L. This may be of importance because chronic or frequent use of serotonergic drugs showing preferential high affinity and clear agonism at the 5-HT2B receptor has been causally linked to valvular heart disease.
It has also been shown to possess affinity for the dopamine D1, α1-adrenergic, α2-adrenergic, imidazoline-1, and σ1 receptors. Converging lines of evidence established activation of the σ1 receptor at concentrations of 50–100 μmol/L. Its efficacies at the other receptor binding sites are unclear. It has also been shown in vitro to be a substrate for the cell-surface serotonin transporter (SERT) expressed in human platelets, and for the rat vesicular monoamine transporter 2 (VMAT2) transiently expressed in fall armyworm Sf9 cells. DMT inhibited SERT-mediated serotonin uptake into platelets at an average concentration of 4.00 ± 0.70 μmol/L and VMAT2-mediated serotonin uptake at an average concentration of 93 ± 6.8 μmol/L. In addition, DMT is a potent serotonin releasing agent, with a reported potency of 114 nM.
As with other so-called "classical hallucinogens", a large part of DMT psychedelic effects can be attributed to a functionally selective activation of the 5-HT2A receptor. DMT concentrations eliciting 50% of its maximal effect (half maximal effective concentration = EC50) at the human 5-HT2A receptor in vitro are in the 0.118–0.983 μmol/L range. This range of values coincides well with the range of concentrations measured in blood and plasma after administration of a fully psychedelic dose (see Pharmacokinetics).
DMT is one of the few psychedelics not known to produce tolerance to its hallucinogenic effects. The lack of tolerance with DMT may be related to the fact that, unlike other psychedelics such as LSD and DOI, DMT does not desensitize serotonin 5-HT2A receptors in vitro. This may be because DMT is a biased agonist of the serotonin 5-HT2A receptor. More specifically, DMT activates the Gq signaling pathway of the serotonin 5-HT2A receptor without significantly recruiting β-arrestin2. Activation of β-arrestin2 is linked to receptor downregulation and tachyphylaxis. Similarly to DMT, 5-MeO-DMT is a biased agonist of the serotonin 5-HT2A receptor, with minimal β-arrestin2 recruitment, and likewise has been associated with little tolerance to its hallucinogenic effects.
As DMT has been shown to have slightly better efficacy (EC50) at human serotonin 2C receptor than at the 2A receptor, 5-HT2C is also likely implicated in DMT's overall effects. Other receptors such as 5-HT1A and σ1 may also play a role.
In 2009, it was hypothesized that DMT may be an endogenous ligand for the σ1 receptor. The concentration of DMT needed for σ1 activation in vitro (50–100 μmol/L) is similar to the behaviorally active concentration measured in mouse brain of approximately 106 μmol/L. This is at least four orders of magnitude higher than the average concentrations measured in rat brain tissue or human plasma under basal conditions (see Endogenous DMT), so σ1 receptors are likely to be activated only under conditions of high local DMT concentrations. If DMT is stored in synaptic vesicles, such concentrations might occur during vesicular release. To illustrate, while the average concentration of serotonin in brain tissue is in the 1.5–4 μmol/L range, the concentration of serotonin in synaptic vesicles has been measured at 270 mM. Following vesicular release, the resulting concentration of serotonin in the synaptic cleft, to which serotonin receptors are exposed, is estimated to be about 300 μmol/L. Thus, while in vitro receptor binding affinities, efficacies, and average concentrations in tissue or plasma are useful, they are not likely to predict DMT concentrations in the vesicles or at synaptic or intracellular receptors. Under these conditions, notions of receptor selectivity are moot, and it seems probable that most of the receptors identified as targets for DMT (see above) participate in producing its psychedelic effects.
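The scale argument in this paragraph can be made concrete by computing the ratios implied by the serotonin figures quoted above (a sketch using only values stated in the text, all expressed in µmol/L):

```python
# Concentration figures quoted above, all converted to µmol/L.
serotonin_vesicle = 270_000   # 270 mM measured inside synaptic vesicles
serotonin_cleft = 300         # estimated in the synaptic cleft after release
serotonin_tissue_max = 4      # upper end of average brain-tissue range (1.5–4)

# Vesicular serotonin sits roughly three orders of magnitude above cleft levels:
print(serotonin_vesicle // serotonin_cleft)         # 900
# ...and nearly five orders of magnitude above average tissue levels:
print(serotonin_vesicle // serotonin_tissue_max)    # 67500
```

The same reasoning is what motivates the caveat that tissue-average concentrations cannot predict local concentrations at the synapse.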
In September 2020, an in vitro and in vivo study found that DMT present in the ayahuasca infusion promotes neurogenesis, meaning it helps with generating neurons.
Pharmacokinetics
DMT peak level concentrations (Cmax) measured in whole blood after intramuscular (IM) injection (0.7 mg/kg, n = 11) and in plasma following intravenous (IV) administration (0.4 mg/kg, n = 10) of fully psychedelic doses are in the range of around 14 to 154 μg/L and 32 to 204 μg/L, respectively.
The corresponding molar concentrations of DMT are therefore in the range of 0.074–0.818 μmol/L in whole blood and 0.170–1.08 μmol/L in plasma. However, several studies have described active transport and accumulation of DMT into rat and dog brains following peripheral administration.
Similar active transport and accumulation processes likely occur in human brains and may concentrate DMT in the brain by several-fold or more (relative to blood), resulting in local concentrations in the micromolar or higher range. Such concentrations would be commensurate with serotonin brain tissue concentrations, which have been consistently determined to be in the 1.5–4 μmol/L range.
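The mass-to-molar conversion underlying these figures is simple unit arithmetic; a minimal sketch, assuming a DMT molar mass of about 188.27 g/mol (a value not stated in the text above):

```python
# Converting measured mass concentrations (µg/L) to molar concentrations (µmol/L).
DMT_MOLAR_MASS = 188.27  # g/mol for N,N-dimethyltryptamine (assumed here)

def ug_per_l_to_umol_per_l(c: float) -> float:
    # µg/L divided by g/mol gives µmol/L directly: the 10^-6 prefixes cancel.
    return c / DMT_MOLAR_MASS

# Whole blood after IM injection, 14–154 µg/L:
print(round(ug_per_l_to_umol_per_l(14), 3), round(ug_per_l_to_umol_per_l(154), 3))  # 0.074 0.818
# Plasma after IV administration, 32–204 µg/L:
print(round(ug_per_l_to_umol_per_l(32), 2), round(ug_per_l_to_umol_per_l(204), 2))  # 0.17 1.08
```

The outputs reproduce the molar ranges quoted for whole blood and plasma.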
Closely coinciding with peak psychedelic effects, the mean time to reach peak concentrations (Tmax) was determined to be 10–15 minutes in whole blood after IM injection, and 2 minutes in plasma after IV administration. When taken orally in an ayahuasca decoction or in freeze-dried ayahuasca gel caps, DMT Tmax is considerably delayed: 107.59 ± 32.5 minutes and 90–120 minutes, respectively.
The pharmacokinetics for vaporizing DMT have not been studied or reported.
Due to its lipophilicity, DMT easily crosses the blood–brain barrier and enters the central nervous system.
Society and culture
Legal status
International law
Internationally, DMT is illegal to possess without authorisation, exemption or licence, although ayahuasca and similar DMT-containing brews and preparations are not under international control. DMT is controlled by the Convention on Psychotropic Substances at the international level. The Convention makes it illegal to possess, buy, sell, distribute or dispense it without a licence.
By continent and country
In some countries, ayahuasca is a forbidden, controlled, or regulated substance, while in other countries it is not controlled, or its production, consumption, and sale are allowed to various degrees.
Asia
Israel – DMT is an illegal substance; production, trade and possession are prosecuted as crimes.
India – DMT is illegal to produce, transport, trade in, or possess, with a minimum prison sentence of ten years.
Europe
France – DMT, along with most of its plant sources, is classified as a stupéfiant (narcotic).
Germany – DMT is prohibited as a class I drug.
Republic of Ireland – DMT is an illegal Schedule 1 drug under the Misuse of Drugs Acts. An attempt in 2014 by a member of the Santo Daime church to gain a religious exemption to import the drug failed.
Latvia — DMT is prohibited as a Schedule I drug.
Netherlands – The drug is banned as it is classified as a List 1 Drug per the Opium Law. Production, trade and possession of DMT are prohibited.
Russia – Classified as a Schedule I narcotic, including its derivatives (see sumatriptan and zolmitriptan).
Serbia – DMT, along with stereoisomers and salts is classified as List 4 (Psychotropic substances) substance according to Act on Control of Psychoactive Substances.
Sweden – DMT is considered a Schedule 1 drug. The Swedish supreme court concluded in 2018 that possession of processed plant material containing a significant amount of DMT is illegal; possession of such unprocessed plant material, however, was ruled legal.
United Kingdom – DMT is classified as a Class A drug.
Belgium – DMT cannot be possessed, sold, purchased or imported. Usage is not specifically prohibited, but since usage implies possession one could be prosecuted that way.
North America
Canada – DMT is classified as a Schedule III drug under the Controlled Drugs and Substances Act, but is legal for religious groups to use. In 2017 the Santo Daime Church Céu do Montréal received religious exemption to use ayahuasca as a sacrament in their rituals.
United States – DMT is classified in the United States as a Schedule I drug under the Controlled Substances Act of 1970.
In December 2004, the Supreme Court lifted a stay, thereby allowing the Brazil-based União do Vegetal (UDV) church to use a decoction containing DMT in their Christmas services that year. This decoction is a tea made from boiled leaves and vines, known as hoasca within the UDV, and ayahuasca in different cultures. In Gonzales v. O Centro Espírita Beneficente União do Vegetal, the Supreme Court heard arguments on 1 November 2005, and unanimously ruled in February 2006 that the U.S. federal government must allow the UDV to import and consume the tea for religious ceremonies under the 1993 Religious Freedom Restoration Act.
In September 2008, the three Santo Daime churches filed suit in federal court to gain legal status to import DMT-containing ayahuasca tea. The case, Church of the Holy Light of the Queen v. Mukasey, presided over by U.S. District Judge Owen M. Panner, was ruled in favor of the Santo Daime church. As of 21 March 2009, a federal judge says members of the church in Ashland can import, distribute and brew ayahuasca. Panner issued a permanent injunction barring the government from prohibiting or penalizing the sacramental use of "Daime tea". Panner's order said activities of The Church of the Holy Light of the Queen are legal and protected under freedom of religion. His order prohibits the federal government from interfering with and prosecuting church members who follow a list of regulations set out in his order.
Oceania
New Zealand – DMT is classified as a Class A drug under the Misuse of Drugs Act 1975.
Australia – DMT is listed as a Schedule 9 prohibited substance in Australia under the Poisons Standard (October 2015). A Schedule 9 drug is outlined in the Poisons Act 1964 as "Substances which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of the CEO." Between 2011 and 2012, the Australian federal government was considering changes to the Australian Criminal Code that would classify any plants containing any amount of DMT as "controlled plants". DMT itself was already controlled under current laws. The proposed changes included other similar blanket bans for other substances, such as a ban on any and all plants containing mescaline or ephedrine. The proposal was not pursued after political embarrassment on realisation that this would make the official Floral Emblem of Australia, Acacia pycnantha (Golden Wattle), illegal. The Therapeutic Goods Administration and federal authority had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native and/or religious people). Under the Misuse of Drugs Act 1981 6.0 g (3/16 oz) of DMT is considered enough to determine a court of trial and 2.0 g (1/16 oz) is considered intent to sell and supply.
Black market
Electronic cigarette cartridges filled with DMT started to be sold on the black market in 2018.
See also
Dimethyltryptamine-N-oxide
Psychedelic drug
List of psychoactive plants
MPMI
Serotonergic psychedelic
Psychoplastogen
Alexander Shulgin
SN-22
Rick Strassman
References
External links
DMT chapter from TiHKAL
5-HT2A agonists
Ayahuasca
Biased ligands
Dimethylamino compounds
Entheogens
Experimental antidepressants
Experimental anxiolytics
Experimental hallucinogens
Psychedelic tryptamines
Serotonin receptor agonists
Serotonin releasing agents
Sigma agonists
Tryptamine alkaloids
Soil compaction

In geotechnical engineering, soil compaction is the process in which stress applied to a soil causes densification as air is displaced from the pores between the soil grains. When stress is applied that causes densification due to water (or other liquid) being displaced from between the soil grains, then consolidation, not compaction, has occurred. Normally, compaction is the result of heavy machinery compressing the soil, but it can also occur due to the passage of, for example, animal feet.
In soil science and agronomy, soil compaction is usually a combination of both engineering compaction and consolidation, so may occur due to a lack of water in the soil, the applied stress being internal suction due to water evaporation as well as due to passage of animal feet. Affected soils become less able to absorb rainfall, thus increasing runoff and erosion. Plants have difficulty in compacted soil because the mineral grains are pressed together, leaving little space for air and water, which are essential for root growth. Burrowing animals also find it a hostile environment, because the denser soil is more difficult to penetrate. The ability of a soil to recover from this type of compaction depends on climate, mineralogy and fauna. Soils with high shrink–swell capacity, such as vertisols, recover quickly from compaction where moisture conditions are variable (dry spells shrink the soil, causing it to crack). But clays such as kaolinite, which do not crack as they dry, cannot recover from compaction on their own unless they host ground-dwelling animals such as earthworms—the Cecil soil series is an example.
Before soils can be compacted in the field, some laboratory tests are required to determine their engineering properties. Among these, the maximum dry density and the optimum moisture content are vital, as they specify the density to be achieved during compaction in the field.
In construction
Soil compaction is a vital part of the construction process. It is used for support of structural entities such as building foundations, roadways, walkways, and earth retaining structures, to name a few. For a given soil type, certain properties may make it more or less suitable to perform adequately in a particular circumstance. In general, the preselected soil should have adequate strength, be relatively incompressible so that future settlement is not significant, be stable against volume change as water content or other factors vary, be durable and safe against deterioration, and possess proper permeability.
When an area is to be filled or backfilled the soil is placed in layers called lifts. The ability of the first fill layers to be properly compacted will depend on the condition of the natural material being covered. If unsuitable material is left in place and backfilled, it may compress over a long period under the weight of the earth fill, causing settlement cracks in the fill or in any structure supported by the fill. In order to determine if the natural soil will support the first fill layers, an area can be proofrolled. Proofrolling consists of utilizing a piece of heavy construction equipment to roll across the fill site and watching for deflections to be revealed. These areas will be indicated by the development of rutting, pumping, or ground weaving.
To ensure adequate soil compaction is achieved, project specifications will indicate the required soil density or degree of compaction that must be achieved. These specifications are generally recommended by a geotechnical engineer in a geotechnical engineering report.
The soil type—that is, grain-size distribution, shape of the soil grains, specific gravity of soil solids, and amount and type of clay minerals present—has a great influence on the maximum dry unit weight and optimum moisture content. It also has a great influence on how the materials should be compacted in given situations. Compaction is accomplished by use of heavy equipment. In sands and gravels, the equipment usually vibrates, to cause re-orientation of the soil particles into a denser configuration. In silts and clays, a sheepsfoot roller is frequently used, to create small zones of intense shearing, which drives air out of the soil.
Determination of adequate compaction is done by determining the in-situ density of the soil and comparing it to the maximum density determined by a laboratory test. The most commonly used laboratory test is called the Proctor compaction test and there are two different methods in obtaining the maximum density. They are the standard Proctor and modified Proctor tests; the modified Proctor is more commonly used. For small dams, the standard Proctor may still be the reference.
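The comparison of in-situ density against the laboratory maximum can be sketched numerically. A minimal Python example with illustrative (made-up) field and laboratory values, computing the field dry unit weight from a bulk measurement and then the relative compaction against a Proctor maximum:

```python
def dry_unit_weight(bulk_unit_weight: float, moisture_content: float) -> float:
    """Dry unit weight: gamma_d = gamma / (1 + w), w as a decimal."""
    return bulk_unit_weight / (1.0 + moisture_content)

def relative_compaction(field_dry: float, lab_max_dry: float) -> float:
    """Percent of the laboratory (Proctor) maximum dry unit weight."""
    return 100.0 * field_dry / lab_max_dry

# Illustrative field density test: bulk unit weight 20.2 kN/m³ at 12%
# moisture, against an assumed Proctor maximum dry unit weight of 18.8 kN/m³.
gamma_d = dry_unit_weight(20.2, 0.12)    # ≈ 18.04 kN/m³
rc = relative_compaction(gamma_d, 18.8)  # ≈ 95.9 %
print(f"dry unit weight = {gamma_d:.2f} kN/m³, relative compaction = {rc:.1f} %")
```

Project specifications commonly state the acceptance criterion as a minimum relative compaction (for example, 95% of the laboratory maximum), which is what the final percentage here would be checked against.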
While soil under structures and pavements needs to be compacted, it is important after construction to decompact areas to be landscaped so that vegetation can grow.
Compaction methods
There are several means of achieving compaction of a material. Some are more appropriate for soil compaction than others, while some techniques are only suitable for particular soils or for soils in particular conditions. Some are better suited to compaction of non-soil materials such as asphalt. Generally, those that can apply significant amounts of shear as well as compressive stress are most effective.
The available techniques can be classified as:
Static – a large stress is slowly applied to the soil and then released.
Impact – the stress is applied by dropping a large mass onto the surface of the soil.
Vibrating – a stress is applied repeatedly and rapidly via a mechanically driven plate or hammer. Often combined with rolling compaction (see below).
Gyrating – a static stress is applied and maintained in one direction while the soil is subjected to a gyratory motion about the axis of static loading. Limited to laboratory applications.
Rolling – a heavy cylinder is rolled over the surface of the soil. Commonly used on sports pitches. Roller-compactors are often fitted with vibratory devices to enhance their effectiveness.
Kneading – shear is applied by alternating movement in adjacent positions. An example, combined with rolling compaction, is the 'sheepsfoot' roller used in waste compaction at landfills.
The construction plant available to achieve compaction is extremely varied and is described elsewhere.
Test methods in laboratory
Soil compactors are used in laboratory test methods that determine the relationship between molding water content and dry unit weight of soils. Soil placed as engineering fill is compacted to a dense state to obtain satisfactory engineering properties such as shear strength, compressibility, or permeability. In addition, foundation soils are often compacted to improve their engineering properties. Laboratory compaction tests provide the basis for determining the percent compaction and molding water content needed to achieve the required engineering properties, and for controlling construction to assure that the required compaction and water contents are achieved. Test methods such as EN 13286-2, EN 13286-47, ASTM D698, ASTM D1557, AASHTO T99, AASHTO T180, AASHTO T193, and BS 1377:4 provide soil compaction testing procedures.
See also
Soil compaction (agriculture)
Soil degradation
Compactor
Earthwork
Soil structure
Aeration
Shear strength (soil)
References
Soil science
Earthworks (engineering)
Soil degradation
Kodecyte

A kodecyte (ko•de•cyte) is a living cell that has been modified (koded) by the incorporation of one or more function-spacer-lipid constructs (FSL constructs) to gain a new or novel biological, chemical or technological function. The cell is modified by the lipid tail of the FSL construct incorporating into the bilipid membrane of the cell.
All kodecytes retain their normal vitality and functionality while gaining the new function of the inserted FSL constructs. The combination of dispersibility in biocompatible media, spontaneous incorporation into cell membranes, and apparent low toxicity, makes FSL constructs suitable as research tools and for the development of new diagnostic and therapeutic applications.
The technology
Kode FSL constructs consist of three components; a functional moiety (F), a spacer (S) and a lipid (L).
Function groups on FSL constructs that can be used to create kodecytes include saccharides (including ABO blood group-related determinants, sialic acids, and hyaluronan polysaccharides), fluorophores, biotin, and a range of peptides.
Although kodecytes are created by modifying natural cells, they are different from natural cells. For example, FSL constructs, influenced by the composition of the lipid tail, are laterally mobile in the membrane and some FSL constructs may also cluster due to the characteristics of the functional group (F). As FSL constructs are anchored in the membrane via a lipid tail (L) it is believed they do not participate in signal transduction, but may be designed to act as agonists or antagonists of the initial binding event. FSL constructs will not actively pass through the plasma membrane but may enter the cell via membrane invagination and endocytosis.
The "koding" of cells is stable (subject to the rate of turnover of the membrane components). FSL constructs will remain in the membrane of inactive cells (e.g. red blood cells) for the life of the cell provided it is stored in lipid free media. In the peripheral circulation FSL constructs are observed to be lost from red cell kodecytes at a rate of about 1% per hour. The initial "koding" dose and the minimum level required for detection determine how long the presence of "kodecytes" in the circulation can be monitored. For red blood "kodecytes" reliable monitoring of the presence of the "kodecytes" for up to 3 days post intravenous administration has been demonstrated in small mammals.
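The reported ~1% per hour loss from circulating red cell kodecytes can be modelled as first-order decay to estimate a monitoring window. A sketch assuming a hypothetical detection threshold of 25% of the initial koding level; the actual threshold depends on the initial dose and assay sensitivity, as the text notes:

```python
import math

def hours_until_threshold(initial_fraction: float, detection_fraction: float,
                          loss_per_hour: float = 0.01) -> float:
    """Hours until an exponential ~1%/h loss drops the label from
    initial_fraction to detection_fraction of the koding level."""
    retained = 1.0 - loss_per_hour
    return math.log(detection_fraction / initial_fraction) / math.log(retained)

# With an assumed detection limit of 25% of the initial label,
# a 1%/h loss allows roughly 138 h (~5.7 days) of monitoring.
t = hours_until_threshold(1.0, 0.25)
print(f"{t:.0f} h ≈ {t / 24:.1f} days")
```

A higher detection threshold or a larger hourly loss shortens the window, which is consistent with the roughly 3-day monitoring demonstrated in small mammals.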
The spacer (S) of a FSL construct has been selected so as to have negligible cross-reactivity with serum antibodies, so kodecytes can be used with undiluted serum. By increasing the length of the FSL spacer from 1.9 to 7.2 nm it has been shown that sensitivity can improve two-fold in red cell agglutination based kodecyte assays. However, increasing the size of the spacer further from 7.2 to 11.5 nm did not result in any further enhancement.
Technology video

A short video explaining how Kode Technology works is available at https://www.youtube.com/watch?v=TIbjAl5KYpA.
Methodology
FSL constructs, when in solution (saline) and in contact with cells, will spontaneously incorporate into cell membranes. The methodology involves simply preparing a solution of FSL constructs in the range of 1–1000 μg/mL, with the concentration used determining the amount of antigen present on the kodecyte. The ability to control antigen levels on the outside of a kodecyte has allowed for manufacture of quality control sensitivity systems and serologic teaching kits incorporating the entire range of serologic agglutination reactions. The actual concentration will depend on the construct and the quantity of construct required in the membrane. One part of FSL solution is added to one part of cells (up to a 100% suspension) and they are incubated at a set temperature, chosen according to the temperature compatibility of the cells being modified. The higher the temperature, the faster the rate of FSL insertion into the membrane. For red blood cells, incubation for 2 hours at 37 °C achieves >95% FSL insertion, with at least 50% insertion being achieved within 20 minutes. In general, for carbohydrate-based FSL insertion into red blood cells, incubation for 4 hours at room temperature or 20 hours at 4 °C is similar to one hour at 37 °C. The resultant kodecytes do not require washing; however, this option should be considered if an excess of FSL construct is used in the koding process.
Kodecytes can also be created in vivo by injecting constructs directly into the circulation. However, this process will modify all cells in contact with the constructs and usually requires significantly more construct than in vitro preparation, as FSL constructs will preferentially associate with free lipids.
The in vivo creation of kodecytes is untargeted and FSL constructs will insert into all cells non-specifically, but may show a preference for some cell types.
Diagnostic serological analyses including flow cytometry and scanning electron microscopy usually cannot distinguish kodecytes from unmodified cells. However, when compared with natural cells there does appear to be a difference between IgM and IgG antibody reactivities when the functional group (F) is a monomeric peptide antigen. IgM antibodies appear to react poorly with kodecytes made with FSL peptides. Furthermore, FSL constructs may present a restricted antigen/epitope and may not react with a monoclonal antibody unless the FSL construct and monoclonal antibody are complementary.
Kodecytes can be studied using standard histological techniques. Kodecytes can be fixed after koding, provided the functional moiety (F) of the FSL construct is compatible with the fixative. However, freeze-cut or formalin-fixed freeze-cut tissues are required, because the lipid-based FSL constructs (and other glycolipids) will be leached from kodecytes in paraffin-embedded samples during the deparaffinization steps.
Nomenclature
Koded membranes are described by the construct and the concentration of FSL (in μg/mL) used to create them. For example, kodecytes created with a 100 μg/mL solution of FSL-A would be termed A100 kodecytes. If multiple FSL constructs were used then the definition is expanded accordingly, e.g. A100+B300 kodecytes are created with a solution containing 100 μg/mL solution of FSL-A and 300 μg/mL solution of FSL-B. The "+" symbol is used to separate the construct mixes, e.g. A100+B300. If FSL concentrations are constant then the μg/mL component of the terminology can be dropped, e.g. A kodecytes. Alternatively unrelated constructs such as FSL-A and FSL-biotin will create A+biotin kodecytes, etc. If different cells are used in the same study then inclusion of the cell type into the name is recommended, e.g. RBC A100 kodecytes vs WBC A100 kodecytes, or platelet A100 kodecytes, etc.
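The naming rules above are mechanical enough to express as a small helper. A sketch in Python; the function name is illustrative and not part of the Kode nomenclature itself:

```python
def kodecyte_name(constructs: dict, cell_type: str = "") -> str:
    """Build a kodecyte name from {functional group: concentration in µg/mL}:
    each construct gets its koding concentration appended, parts are joined
    with '+', and an optional cell type is prefixed."""
    body = "+".join(f"{func}{conc}" for func, conc in constructs.items())
    prefix = f"{cell_type} " if cell_type else ""
    return f"{prefix}{body} kodecytes"

print(kodecyte_name({"A": 100}))                   # A100 kodecytes
print(kodecyte_name({"A": 100, "B": 300}))         # A100+B300 kodecytes
print(kodecyte_name({"A": 100}, cell_type="RBC"))  # RBC A100 kodecytes
```

Dropping the µg/mL component when concentrations are constant (e.g. "A kodecytes") would simply correspond to passing an empty-string concentration.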
Applications
Kode Technology has been used for the in vitro modification of murine embryos, spermatozoa, zebrafish, epithelial/endometrial cells and red blood cells to create cellular quality control systems, serologic (teaching) kits, rare antigen expression, the addition of infectious markers onto cells, modified cell adhesion/interaction/separation/immobilisation, and labelling. It has also been infused intravascularly for in vivo modification of blood cells and neutralisation of circulating antibodies, and in in vivo imaging of circulating bone marrow kodecytes in zebrafish. Kode FSL constructs have also been applied to non-biological surfaces such as modified cellulose, paper, silica, polymers, natural fibers, glass and metals, and have been shown to label these surfaces ultra-fast.
See also
Function-Spacer-Lipid construct
Kodevirion
References
External links
Kodeycte.com
How Kode Technology works
Applications of kodecytes
CSL application of kodecytes
Biochemistry
Biotechnology
Laboratory techniques
Molecular biology techniques
Protein methods
Nanotechnology
Phenobarbital

Phenobarbital, also known as phenobarbitone or phenobarb, sold under the brand name Luminal among others, is a medication of the barbiturate type. It is recommended by the World Health Organization (WHO) for the treatment of certain types of epilepsy in developing countries. In the developed world, it is commonly used to treat seizures in young children, while other medications are generally used in older children and adults. It is also used for veterinary purposes.
It may be administered by slow intravenous infusion (IV infusion), intramuscularly (IM), or orally (swallowed by mouth). Subcutaneous administration is not recommended. The IV or IM (injectable forms) may be used to treat status epilepticus if other drugs fail to achieve satisfactory results. Phenobarbital is occasionally used to treat insomnia, anxiety, and benzodiazepine withdrawal (as well as withdrawal from certain other drugs in specific circumstances), and prior to surgery as an anxiolytic and to induce sedation. It usually begins working within five minutes when used intravenously and half an hour when administered orally. Its effects last for between four hours and two days.
Potentially serious side effects include a decreased level of consciousness and respiratory depression. There is potential for both abuse and withdrawal following long-term use. It may also increase the risk of suicide.
It is pregnancy category D in Australia, meaning that it may cause harm when taken during pregnancy. If used during breastfeeding it may result in drowsiness in the baby. Phenobarbital works by increasing the activity of the inhibitory neurotransmitter GABA.
Phenobarbital was discovered in 1912 and is the oldest still commonly used anti-seizure medication. It is on the World Health Organization's List of Essential Medicines.
Medical uses
Phenobarbital is used in the treatment of all types of seizures, except absence seizures. It is no less effective at seizure control than phenytoin, but phenobarbital is not as well tolerated. Phenobarbital may provide a clinical advantage over carbamazepine for treating partial onset seizures. Carbamazepine may provide a clinical advantage over phenobarbital for generalized onset tonic-clonic seizures.
The first-line drugs for treatment of status epilepticus are benzodiazepines, such as lorazepam, clonazepam, midazolam, or diazepam. If these fail, then phenytoin may be used, with phenobarbital being an alternative in the US (favored in infants), but used only third-line in the UK. Failing that, the only treatment is anaesthesia in intensive care. The World Health Organization (WHO) gives phenobarbital a first-line recommendation in the developing world and it is commonly used there.
Phenobarbital is the first-line choice for the treatment of neonatal seizures. Concerns that neonatal seizures in themselves could be harmful make most physicians treat them aggressively. No reliable evidence, though, supports this approach.
Phenobarbital is sometimes used for alcohol detoxification and benzodiazepine detoxification for its sedative and anti-convulsant properties. The benzodiazepines chlordiazepoxide (Librium) and oxazepam (Serax) have largely replaced phenobarbital for detoxification.
Phenobarbital is useful for insomnia and anxiety.
Other uses
Phenobarbital properties can effectively reduce tremors and seizures associated with abrupt withdrawal from benzodiazepines.
Phenobarbital is occasionally prescribed in low doses to aid in the conjugation of bilirubin in people with Crigler–Najjar syndrome, type II, or in people with Gilbert's syndrome. In infants suspected of neonatal biliary atresia, phenobarbital is used in preparation for a 99mTc-IDA hepatobiliary (HIDA; hepatobiliary 99mTc-iminodiacetic acid) study that differentiates atresia from hepatitis or cholestasis.
In massive doses, phenobarbital is prescribed to terminally ill people to allow them to end their life through physician-assisted suicide.
Like other barbiturates, phenobarbital can be used recreationally, but this is reported to be relatively infrequent.
The synthesis of a photoswitchable analog (DASA-barbital) and phenobarbital has been described for use as a research compound in photopharmacology.
Side effects
Sedation and hypnosis are the principal side effects (occasionally, they are also the intended effects) of phenobarbital. Central nervous system effects, such as dizziness, nystagmus and ataxia, are also common. In elderly patients, it may cause excitement and confusion, while in children, it may result in paradoxical hyperactivity.
Phenobarbital is a cytochrome P450 hepatic enzyme inducer. It binds transcription factor receptors that activate cytochrome P450 transcription, thereby increasing its amount and thus its activity. Caution is to be used with children. Among anti-convulsant drugs, behavioural disturbances occur most frequently with clonazepam and phenobarbital.
Contraindications
Acute intermittent porphyria, hypersensitivity to any barbiturate, prior dependence on barbiturates, severe respiratory insufficiency (as with chronic obstructive pulmonary disease), severe liver failure, pregnancy, and breastfeeding are contraindications for phenobarbital use.
Overdose
Phenobarbital causes a depression of the body's systems, mainly the central and peripheral nervous systems. Thus, the main characteristic of phenobarbital overdose is a "slowing" of bodily functions, including decreased consciousness (even coma), bradycardia, bradypnea, hypothermia, and hypotension (in massive overdoses). Overdose may also lead to pulmonary edema and acute renal failure as a result of shock and can result in death.
The electroencephalogram (EEG) of a person with phenobarbital overdose may show a marked decrease in electrical activity, to the point of mimicking brain death. This is due to profound depression of the central nervous system and is usually reversible.
Treatment of phenobarbital overdose is supportive, and mainly consists of the maintenance of airway patency (through endotracheal intubation and mechanical ventilation), correction of bradycardia and hypotension (with intravenous fluids and vasopressors, if necessary), and removal of as much drug as possible from the body. In very large overdoses, multi-dose activated charcoal is a mainstay of treatment as the drug undergoes enterohepatic recirculation. Urine alkalization (achieved with sodium bicarbonate) enhances renal excretion. Hemodialysis is effective in removing phenobarbital from the body and may reduce its half-life by up to 90%. No specific antidote for barbiturate poisoning is available.
Mechanism of action
Phenobarbital acts as an allosteric modulator which extends the amount of time the chloride ion channel is open by interacting with GABAA receptor subunits. Through this action, phenobarbital increases the flow of chloride ions into the neuron which decreases the excitability of the post-synaptic neuron. Hyperpolarizing this post-synaptic membrane leads to a decrease in the general excitatory aspects of the post-synaptic neuron. By making it harder to depolarize the neuron, the threshold for the action potential of the post-synaptic neuron will be increased.
Direct blockade of glutamatergic AMPA and kainate receptors are also believed to contribute to the hypnotic/anticonvulsant effect that is observed with phenobarbital.
Pharmacokinetics
Phenobarbital has an oral bioavailability of about 90%. Peak plasma concentrations (Cmax) are reached eight to 12 hours after oral administration. It is one of the longest-acting barbiturates available – it remains in the body for a very long time (half-life of two to seven days) and has very low protein binding (20 to 45%). Phenobarbital is metabolized by the liver, mainly through hydroxylation and glucuronidation and induces many isozymes of the cytochrome P450 system. Cytochrome P450 2B6 (CYP2B6) is specifically induced by phenobarbital via the CAR/RXR nuclear receptor heterodimer. It is excreted primarily by the kidneys.
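The long half-life can be illustrated with a simple first-order elimination calculation. A sketch using the 2–7 day range quoted above, for a single dose with no enzyme induction or dialysis considered:

```python
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """First-order elimination: fraction = 0.5 ** (t / t_half)."""
    return 0.5 ** (t_days / half_life_days)

# With the quoted 2–7 day half-life, the amount of drug left one week
# after a single dose spans a wide range.
for t_half in (2.0, 7.0):
    frac = fraction_remaining(7.0, t_half)
    print(f"t1/2 = {t_half:.0f} d -> {frac:.1%} remains after 7 d")
# t1/2 = 2 d -> 8.8% remains after 7 d
# t1/2 = 7 d -> 50.0% remains after 7 d
```

The same relation explains why hemodialysis matters in overdose: cutting the effective half-life by up to 90% (as noted in the overdose section) multiplies the exponent by ten, collapsing days of elimination into hours.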
History
The first barbiturate drug, barbital, was synthesized in 1902 by German chemists Emil Fischer and Joseph von Mering and was first marketed as Veronal by Friedr. Bayer et comp. By 1904, several related drugs, including phenobarbital, had been synthesized by Fischer. Phenobarbital was brought to market in 1912 by the drug company Bayer as the brand Luminal. It remained a commonly prescribed sedative and hypnotic until the introduction of benzodiazepines in the 1960s.
Phenobarbital's soporific, sedative and hypnotic properties were well known in 1912, but it was not yet known to be an effective anti-convulsant. The young doctor Alfred Hauptmann gave it to his epilepsy patients as a tranquilizer and discovered their seizures were susceptible to the drug. Hauptmann performed a careful study of his patients over an extended period. Most of these patients were using the only effective drug then available, bromide, which had terrible side effects and limited efficacy. On phenobarbital, their epilepsy was much improved: The worst patients had fewer and lighter seizures and some patients became seizure-free. In addition, they improved physically and mentally as bromides were removed from their regimen. Patients who had been institutionalised due to the severity of their epilepsy were able to leave and, in some cases, resume employment. Hauptmann dismissed concerns that its effectiveness in stalling seizures could lead to patients developing a build-up that needed to be "discharged". As he expected, withdrawal of the drug led to an increase in seizure frequency – it was not a cure. The drug was quickly adopted as the first widely effective anti-convulsant, though World War I delayed its introduction in the U.S.
In 1939, a German family asked Adolf Hitler to have their disabled son killed; the five-month-old boy was given a lethal dose of Luminal after Hitler sent his own doctor to examine him. A few days later 15 psychiatrists were summoned to Hitler's Chancellery and directed to commence a clandestine program of involuntary euthanasia.
In 1940, at a clinic in Ansbach, Germany, around 50 intellectually disabled children were injected with Luminal and killed that way. A plaque was erected in their memory in 1988 in the local hospital at Feuchtwanger Strasse 38, although a newer plaque does not mention that patients were killed using barbiturates on site. Luminal was used in the Nazi children's euthanasia program until at least 1943.
Phenobarbital was used to treat neonatal jaundice by increasing liver metabolism and thus lowering bilirubin levels. In the 1950s, phototherapy was discovered, and became the standard treatment.
Phenobarbital was used for over 25 years as prophylaxis in the treatment of febrile seizures. Although an effective treatment in preventing recurrent febrile seizures, it had no positive effect on patient outcome or risk of developing epilepsy. The treatment of simple febrile seizures with anticonvulsant prophylaxis is no longer recommended.
Society and culture
Names
Phenobarbital is the INN and phenobarbitone is the BAN.
Synthesis
Barbiturate drugs are obtained via condensation reactions between a derivative of diethyl malonate and urea in the presence of a strong base. The synthesis of phenobarbital uses this common approach as well, but differs in the way in which the malonate derivative is obtained. The reason for this difference is that aryl halides do not typically undergo nucleophilic substitution in malonic ester synthesis in the same way as aliphatic organosulfates or halocarbons do. To overcome this lack of chemical reactivity, two dominant synthetic approaches using benzyl cyanide as a starting material have been developed:
The first of these methods consists of a Pinner reaction of benzyl cyanide, giving phenylacetic acid ethyl ester. Subsequently, this ester undergoes cross Claisen condensation using diethyl oxalate, giving diethyl ester of phenyloxobutandioic acid. Upon heating this intermediate easily loses carbon monoxide, yielding diethyl phenylmalonate. Malonic ester synthesis using ethyl bromide leads to the formation of α-phenyl-α-ethylmalonic ester. Finally, a condensation reaction with urea gives phenobarbital.
The second approach utilizes diethyl carbonate in the presence of a strong base to give α-phenylcyanoacetic ester. Alkylation of this ester using ethyl bromide proceeds via a nitrile anion intermediate to give the α-phenyl-α-ethylcyanoacetic ester. This product is then further converted into the 4-imino derivative upon condensation with urea. Finally, acidic hydrolysis of the resulting product gives phenobarbital.
A new synthetic route based on diethyl 2-ethyl-2-phenylmalonate and urea has been described.
Regulation
In the United States, phenobarbital is a Schedule IV non-narcotic (depressant) controlled substance (ACSCN 2285) under the Controlled Substances Act of 1970. Like a few other barbiturates, at least one benzodiazepine, and codeine, dionine, and dihydrocodeine at low concentrations, it also appears in exempt prescription preparations, and it was part of at least one exempt OTC combination drug that is now more tightly regulated for its ephedrine content. In those ephedrine tablets for asthma, the phenobarbitone/phenobarbital is present in subtherapeutic doses which add up to an effective dose, intended to counter the overstimulation and possible seizures from a deliberate overdose. Such products are now regulated at the federal and state level variously as: a restricted OTC medicine and/or watched precursor; an uncontrolled but watched/restricted prescription drug and watched precursor; a Schedule II, III, IV, or V prescription-only controlled substance and watched precursor; or a Schedule V exempt non-narcotic restricted/watched OTC medicine. In the last case, further regulations may apply at the county/parish, town, city, or district level; the pharmacist can also choose not to sell the product; and photo ID and signing a register are required.
Selected overdoses
A mysterious woman, known as the Isdal Woman, was found dead in Bergen, Norway, on 29 November 1970. Her death was caused by some combination of burns, phenobarbital, and carbon monoxide poisoning; many theories about her death have been posited, and it is believed that she may have been a spy.
British veterinarian Donald Sinclair, better known as the character Siegfried Farnon in the "All Creatures Great and Small" book series by James Herriot, committed suicide at the age of 84 by injecting himself with an overdose of phenobarbital. Activist Abbie Hoffman also committed suicide by consuming phenobarbital, combined with alcohol, on 12 April 1989; the residue of around 150 pills was found in his body at autopsy.
Thirty-nine members of the Heaven's Gate UFO cult committed mass suicide in March 1997 by drinking a lethal dose of phenobarbital and vodka "and then lay down to die" hoping to enter an alien spacecraft.
Veterinary uses
Phenobarbital is one of the first-line drugs of choice to treat epilepsy in dogs, as well as cats.
It is also used to treat feline hyperesthesia syndrome in cats when anti-obsessional therapies prove ineffective.
It may also be used to treat seizures in horses when benzodiazepine treatment has failed or is contraindicated.
References
Barbiturates
Anxiolytics
CYP3A4 inducers
Hypnotics
World Health Organization essential medicines
IARC Group 2B carcinogens
Drugs developed by Bayer
Wikipedia medicine articles ready to translate | Phenobarbital | Biology | 3,581 |
51,873,625 | https://en.wikipedia.org/wiki/Welcome%20to%20the%20Universe | Welcome to the Universe: An Astrophysical Tour is a popular science book by Neil deGrasse Tyson, Michael A. Strauss, and J. Richard Gott, based on an introductory astrophysics course they co-taught at Princeton University. The book was published by the Princeton University Press on September 20, 2016.
Reception
Welcome to the Universe: An Astrophysical Tour has been praised by literary critics. Kirkus Reviews described the book as "an accessible and comprehensive overview of our universe by three eminent astrophysicists" and "an entertaining introduction to astronomy." John Timpane of The Philadelphia Inquirer similarly called it "a well-illustrated tour that includes Pluto, questions of intelligent life, and whether the universe is infinite." Publishers Weekly wrote:
References
Books by Neil deGrasse Tyson
Astronomy books
Cosmology books
2016 non-fiction books
Popular physics books
Princeton University Press books | Welcome to the Universe | Astronomy | 181 |
5,474,056 | https://en.wikipedia.org/wiki/Distributed%20design%20patterns | In software engineering, a distributed design pattern is a design pattern focused on distributed computing problems.
Classification
Distributed design patterns can be divided into several groups:
Distributed communication patterns
Security and reliability patterns
Event driven patterns
Saga pattern
Examples
MapReduce
Bulk synchronous parallel
Remote Session
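To illustrate the first example above, the following is a minimal single-process sketch of the MapReduce pattern (the function names are illustrative choices; a real distributed implementation runs the map and reduce phases across many machines):

```python
from collections import defaultdict

# Toy word-count in the MapReduce style: map emits (key, value) pairs,
# shuffle groups values by key, and reduce aggregates each group.

def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

docs = ["to be or not to be", "be here now"]
counts = reduce_phase(shuffle(map_phase(docs)))
assert counts == {"to": 2, "be": 3, "or": 1, "not": 1, "here": 1, "now": 1}
```

In a distributed setting the same three phases run in parallel, with the shuffle moving intermediate pairs between worker nodes.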
See also
Software engineering
List of software engineering topics
References
Software design patterns
Distributed computing architecture | Distributed design patterns | Technology | 71 |
3,908,779 | https://en.wikipedia.org/wiki/SOX%20gene%20family | SOX genes (SRY-related HMG-box genes) encode a family of transcription factors that bind to the minor groove in DNA, and belong to a super-family of genes characterized by a homologous sequence called the HMG-box (for high mobility group). This HMG box is a DNA binding domain that is highly conserved throughout eukaryotic species. Homologues have been identified in insects, nematodes, amphibians, reptiles, birds and a range of mammals. However, HMG boxes can be very diverse in nature, with only a few amino acids being conserved between species.
Sox genes are defined as containing the HMG box of a gene involved in sex determination called SRY, which resides on the Y-chromosome. There are 20 SOX genes present in humans and mice, and 8 present in Drosophila. Almost all Sox genes show at least 50% amino acid similarity with the HMG box in Sry. The family is divided into subgroups according to homology within the HMG domain and other structural motifs, as well as according to functional assays.
The developmentally important Sox family has no singular function, and many members possess the ability to regulate several different aspects of development. While many Sox genes are involved in sex determination, some are also important in processes such as neuronal development. For example, Sox2 and Sox3 are involved in the transition of epithelial granule cells in the cerebellum to their migratory state. Sox2 is also a transcription factor in the maintenance of pluripotency in both early embryos and embryonic stem (ES) cells. Granule cells then differentiate to granule neurons, with Sox11 being involved in this process. It is thought that some Sox genes may be useful in the early diagnosis of childhood brain tumours due to this sequential expression in the cerebellum, making them a target for significant research.
Sox proteins bind to the sequence WWCAAW and similar sequences (W=A or T). They have weak binding specificity and unusually low affinity for DNA. Sox genes are related to the Tcf/Lef1 group of genes which also contain a sequence-specific high mobility group and have a similar sequence specificity (roughly TWWCAAAG).
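As an illustration of the consensus above, the motif WWCAAW (W = A or T) can be written as a regular expression; the DNA string below is invented for the example, not a real promoter sequence:

```python
import re

# Scan a DNA string for the Sox consensus WWCAAW, where W is A or T.
SOX_MOTIF = re.compile(r"[AT][AT]CAA[AT]")

sequence = "GGGAACAATCCTTCAATGG"  # made-up example sequence
hits = [(m.start(), m.group()) for m in SOX_MOTIF.finditer(sequence)]
assert hits == [(3, "AACAAT"), (11, "TTCAAT")]
```

An exact consensus match like this captures the weak binding specificity described above only crudely; real Sox binding sites are usually scored with position weight matrices rather than exact pattern matching.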
Groups
Sox genes are classified into groups. Sox genes from different groups share little similarity outside the DNA-binding domain. In mouse and human the members of the groups are:
SoxA: SRY
SoxB1: SOX1, SOX2, SOX3
SoxB2: SOX14, SOX21
SoxC: SOX4, SOX11, SOX12
SoxD: SOX5, SOX6, SOX13
SoxE: SOX8, SOX9, SOX10
SoxF: SOX7, SOX17, SOX18
SoxG: SOX15
SoxH: SOX30
See also
Body plan
Evolutionary developmental biology
FOX proteins
Hox gene
Pax genes
References
External links
NCBI CDD: cd01388 (SOX-TCF_HMG-box); human proteins
Gene families
Transcription factors | SOX gene family | Chemistry,Biology | 624 |
21,839,519 | https://en.wikipedia.org/wiki/Genius%20%28mathematics%20software%29 | Genius (also known as the Genius Math Tool) is a free open-source numerical computing environment and programming language, similar in some aspects to MATLAB, GNU Octave, Mathematica and Maple. Genius is aimed at mathematical experimentation rather than computationally intensive tasks. It is also very useful as just a calculator. The programming language is called GEL and aims to have a mathematically friendly syntax. The software comes with a command-line interface and a GUI, which uses the GTK+ libraries. The graphical version supports both 2D and 3D plotting. The graphical version includes a set of tutorials originally aimed at in class demonstrations.
History
Genius was the original calculator for the GNOME project started in 1997, but was split into a separate project soon after the 0.13 release of GNOME in 1998. Because of this ancestry, it was also known as Genius Calculator or GNOME Genius. There was an attempt to merge Genius and the Dr. Geo interactive geometry software, but this merge never materialized. Version 1.0 was released in 2007 almost 10 years after the initial release.
Example GEL source code
Here is a sample definition of a function calculating the factorial recursively:
function f(x) = (
  if x <= 1 then
    1
  else
    (f(x-1)*x)
)
GEL contains primitives for writing the product iteratively, and hence we can get the following iterative version:
function f(x) = prod k=1 to x do k
See also
Comparison of numerical analysis software
Notes and references
Array programming languages
Free educational software
Free mathematics software
Free software programmed in C
Numerical analysis software for Linux
Numerical analysis software for macOS
Numerical programming languages
Science software that uses GTK
Unix programming tools | Genius (mathematics software) | Mathematics | 349 |
66,616,096 | https://en.wikipedia.org/wiki/Peter%20Masefield | Sir Peter Masefield (19 March 1914 – 14 February 2006) was a leading figure in Britain's post war aviation industry, as Chief Executive of British European Airways in the 1950s, and chairman of the British Airports Authority in the 1960s.
Life
Peter Gordon Masefield was born in 1914 in Trentham, Staffordshire. Initially educated at Westminster School and Chillon College in Montreux, Switzerland, Masefield studied engineering at Jesus College, Cambridge. Following a childhood fascination with aircraft, Masefield gained his pilot's licence while in Cambridge which he maintained for the rest of his career.
Masefield initially worked as a junior draughtsman at Fairey Aviation from 1935 to 1937, before moving into journalism on the staff of The Aeroplane 1937–1943. He became aviation correspondent for The Sunday Times, and at the start of the Second World War was sent to France to cover the RAF Advanced Air Striking Force.
Turned down by the Royal Air Force as a pilot due to poor vision, Masefield flew with the United States Army Air Forces (USAAF) as an occasional co-pilot and air gunner while maintaining his journalism career. A daylight raid on Le Bourget in 1943 led to the nose of his Boeing B-17 Flying Fortress being blown off, with a consequential crash landing in East Anglia, luckily without injury.
Following a 1943 article by Masefield criticising the Ministry of Aircraft Production, Lord Beaverbrook removed him from active service, appointing him as his personal advisor and Secretary of the Brabazon Committee, which planned for post-war British civil aviation. Masefield also accompanied Beaverbrook to Washington DC for talks that led to the creation of the International Civil Aviation Organization. Masefield also played a major part in the 1946 negotiations of the Bermuda Agreement – which governed air services and routes between the United States and the UK.
British European Airways
In 1949 Lord Douglas (Marshal of the Royal Air Force), then-chairman of British European Airways (BEA), made Masefield chief executive, despite Masefield being just 35. Controlling a large number of staff on a small budget, tight cost control measures were combined with innovative methods to boost revenue and passenger loads, such as off-peak fares on late evening flights and high-frequency services on popular routes. This commercially aggressive approach resulted in monthly earnings of £1 million, and BEA was profitable by 1955. Other successes included ordering the Vickers Viscount turboprop airliner – which became the leading short-haul aircraft in Europe by the mid-1950s – and resisting the potential merger of British Overseas Airways Corporation (BOAC) with BEA.
Aircraft production
After seven years Masefield went to work for Bristol Aircraft, with the aim of Britain continuing as a major player in civil aviation. However, the introduction of the turboprop Bristol Britannia was late, and it could not compete with the Boeing 707 jetliner and the start of the Jet Age. In 1960 Masefield formed Beagle Aircraft Limited with the financial support of the Pressed Steel Company, which incorporated Auster Aircraft Company and F.G. Miles Limited by 1962.
British Airports Authority
In 1965 Masefield was made chairman of the British Airports Authority (BAA), which took over management of the major airports in the UK. Owing to the Jet Age, passenger numbers increased by 62% to 20 million a year, with profits of £38m. However, Masefield disagreed with the government regarding plans for a proposed airport at Maplin Sands, and some politicians called for him to be dismissed. A second five-year term running BAA was not forthcoming, and he retired from the chairmanship at the end of 1976, to be succeeded in the new year by Nigel Foulkes.
Following this, Masefield had a variety of roles, including deputy chairman at British Caledonian and president of the Royal Aeronautical Society.
London Transport
Masefield joined the board of London Transport in 1973. In 1980, Sir Horace Cutler, leader of the Greater London Council asked Masefield to become chairman of London Transport, a job he did for two years. During the period, investment on the London Underground was not substantial, which has been subsequently criticised. Masefield retired from the role in 1982, aged 67.
In the following years Masefield remained active as a chairperson, director and committee member for a wide variety of trusts, committees and museums – including Brooklands Museum (being the first chairman of its trustees), the British Association of Aviation Consultants and the Croydon Airport society. He was president of the British Aviation Preservation Council (now Aviation Heritage UK) and also became an author, writing a history of the R101 airship, as well as an autobiography.
Masefield was knighted in 1972. He died on 14 February 2006, aged 91.
References
British European Airways
Heathrow Airport Holdings
People associated with transport in London
British public transport executives
1914 births
2006 deaths
20th-century English businesspeople
Knights Bachelor
Alumni of Jesus College, Cambridge
Bristol Aeroplane Company
Royal Aeronautical Society
People from Trentham, Staffordshire | Peter Masefield | Engineering | 1,019 |
3,045,792 | https://en.wikipedia.org/wiki/Foresight%20%28futures%20studies%29 | In futurology, especially in Europe, the term foresight has become widely used to describe activities such as:
critical thinking concerning long-term developments,
debate,
wider participatory democracy, and
shaping the future, especially by influencing public policy.
In the last decade, scenario methods, for example, have become widely used in some European countries in policy-making. The FORSOCIETY network brings together national Foresight teams from most European countries, and the European Foresight Monitoring Project is collating material on Foresight activities around the world. Foresight methods are used more and more in regional planning and decision–making (“regional foresight”). Several non-European think tanks, like Strategic Foresight Group, also engage in foresight studies.
The foresight of futurology is also known as strategic foresight. This foresight, used by and describing professional futurists trained in master's programs, is the research-driven practice of exploring expected and alternative futures and guiding futures to inform strategy. Foresight includes understanding the relevant recent past; scanning to collect insight about the present; futuring to describe the understood future, including trend research; environment research to explore possible trend breaks from developments on the fringe and other divergencies that may lead to alternative futures; visioning to define preferred future states; designing strategies to craft this future; and adapting the present forces to implement this plan. There is notable but not complete overlap between foresight and strategic planning, change management, forecasting, and design thinking.
At the same time, the use of foresight for companies (“corporate foresight”) is becoming more professional and widespread. Corporate foresight is used to support strategic management, identify new business fields and increase the innovation capacity of a firm.
Foresight is not the same as futures research or strategic planning. It encompasses a range of approaches that combine the three components mentioned above, which may be recast as:
futures (forecasting, forward thinking, prospectives),
planning (strategic analysis, priority setting), and
networking (participatory, dialogic) tools and orientations.
Much futurology research has been rather ivory tower work, but Foresight programmes were designed to influence policy - often R&D policy. Much technology policy had been very elitist; Foresight attempts to go beyond the "usual suspects" and gather widely distributed intelligence. These three lines of work were already common in Francophone futures studies going by the name la prospective. In the 1990s, an explosion of systematic organisation of these methods began in large-scale technology foresight programmes in Europe and elsewhere.
Foresight thus draws on traditions of work in long-range planning and strategic planning, horizontal policymaking and democratic planning, and participatory futurology - but was also highly influenced by systemic approaches to innovation studies, science and technology policy, and analysis of "critical technologies".
Many of the methods that are commonly associated with Foresight - Delphi surveys, scenario workshops, etc. - derive from futurology. So does the fact that Foresight is concerned with:
The longer-term - futures that are usually at least 10 years away (though there are some exceptions to this, especially in its use in private business). Since Foresight is action-oriented (the planning link) it will rarely be oriented to perspectives beyond a few decades out (though where decisions like aircraft design, power station construction or other major infrastructural decisions are concerned, then the planning horizon may well be half a century).
Alternative futures: it is helpful to examine alternative paths of development, not just what is currently believed to be most likely or business as usual. Often Foresight will construct multiple scenarios. These may be an interim step on the way to creating what may be known as positive visions, success scenarios, aspirational futures. Sometimes alternative scenarios will be a major part of the output of Foresight work, with the decision about what future to build being left to other mechanisms.
See also
Accelerating change
Emerging technologies
Foresight Institute
Forecasting
Horizon scanning
Optimism bias
Reference class forecasting
Scenario planning
Strategic foresight
Strategic Foresight Group
Technology forecasting
Technology Scouting
References
Further reading
There are numerous journals that deal with research on foresight:
Technological Forecasting and Social Change
Futures
Futures & Foresight Science
European Journal of Futures Research
Foresight
Research focusing more on the combination of foresight and national R&D policy can be found in International Journal of Foresight and Innovation Policy
External links
The FORLEARN Online Guide developed by the Institute for Prospective Technological Studies of the European Commission
The Foresight Programme of UNIDO, the Investment and Technology Promotion Branch of the United Nations Industrial Development Organization.
Handbook of Knowledge Society Foresight published by the European Foundation, Dublin
Foresight (futures studies)
Transhumanism | Foresight (futures studies) | Technology,Engineering,Biology | 968 |
3,824,919 | https://en.wikipedia.org/wiki/Vapor%20recovery | Vapor (or vapour) recovery is the process of collecting the vapors of gasoline and other fuels, so that they do not escape into the atmosphere. This is often done (and sometimes required by law) at filling stations, to reduce noxious and potentially explosive fumes and pollution.
The negative pressure created by a vacuum pump typically located in the fuel dispenser, combined with the pressure in the car's fuel tank caused by the inflow, is usually used to pull in the vapors. They are drawn in through holes in the side of the nozzle and travel along a return path through another hose.
In 1975 the Vapor Recovery Gasoline Nozzle was an improvement on the idea of the original gasoline nozzle delivery system.
The improved idea was the brainchild of Mark Maine of San Diego, California, where Mark was a gas station attendant at a corporate-owned and operated Chevron U.S.A. service station. As the story goes, Mark watched the tanker truck driver deliver gasoline to the station using two hoses, one to deliver the gasoline from the tanker and the other to recover the escaping gasoline vapors back into the emptying tanker. Mark talked with the driver to understand why the two-hose system was used, and also why it was not implemented on the standard delivery nozzle, which allowed vapors to escape from the vehicle gas tank. After the tanker driver left, Mark drew an idea for a Vapor Recovery Gasoline Nozzle and submitted it to the Chevron station management as an employee suggestion.
Mark was included in the design and development of the original Vapor Recovery Gasoline Nozzle, which was manufactured and delivered by Huddleson. Mark was also promoted from the Chevron service station to an executive position based out of the corporate office in La Habra, California. Mark was appointed as the Vapor Recovery Gasoline Nozzle executive for the two-year implementation program; his duties were to train and oversee the installation and maintenance of 124 Chevron service stations within San Diego County.
Chevron USA lobbied California lawmakers, and the law was changed to require the new improved Vapor Recovery Gasoline Nozzle delivery system statewide; similar requirements eventually followed across the USA.
In Australia, vapor recovery has become mandatory in major urban areas. There are two categories - VR1 and VR2. VR1 must be installed at fuel stations that pump less than 500,000 litres annually, VR2 must be installed for larger amounts, or as designated by various EPA bodies.
Other industries
Vapor recovery is also used in the chemical process industry to remove and recover vapors from storage tanks. The vapors are usually either environmentally hazardous, or valuable. The process consists of a closed venting system from the storage tank ullage space to a vapor recovery unit which will recover the vapors for return to the process or destroy them, usually by oxidation.
Vapor recovery towers are also used in the oil and gas industry to provide flash gas recovery at near atmospheric pressure without the chance of oxygen ingress at the top of the storage tanks. The ability to create the vapor flash inside the tower often reduces storage tank emissions to less than six tons per year, exempting the tank battery from Quad O reporting requirements.
From an organizational standpoint, the identifiable benefits of vapor recovery are that it helps to make the industry more sustainable and creates a pipeline for pumping exhausts back into production.
See also
Automobile emissions control
Onboard refueling vapor recovery
References
External links
Quad O Regulations from EPA.gov (affects the Oil & Gas Industry)
EPA Gas STAR Program use of vapor recovery to capture methane in oil and gas industry
Use of Vapor Recovery Units and Towers from epa.gov
Gases
Gas technologies
Pollution control technologies | Vapor recovery | Physics,Chemistry,Engineering | 733 |
6,556,971 | https://en.wikipedia.org/wiki/Outline%20of%20calculus | Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. This subject constitutes a major part of contemporary mathematics education. Calculus has widespread applications in science, economics, and engineering and can solve many problems for which algebra alone is insufficient.
Branches of calculus
Differential calculus
Integral calculus
Multivariable calculus
Fractional calculus
Differential geometry
History of calculus
History of calculus
Important publications in calculus
General calculus concepts
Continuous function
Derivative
Fundamental theorem of calculus
Integral
Limit
Non-standard analysis
Partial derivative
Infinite series
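Two of the central concepts listed above, the derivative and the integral, and the fundamental theorem of calculus linking them, can be checked numerically; the step sizes below are arbitrary illustrative choices:

```python
# Numerical sketch: derivative as a difference quotient, integral as a
# Riemann sum, and a check of the fundamental theorem on f(x) = x**2.

def derivative(f, x, h=1e-6):
    # Central difference quotient; approaches f'(x) as h -> 0.
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100_000):
    # Midpoint Riemann sum over n subintervals.
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x ** 2
assert abs(derivative(f, 3.0) - 6.0) < 1e-4  # f'(3) = 6
# Integrating f' over [0, 3] recovers f(3) - f(0) = 9, as the
# fundamental theorem of calculus states.
assert abs(integral(lambda x: derivative(f, x), 0.0, 3.0) - 9.0) < 1e-3
```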
Calculus scholars
Sir Isaac Newton
Gottfried Leibniz
Calculus lists
List of calculus topics
See also
Glossary of calculus
Table of mathematical symbols
References
External links
Calculus Made Easy (1914) by Silvanus P. Thompson Full text in PDF
Calculus.org: The Calculus page at University of California, Davis – contains resources and links to other sites
COW: Calculus on the Web at Temple University - contains resources ranging from pre-calculus and associated algebra
Online Integrator (WebMathematica) from Wolfram Research
The Role of Calculus in College Mathematics from ERICDigests.org
OpenCourseWare Calculus from the Massachusetts Institute of Technology
Infinitesimal Calculus – an article on its historical development, in Encyclopaedia of Mathematics, Michiel Hazewinkel ed.
Calculus
Calculus
Calculus | Outline of calculus | Mathematics | 260 |
40,080,096 | https://en.wikipedia.org/wiki/Bedridden | Being bedridden is a form of immobility that can present as the inability to move or even sit upright. It differs from bed-rest, a form of non-invasive treatment that is usually part of recovery or the limitation of activities. Some of the more serious consequences of being bedridden is the high risk of developing thrombosis and muscle wasting (atrophy).
Etymology
The word "bedridden" is derived from the Middle English term bedrid, the past tense form of riding a bed, which dates back to the 14th century.
Bed rest
This is a voluntary medical treatment still used today to help cure illness. Current views regarding this treatment are that there are no benefits for most conditions studied. Though bedrest may still be prescribed for pregnant women, it is now considered dangerous. Those who are bedridden can develop complications related to feeding.
Complications
Being bedridden leads to many complications such as loss of muscle strength and endurance. Contractures, osteoporosis from disuse and the degeneration of joints can occur. Being confined to bed can add to the likelihood of developing an increased heart rate, decreased cardiac output, hypertension, and thromboembolism. People with disabilities who are bedridden are at risk for developing pressure sores. Those who are bedridden are at risk in a house fire due to their lack of mobility. Showering can become impossible. Bedsores develop if a person spends most or all of the day in bed without changing position. Being confined to bed may result in a person remaining passive and withdrawn. The ability to transfer to a chair and the negative attitudes of caregivers are associated with continued confinement to bed and reduction of such requests. Those who are confined to bed have risks related to falls. Falling from a bed can result in injury.
Prevention
One recommendation for preventing the complications of being bedridden is to eat a healthy, well-balanced diet that contains enough calories and protein needed for optimum health. If someone is confined to a bed, changing position at least every two hours can help prevent complications in addition to changing sheeting and bedclothes immediately if they are soiled, and using items that can help reduce pressure, such as pillows or foam padding.
Studies
One Indian study of care given to bedridden individuals at home found that family members made up 82% of caregivers. A high rate of medical complications was reported, including pressure ulcers and urinary tract infections.
References
Living arrangements
Pejorative terms for people with disabilities
Accessibility
Culture of beds | Bedridden | Engineering | 525 |
4,340,898 | https://en.wikipedia.org/wiki/Shannon%E2%80%93Weaver%20model | The Shannon–Weaver model is one of the first models of communication. Initially published in the 1948 paper "A Mathematical Theory of Communication", it explains communication in terms of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source produces the original message. The transmitter translates the message into a signal, which is sent using a channel. The receiver translates the signal back into the original message and makes it available to the destination. For a landline phone call, the person calling is the source. They use the telephone as a transmitter, which produces an electric signal that is sent through the wire as a channel. The person receiving the call is the destination and their telephone is the receiver.
Shannon and Weaver distinguish three types of problems of communication: technical, semantic, and effectiveness problems. They focus on the technical level, which concerns the problem of how to use a signal to accurately reproduce a message from one location to another location. The difficulty in this regard is that noise may distort the signal. They discuss redundancy as a solution to this problem: if the original message is redundant then the distortions can be detected, which makes it possible to reconstruct the source's original intention.
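The role of redundancy described above can be illustrated with a three-fold repetition code, one of the simplest redundancy schemes (the scheme and the bit positions flipped are illustrative choices, not part of Shannon and Weaver's text):

```python
# A 3-fold repetition code: redundancy lets the receiver detect and
# correct isolated bit flips introduced by a noisy channel.

def encode(bits):
    # Transmitter adds redundancy by repeating every bit three times.
    return [b for b in bits for _ in range(3)]

def noisy_channel(signal, flip_positions):
    # Channel noise flips the bits at the given positions.
    return [b ^ 1 if i in flip_positions else b
            for i, b in enumerate(signal)]

def decode(signal):
    # Receiver takes a majority vote over each group of three repeats.
    return [1 if sum(signal[i:i + 3]) >= 2 else 0
            for i in range(0, len(signal), 3)]

message = [1, 0, 1, 1]
received = noisy_channel(encode(message), flip_positions={2, 7})
assert decode(received) == message  # one flip per group is corrected
```

With two flips in the same group of three the majority vote fails, which mirrors the model's point that redundancy makes distortion detectable only up to a degree.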
The Shannon–Weaver model of communication has been influential in various fields, including communication theory and information theory. Many later theorists have built their own models on its insights. However, it is often criticized based on the claim that it oversimplifies communication. One common objection is that communication should not be understood as a one-way process but as a dynamic interaction of messages going back and forth between both participants. Another criticism rejects the idea that the message exists prior to the communication and argues instead that the encoding is itself a creative process that creates the content.
Overview and basic components
The Shannon–Weaver model is one of the earliest models of communication. It was initially published by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication". The model was further developed together with Warren Weaver in their co-authored 1949 book The Mathematical Theory of Communication. It aims to provide a formal representation of the basic elements and relations involved in the process of communication.
The model consists of five basic components: a source, a transmitter, a channel, a receiver, and a destination. The source of information is usually a person and decides which message to send. The message can take various forms, such as a sequence of letters, sounds, or images. The transmitter is responsible for translating the message into a signal. To send the signal, a channel is required. Channels are ways of transmitting signals, like light, sound waves, radio waves, and electrical wires. The receiver performs the opposite function of the transmitter: it translates the signal back into a message and makes it available to the destination. The destination is the person for whom the message was intended.
Shannon and Weaver focus on telephonic conversation as the paradigmatic case of how messages are produced and transmitted through a channel. But their model is intended as a general model that can be applied to any form of communication. For a regular face-to-face conversation, the person talking is the source, the mouth is the transmitter, the air is the channel transmitting the sound waves, the listener is the destination, and the ear is the receiver. In the case of a landline phone call, the source is the person calling, the transmitter is their telephone, the channel is the wire, the receiver is another telephone and the destination is the person using the second telephone. To apply this model accurately to real-life cases, some of the components may have to be repeated. For the telephone call, for example, the mouth acts as a first transmitter before the telephone itself acts as a second transmitter.
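The chain of components described above can be sketched as a simple pipeline. The following minimal Python analogy (the function names are illustrative, not part of Shannon and Weaver's work) casts the transmitter as an encoder of the message into a signal, the channel as the carrier of that signal, and the receiver as a decoder for the destination:

```python
def transmitter(message: str) -> bytes:
    # Translate the message into a signal (here: UTF-8 bytes).
    return message.encode("utf-8")

def channel(signal: bytes) -> bytes:
    # An idealized noiseless channel carries the signal unchanged.
    return signal

def receiver(signal: bytes) -> str:
    # Perform the opposite function of the transmitter:
    # translate the signal back into a message.
    return signal.decode("utf-8")

# The source produces a message; the destination receives it.
message = "hello"
received = receiver(channel(transmitter(message)))
assert received == message
```

In this idealization the destination always recovers the source's message exactly; the interesting cases arise when the channel introduces noise, discussed below under the technical problem.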
Problems of communication
Shannon and Weaver identify and address problems in the study of communication at three basic levels: technical, semantic, and effectiveness problems (referred to as levels A, B, and C). Shannon and Weaver hold that models of communication should provide good responses to all three problems, ideally by showing how to make communication more accurate and efficient. The prime focus of their model is the technical level, which concerns the issue of how to accurately reproduce a message from one location to another location. For this problem, it is not relevant what meaning the message carries. By contrast, it is only relevant that the message can be distinguished from different possible messages that could have been sent instead of it.
Semantic problems go beyond the symbols themselves and ask how they convey meaning. Shannon and Weaver assumed that the meaning is already contained in the message, but many subsequent communication theorists have further problematized this point by including the influence of cultural factors and the context in their models. The effectiveness problem is based on the idea that the person sending the message has some goal in mind concerning how the person receiving the message is going to react. In this regard, effectiveness means that the reaction matches the speaker's goal. The problem of effectiveness concerns the question of how to achieve this. Many critics have rejected this aspect of Shannon and Weaver's theory since it seems to equate communication with manipulation or propaganda.
Noise and redundancy
To solve the technical problem at level A, it is necessary for the receiver to reconstruct the original message from the signal. However, various forms of noise can interfere and distort it. Noise is not intended by the source and makes it harder for the receiver to reconstruct the source's intention found in the original message. Crackling sounds during a telephone call or snow on a television screen are examples of noise. One way to solve this problem is to make the information in the message partially redundant. This way, distortions can often be identified and the original meaning can be reconstructed. A very basic form of redundancy is to repeat the same message several times. But redundancy can take various other forms as well. For example, the English language is redundant in the sense that many possible combinations of letters are meaningless. So the term "comming" does not have a distinct meaning. For this reason, it can be identified as a misspelling of the term "coming", thus revealing the source's original intention. Redundancy makes it easier to detect distortions but its drawback is that messages carry less information.
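The repetition strategy mentioned above can be made concrete in a short sketch (the helper names are hypothetical): each bit is transmitted three times, and the receiver takes a majority vote over each triple. Any single distortion per triple can then be corrected, at the cost of the message carrying less information per transmitted symbol:

```python
from collections import Counter

def encode(bits):
    # Redundancy: repeat every bit of the message three times.
    return [b for b in bits for _ in range(3)]

def decode(signal):
    # Majority vote over each group of three received bits.
    out = []
    for i in range(0, len(signal), 3):
        triple = signal[i:i + 3]
        out.append(Counter(triple).most_common(1)[0][0])
    return out

message = [1, 0, 1, 1]
signal = encode(message)
signal[4] = 1 - signal[4]         # noise distorts one transmitted bit
assert decode(signal) == message  # redundancy lets the receiver recover it
```

The trade-off Shannon and Weaver describe is visible directly: the signal is three times as long as the message, which is the price paid for detecting and correcting the distortion.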
Influence and criticism
The Shannon–Weaver model of communication has been influential, inspiring subsequent work in the field of communication studies. Erik Hollnagel and David D. Woods even characterize it as the "mother of all models." It has been widely adopted in various other fields, including information theory, organizational analysis, and psychology. Many later theorists expanded this model by including additional elements in order to take into account other aspects of communication. For example, Wilbur Schramm includes a feedback loop to understand communication as an interactive process and George Gerbner emphasizes the relation between communication and the reality to which the communication refers. Some of these models, like Gerbner's, are equally universal in that they apply to any form of communication. Others apply to more specific areas. For example, Lasswell's model and Westley and MacLean's model are specifically formulated for mass media. Shannon's concepts were also popularized in John Robinson Pierce's Symbols, Signals, and Noise, which introduces the topic to non-specialists.
Many criticisms of the Shannon–Weaver model focus on its simplicity by pointing out that it leaves out vital aspects of communication. In this regard, it has been characterized as "inappropriate for analyzing social processes" and as a "misleading misrepresentation of the nature of human communication". A common objection is based on the fact that it is a linear transmission model: it conceptualizes communication as a one-way process going from a source to a destination. Against this approach, it is argued that communication is usually more interactive with messages and feedback going back and forth between the participants. This approach is implemented by non-linear transmission models, also termed interaction models. They include Wilbur Schramm's model, Frank Dance's helical-spiral model, a circular model developed by Lee Thayer, and the "sawtooth" model due to Paul Watzlawick, Janet Beavin, and Don Jackson. These approaches emphasize the dynamic nature of communication by showing how the process evolves as a multi-directional exchange of messages.
Another criticism focuses on the fact that Shannon and Weaver understand the message as a form of preexisting information. I. A. Richards criticizes this approach for treating the message as a preestablished entity that is merely packaged by the transmitter and later unpackaged by the receiver. This outlook is characteristic of all transmission models. They contrast with constitutive models, which see meanings as "reflexively constructed, maintained, or negotiated in the act of communicating". Richards argues that the message does not exist before it is articulated. This means that the encoding is itself a creative process that creates the content. Before it, there is a need to articulate oneself but no precise pre-existing content. The communicative process may not just affect the meaning of the message but also the social identities of the communicators, which are established and modified in the ongoing communicative process.
References
Information theory
Claude Shannon
Communication
Communication studies | Shannon–Weaver model | Mathematics,Technology,Engineering | 1,888 |
57,915,114 | https://en.wikipedia.org/wiki/Roland%20Cloud | Roland Cloud is a subscription-based collection of VST instruments and 'RVR' sample libraries launched in early 2018 by Roland. Instrument downloads and installation are handled by Roland's Cloud Manager software.
The software instruments available via Roland Cloud also include features that were not available in the original hardware instruments on which they were based. They are produced by Roland along with Virtual Sonics, an audio company founded by video game composer Jeremy Soule and his brother Julian.
Roland Cloud Manager
Roland Cloud Manager manages the user's instrument library and sound sources, and auto-updates by default.
Concerto
Concerto is a plugin which allows the usage of Roland's RVR format instruments. These instruments include the FLAVR series as well as the Tera Series and others.
Platforms
Roland Cloud Manager is available for both PC and Mac, and requires a 64-bit DAW.
Subscription model and pricing
Based on a subscription model, as of January 2020, users pay $19.95 USD per month to access the catalogue of instruments (£18.5 GBP, €21 EUR or ¥2190 JPY), with discounts for committing to 12, 24 or 60 months at a time (12%, 27% and 33% off respectively). Alternatively, users can purchase an instrument outright for a one-time fee.
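The commitment discounts above translate into effective monthly rates as follows (a quick arithmetic sketch using the January 2020 figures quoted above):

```python
base = 19.95  # USD per month, month-to-month

# Commitment length (months) -> discount off the monthly rate
discounts = {12: 0.12, 24: 0.27, 60: 0.33}

for months, d in discounts.items():
    effective = base * (1 - d)
    print(f"{months}-month commitment: ${effective:.2f}/month")
```

For example, the 60-month commitment works out to roughly $13.37 per month rather than $19.95.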
According to the FAQs, an internet connection is required at least once per month in order to authenticate plugins in Roland Cloud.
Instruments
Aira
System-1
System-8
Flavr
Blip Blop
Electrode
Funky Fever
Grit
Resin
Sector-7 (formerly Midnight, with new effects added)
Sugar
Trapped
Legendary
D-50
Jupiter-8
Juno-106
JV-1080
JX-3P
Promars
SH-2
SH-101
Sound Canvas VA
SRX Keyboards
SRX Orchestra
SRX World
SRX Dance Trax
SRX Studio
System 100
TB-303
TR-606
TR-808
TR-909
XV-5080
Tera
Tera Guitar
Tera Piano
Anthology
1985 (vol 1 and 2) - Ultra deep sampled instrument
1986 - Ultra deep sampled instrument
1987 - Ultra deep sampled instrument
1990 - Ultra deep sampled instrument
1993 (vol 1, 2 and 3) - Ultra deep sampled instrument
Anthology EP14 - Ultra deep sampled instrument Electric piano
Anthology Orchestra (vol 1, 2, 3 and 4) - Ultra deep sampled instrument Orchestra
Drums
Acoustic One
TR-606
TR-808
TR-909
Patches
Patches are available for a number of the Roland Cloud instruments. These are additional configurations of the instruments.
D-50 "Beyond Fantasia" Bank One
D-50 "Beyond Fantasia" Bank Two
Juno-106 Brothertiger
Juno-106 Dark Techno
Juno-106 New Tech
Juno-106 Synth-Pop by Espen Kraft
Juno-106 Synthwave
Juno-106 Techno
Jupiter-8 Brothertiger
Jupiter-8 Epic Jupiter
Jupiter-8 Synthwave
Jupiter-8 Techno
JV-1080 Cinematic Cyberpunk
JV-1080 Don Solaris Signature Collection
JV-1080 Widescreen Ambient
JX-3P Synthwave
Promars Curiosity Collection
SH-2 Brothertiger
SH-2 Space Aged
SH-101 Dark Dream Techno
SH-101 Techno
System-8 Frontiers
System-8 Modern System
System-8 Synthwave
System-100 Klang
TB-303 Rob Acid TB-303 Collection
TB-303 Techno
TR-808 Dark Techno
TR-808 Dynamix II
TR-808 Techno
TR-909 Dark Techno
TR-909 Techno
XV-5080 Sky House
References
Software synthesizers
Audio software
Music technology | Roland Cloud | Engineering | 738 |
61,019,998 | https://en.wikipedia.org/wiki/Aquatic%20Microbial%20Ecology | Aquatic Microbial Ecology is a monthly peer-reviewed scientific journal covering all aspects of aquatic microbial dynamics, in particular viruses, prokaryotes, and eukaryotes in marine, limnetic, and brackish habitats. The journal was originally established as Marine Microbial Food Webs by P. Bougis and F. Rassoulzadegan in 1985, and acquired its current name in 1995. The journal is currently published by Inter Research.
Abstracting and indexing
The journal is indexed and abstracted in:
References
External links
Microbiology journals
Ecology journals
Academic journals established in 1985
English-language journals
Monthly journals | Aquatic Microbial Ecology | Environmental_science | 130 |
52,346,275 | https://en.wikipedia.org/wiki/C15H15NO2 | {{DISPLAYTITLE:C15H15NO2}}
The molecular formula C15H15NO2 (molar mass: 241.285 g/mol, exact mass: 241.1103 u) may refer to:
Diphenylalanine
Mefenamic acid
Nafoxadol
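The molar mass quoted above can be reproduced directly from conventional atomic weights (the values below are the commonly tabulated IUPAC conventional weights); a quick sketch:

```python
# Conventional atomic weights (g/mol)
weights = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}

composition = {"C": 15, "H": 15, "N": 1, "O": 2}  # C15H15NO2

molar_mass = sum(weights[el] * n for el, n in composition.items())
print(round(molar_mass, 3))  # → 241.285
```

This matches the 241.285 g/mol figure given for the formula.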
Molecular formulas | C15H15NO2 | Physics,Chemistry | 68 |
8,137,454 | https://en.wikipedia.org/wiki/Weizmann%20Women%20%26%20Science%20Award | The Weizmann Women & Science Award is a biennial award established in 1994 to honor an outstanding woman scientist in the United States who has made significant contributions to the scientific community. The objective of the award, which includes a $25,000 research grant to the recipient, is to promote women in science, and to provide a strong role model to motivate and encourage the next generation of young women scientists.
The award was originally given by the American Committee for the Weizmann Institute of Science (ACWIS); it is now awarded by the Weizmann Institute itself, and the award ceremony takes place at the Institute, located in the city of Rehovoth, Israel.
The Weizmann Institute is a center of basic interdisciplinary scientific research and graduate study, addressing crucial problems in technology, medicine and health, energy, agriculture and the environment.
Honorees
1994 Dr. Joan A. Steitz, a Henry Ford II Professor of Biophysics and Biochemistry at Yale University and an Investigator of the Howard Hughes Medical Institute.
1996 Dr. Vera Rubin, Observational Astronomer, Department of Terrestrial Magnetism, Carnegie Institution
1998 Dr. Jacqueline Barton, Arthur and Marian Hanisch Professor of Chemistry at the California Institute of Technology.
2000 Dr. Carla J. Shatz, Nathan Marsh Pusey Professor and Chair, Department of Neurobiology, Harvard Medical School
2000 Dr. Mildred Dresselhaus, Institute Professor of Electrical Engineering and Physics, Massachusetts Institute of Technology. Received the Millennial Lifetime Achievement Award
2002 Dr. Susan Solomon, Senior Scientist, Aeronomy Laboratory, National Oceanic and Atmospheric Administration
2004 Dr. May Berenbaum, Swanlund Professor; Head, Department of Entomology, University of Illinois at Urbana-Champaign
2006 Dr. Mary-Claire King, American Cancer Society Research Professor of Genome Sciences and Medicine, University of Washington, Seattle
2008 Dr. Elizabeth Blackburn, researcher at the University of California, San Francisco
2011 Dr. Catherine Bréchignac, President of the International Council for Science and former president of the CNRS ("National Centre for Scientific Research")
2013 Prof. Susan Gasser, Friedrich Miescher Institute for Biomedical Research, Switzerland
2015 Prof. Barbara Liskov computer scientist and Institute professor at the Massachusetts Institute of Technology (MIT)
2017 Prof. Ursula Keller and Prof. Naomi Halas
2019 Mina Bissell and Nieng Yan
See also
List of prizes, medals, and awards for women in science
References
External links
American Committee for the Weizmann Institute of Science
American science and technology awards
Science awards honoring women
Awards established in 1994
1994 establishments in the United States | Weizmann Women & Science Award | Technology | 525 |
34,727,404 | https://en.wikipedia.org/wiki/Group%20III%20pyridoxal-dependent%20decarboxylases | In molecular biology, group III pyridoxal-dependent decarboxylases are a family of bacterial enzymes comprising ornithine decarboxylase , lysine decarboxylase and arginine decarboxylase .
Pyridoxal-5'-phosphate-dependent amino acid decarboxylases can be divided into four groups based on amino acid sequence. Group III comprises prokaryotic ornithine and lysine decarboxylase and the prokaryotic biodegradative type of arginine decarboxylase.
Structure
These enzymes consist of several conserved domains.
The N-terminal domain has a flavodoxin-like fold, and is termed the "wing" domain because of its position in the overall 3D structure. Ornithine decarboxylase from Lactobacillus 30a (L30a OrnDC) is representative of the large, pyridoxal-5'-phosphate-dependent decarboxylases that act on lysine, arginine or ornithine. The crystal structure of the L30a OrnDC has been solved to 3.0 Å resolution. Six dimers related by C6 symmetry compose the enzymatically active dodecamer (approximately 10^6 Da). Each monomer of L30a OrnDC can be described in terms of five sequential folding domains. The amino-terminal domain, residues 1 to 107, consists of a five-stranded beta-sheet termed the "wing" domain. Two wing domains of each dimer project inward towards the centre of the dodecamer and contribute to dodecamer stabilisation.
The major domain contains a conserved lysine residue, which is the site of attachment of the pyridoxal-phosphate group.
See also
Group I pyridoxal-dependent decarboxylases
Group II pyridoxal-dependent decarboxylases
Group IV pyridoxal-dependent decarboxylases
References
Protein domains | Group III pyridoxal-dependent decarboxylases | Biology | 429 |
428,795 | https://en.wikipedia.org/wiki/Check%20valve | A check valve, non-return valve, reflux valve, retention valve, foot valve, or one-way valve is a valve that normally allows fluid (liquid or gas) to flow through it in only one direction.
Check valves are two-port valves, meaning they have two openings in the body, one for fluid to enter and the other for fluid to leave. There are various types of check valves used in a wide variety of applications. Check valves are often part of common household items. Although they are available in a wide range of sizes and costs, check valves generally are very small, simple, and inexpensive. Check valves work automatically and most are not controlled by a person or any external control; accordingly, most do not have any valve handle or stem. The bodies (external shells) of most check valves are made of plastic or metal.
An important concept in check valves is the cracking pressure: the minimum pressure differential between inlet and outlet at which the valve will operate. Typically the check valve is designed for, and can therefore be specified for, a specific cracking pressure.
Technical terminology
Cracking pressure: the minimum pressure differential needed between the inlet and outlet of the valve at which the first indication of flow occurs (steady stream of bubbles). Cracking pressure is also known as unseating head (pressure) or opening pressure.
Reseal pressure: the pressure differential between the inlet and outlet of the valve during the closing process of the check valve, at which there is no visible leak rate. Reseal pressure is also known as sealing pressure, seating head (pressure) or closing pressure.
Back pressure: a pressure higher at the outlet of a fitting than that at the inlet or a point upstream.
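The cracking-pressure behavior can be captured in a tiny sketch (a hypothetical helper, not drawn from any valve standard): the valve opens only when the upstream-minus-downstream differential reaches the cracking pressure, and reverse flow is always blocked.

```python
def valve_open(p_in: float, p_out: float, cracking: float) -> bool:
    """Return True if forward flow occurs through an idealized check valve.

    p_in, p_out: upstream and downstream pressures (same units).
    cracking: minimum differential (p_in - p_out) at which flow starts.
    """
    return (p_in - p_out) >= cracking

assert valve_open(10.0, 5.0, 2.0)      # differential 5 >= cracking 2: flows
assert not valve_open(6.0, 5.0, 2.0)   # differential 1 < cracking 2: shut
assert not valve_open(5.0, 10.0, 2.0)  # reverse differential: always shut
```

Real valves also exhibit a distinct reseal pressure, so opening and closing do not occur at exactly the same differential; the sketch ignores that hysteresis.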
Types
Ball check valve
A ball check valve is a check valve in which the closing member, the movable part to block the flow, is a ball. In some ball check valves, the ball is spring-loaded to help keep it shut. For those designs without a spring, reverse flow is required to move the ball toward the seat and create a seal. The interior surface of the main seats of ball check valves are more or less conically tapered to guide the ball into the seat and form a positive seal when stopping reverse flow.
Ball check valves are often very small, simple, and cheap. They are commonly used in liquid or gel minipump dispenser spigots, spray devices, some rubber bulbs for pumping air, etc., manual air pumps and some other pumps, and refillable dispensing syringes. Although the balls are most often made of metal, they can be made of other materials; in some specialized cases out of highly durable or inert materials, such as sapphire. High-performance liquid chromatography pumps and similar high pressure applications commonly use small inlet and outlet ball check valves with balls of (artificial) ruby and seats made of sapphire or both ball and seat of ruby, for both hardness and chemical resistance. After prolonged use, such check valves can eventually wear out or the seat can develop a crack, requiring replacement. Therefore, such valves are made to be replaceable, sometimes placed in a small plastic body tightly fitted inside a metal fitting which can withstand high pressure and which is screwed into the pump head.
There are similar check valves where the disc is not a ball, but some other shape, such as a poppet energized by a spring. Ball check valves should not be confused with ball valves, which are a different type of valve in which a ball rotating on a pin acts as a controllable rotor to stop or direct flow.
Diaphragm check valve
A diaphragm check valve uses a flexing rubber diaphragm positioned to create a normally-closed valve. Pressure on the upstream side must be greater than the pressure on the downstream side by a certain amount, known as the pressure differential, for the check valve to open allowing flow. Once positive pressure stops, the diaphragm automatically flexes back to its original closed position. This type is used in respirators (face masks) with an exhalation valve.
Swing check valve
A swing check valve (or tilting disc check valve) is a check valve in which the disc, the movable part to block the flow, swings on a hinge or trunnion, either onto the seat to block reverse flow or off the seat to allow forward flow. The seat opening cross-section may be perpendicular to the centerline between the two ports or at an angle. Although swing check valves can come in various sizes, large check valves are often swing check valves. A common issue caused by swing check valves is known as water hammer. This can occur when the swing check closes and the flow abruptly stops, causing a pressure surge that sends high-velocity shock waves through the piping and valves, placing large stress on the metals and causing vibrations in the system. Undetected, water hammer can rupture pumps, valves, and pipes within the system.
The flapper valve in a flush-toilet mechanism is an example of this type of valve. Tank pressure holding it closed is overcome by manual lift of the flapper. It then remains open until the tank drains and the flapper falls due to gravity. Another variation of this mechanism is the clapper valve, used in applications such as firefighting and fire life safety systems. A hinged gate only remains open in the inflowing direction. The clapper valve often also has a spring that keeps the gate shut when there is no forward pressure. Another example is the backwater valve (for sanitary drainage systems) that protects against flooding caused by return flow of sewage waters. Such risk occurs most often in sanitary drainage systems connected to combined sewerage systems and in rainwater drainage systems. It may be caused by intense rainfall, thaw or flood.
Butterfly check valve
A butterfly check valve is a variant on the swing check valve, having two hinged flaps which act as check valves to prevent backwards flow. It should not be confused with the similarly named butterfly valve, which is used for flow regulation and does not have a one-way flow function.
Stop-check valve
A stop-check valve is a check valve with override control to stop flow regardless of flow direction or pressure. In addition to closing in response to backflow or insufficient forward pressure (normal check-valve behavior), it can also be deliberately shut by an external mechanism, thereby preventing any flow regardless of forward pressure.
Lift-check valve
A lift-check valve is a check valve in which the disc, sometimes called a lift, can be lifted up off its seat by higher pressure of inlet or upstream fluid to allow flow to the outlet or downstream side. A guide keeps motion of the disc on a vertical line, so the valve can later reseat properly. When the pressure is no longer higher, gravity or higher downstream pressure will cause the disc to lower onto its seat, shutting the valve to stop reverse flow.
In-line check valve
An in-line check valve is a check valve similar to the lift check valve. However, this valve generally has a spring that will 'lift' when there is pressure on the upstream side of the valve. The pressure needed on the upstream side of the valve to overcome the spring tension is called the 'cracking pressure'. When the pressure going through the valve goes below the cracking pressure, the spring will close the valve to prevent back-flow in the process.
Duckbill valve
A duckbill valve is a check valve in which flow proceeds through a soft tube that protrudes into the downstream side. Back-pressure collapses this tube, cutting off flow.
Pneumatic non-return valve
Pneumatic non-return valves provide the ability to lock the valve, hence preventing flow in either direction. This may be used where, for example, a site with hazardous materials must be protected from flood water but the materials must also be prevented from leaking, for example during transfer between vessels.
Reed valve
A reed valve is a check valve formed by a flexible flat sheet that seals an orifice plate. The cracking pressure is very low, the moving part has low mass allowing rapid operation, the flow resistance is moderate, and the seal improves with back pressure. These are commonly found in two stroke internal combustion engines as the air intake valve for the crankcase volume and in air compressors as both intake and exhaust valves for the cylinder(s). Although reed valves are typically used for gasses rather than liquids, the Autotrol brand of water treatment control valves are designed as a set of reed valves taking advantage of the sealing characteristic, selectively forcing open some of the reeds to establish a flow path.
Flow check
A flow check is a check valve used in hydronic heating and cooling systems to prevent unwanted passive gravity flow. It is a simple heavy metal stopper, lifted by flow and closed by gravity, designed for low flow resistance, many decades of continuous service, and self-cleaning of the fine particulates commonly found in hydronic systems from the sealing surfaces. To accomplish self-cleaning, the stopper is typically not conical; a common design is a circular recess in a weight that fits over a matching narrow ridge at the rim of an orifice. The application inherently tolerates a modest reverse leakage rate, so a perfect seal is not required. A flow check has an operating screw that allows the valve to be held open (the opposite of the control on a stop-check valve), as an aid for filling the system and for purging air from the system.
Multiple valves
Multiple check valves can be connected in series. For example, a double check valve is often used as a backflow prevention device to keep potentially contaminated water from siphoning back into municipal water supply lines. There are also double ball check valves in which there are two ball/seat combinations sequentially in the same body to ensure positive leak-tight shutoff when blocking reverse flow; and piston check valves, wafer check valves, and ball-and-cone check valves.
Applications
Pumps
Check valves are often used with some types of pumps. Piston-driven and diaphragm pumps such as metering pumps and pumps for chromatography commonly use inlet and outlet ball check valves. These valves often look like small cylinders attached to the pump head on the inlet and outlet lines. Many similar pump-like mechanisms for moving volumes of fluids around use check valves such as ball check valves. The feed pumps or injectors which supply water to steam boilers are fitted with check valves to prevent back-flow.
Check valves are also used in the pumps that supply water to water slides. The water to the slide flows through a pipe which doubles as the tower holding the steps to the slide. When the facility with the slide closes for the night, the check valve stops the flow of water through the pipe; when the facility reopens for the next day, the valve is opened and the flow restarts, making the slide ready for use again.
Industrial processes
Check valves are used in many fluid systems such as those in chemical and power plants, and in many other industrial processes.
Typical applications in the nuclear industry are feed water control systems, dump lines, make-up water, miscellaneous process systems, N2 systems, and monitoring and sampling systems. In aircraft and aerospace, check valves are used where high vibration, large temperature extremes and corrosive fluids are present. For example, spacecraft and launch vehicle propulsion propellant control for reaction control systems (RCS) and Attitude Control Systems (ACS) and aircraft hydraulic systems.
Check valves are also often used when multiple gases are mixed into one gas stream. A check valve is installed on each of the individual gas streams to prevent mixing of the gases in the original source. For example, if a fuel and an oxidizer are to be mixed, then check valves will normally be used on both the fuel and oxidizer sources to ensure that the original gas cylinders remain pure and therefore nonflammable.
In 2010, NASA's Jet Propulsion Laboratory slightly modified a simple check valve design with the intention to store liquid samples indicative to life on Mars in separate reservoirs of the device without fear of cross contamination.
Domestic use
When a sanitary potable water supply is plumbed to an unsanitary system, for example lawn sprinklers, a dish washer or a washing machine, a check valve called a backflow preventer is used to prevent contaminated water from re-entering the domestic water supply.
Some types of irrigation sprinklers and drip irrigation emitters have small check valves built into them to keep the lines from draining when the system is shut off.
Check valves used in domestic heating systems to prevent vertical convection, especially in combination with solar thermal installations, also are called gravity brakes.
Rainwater harvesting systems that are plumbed into the main water supply of a utility provider may be required to have one or more check valves fitted to prevent contamination of the primary supply by rainwater.
Hydraulic jacks use ball check valves to build pressure on the lifting side of the jack.
Check valves are commonly used in inflatables, such as toys, mattresses and boats. This allows the object to be inflated without continuous or uninterrupted air pressure.
History
Frank P. Cotter developed a "simple self sealing check valve, adapted to be connected in the pipe connections without requiring special fittings and which may be readily opened for inspection or repair" 1907 (U.S. patent 865,631).
Nikola Tesla invented a deceptively simple one-way valve for fluids in 1916, called a Tesla valve. It was patented in 1920 (U.S. patent 1,329,559).
Images
See also
Diode, the electrical analog of a check valve
Top feed
Vacuum breaker
Reed valve
Ball valve
Butterfly valve
Control valve
Gate valve
Globe valve
Diaphragm valve
Needle valve
Tesla valve
References
External links
Working Principle of Spring Check Valves
Check Valves Tutorial The operation, benefits, applications and selection of different designs, including lift, disc, swing and wafer check valves are explained in this tutorial
A picture of a microscopic checkvalve, a scaled down version of Tesla's original fluidic diode.
Tesla's original fluidic diode (a test of a design showing very poor performance – n.b. the test protocol did not match the conditions described in the patent)
Check Valve Installation and Benefits
Plumbing valves
Steam boiler components
Firefighting equipment
Valves | Check valve | Physics,Chemistry | 2,936 |
64,322,655 | https://en.wikipedia.org/wiki/SILAM | SILAM (System for Integrated Modeling of Atmospheric Composition) is a global-to-meso-scale atmospheric dispersion model developed by the Finnish Meteorological Institute (FMI).
Model
It provides information on atmospheric composition, air quality, and wildfire smoke (PM2.5) and is also able to solve the inverse dispersion problem. It can take data from a variety of sources, including natural ones such as sea salt, blown dust, and pollen.
The FMI provides three datasets based on SILAM: a 4-day global air pollutant (SO2, NO, NO2, O3, PM2.5, and PM10) forecast based on TNO-MACC (global emission) and IS4FIRES (wildfire), a 5-day global wildfire smoke forecast based on IS4FIRES, and a 5-day pollen forecast for Europe.
References
Atmospheric dispersion modeling
Air pollution | SILAM | Chemistry,Engineering,Environmental_science | 194 |
76,762,925 | https://en.wikipedia.org/wiki/PGC%201470080 | PGC 1470080 is a type E elliptical galaxy located in the Boötes constellation. It is located 3 billion light-years away from the Solar System and has a diameter of 571,000 light-years, making it a type-cD galaxy and one of the largest.
Characteristics
It is the brightest cluster galaxy of the galaxy cluster WHL J143845.0+145412. The galaxy acts as a gravitational lens for a much more distant spiral galaxy called SGAS J143845+145407, producing a mirror image of the background galaxy.
This phenomenon occurs when a massive celestial body, such as a galaxy cluster, creates sufficient curvature of spacetime for the path of light from a background source to be bent around it. This can produce multiple images of the original galaxy, with the background object appearing as a distorted arc or a ring.
Observations of this kind take advantage of gravitational lensing to study galaxies in the early universe. Lensing magnifies details of distant galaxies that would otherwise be unobtainable, allowing astronomers to measure star formation in such early galaxies and giving better insight into how galaxy evolution has unfolded. Gravitational lensing is also a useful tool that has contributed significant new results in areas as different as the cosmological distance scale, dark matter in halos, and galaxy structure.
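The angular scale on which a lens of mass M produces such arcs and rings is set by its Einstein radius. The standard point-mass expression, not given in the article itself, is:

```latex
% Einstein radius of a lens of mass M (point-mass approximation), where
% D_l, D_s and D_ls are the angular-diameter distances to the lens, to
% the source, and between lens and source. A standard result quoted here
% for context, not stated in the article.
\theta_E = \sqrt{\frac{4 G M}{c^2}\,\frac{D_{ls}}{D_l D_s}}
```

When the source lies directly behind the lens, the image closes into a full ring of this angular radius.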
Hubble imaging shows PGC 1470080 to be a peculiar lenticular galaxy rather than an elliptical galaxy, as originally classified.
References
Boötes
Principal Galaxies Catalogue objects
LEDA objects
2MASS objects
SDSS objects
Elliptical galaxies | PGC 1470080 | Astronomy | 330 |
2,332,266 | https://en.wikipedia.org/wiki/Iodine%20heptafluoride | Iodine heptafluoride is an interhalogen compound with the chemical formula IF7. It has an unusual pentagonal bipyramidal structure, with D5h symmetry, as predicted by VSEPR theory. The molecule can undergo a pseudorotational rearrangement called the Bartell mechanism, which is like the Berry mechanism but for a heptacoordinated system.
Below 4.5 °C, IF7 forms a snow-white powder of colorless crystals, melting at 5–6 °C. However, this melting is difficult to observe, as the liquid form is thermodynamically unstable at 760 mmHg: instead, the compound begins to sublime at 4.77 °C. The dense vapor has a mouldy, acrid odour.
Preparation
IF7 is prepared by passing F2 through liquid IF5 at 90 °C, then heating the vapours to 270 °C. Alternatively, this compound can be prepared from fluorine and dried palladium or potassium iodide to minimize the formation of IOF5, an impurity arising by hydrolysis. Iodine heptafluoride is also produced as a by-product when dioxygenyl hexafluoroplatinate is used to prepare other platinum(V) compounds such as potassium hexafluoroplatinate(V), using potassium fluoride in iodine pentafluoride solution:
2 O2PtF6 + 2 KF + IF5 → 2 KPtF6 + 2 O2 + IF7
Reactions
Iodine heptafluoride decomposes at 200 °C to fluorine gas and iodine pentafluoride.
Safety considerations
IF7 is highly irritating to both the skin and the mucous membranes. It also is a strong oxidizer and can cause fire on contact with organic material.
References
Common sources
External links
WebBook page for IF7
National Pollutant Inventory - Fluoride and compounds fact sheet
web elements listing
Fluorides
Iodine compounds
Interhalogen compounds
Oxidizing agents
Hypervalent molecules | Iodine heptafluoride | Physics,Chemistry | 427 |
59,508,090 | https://en.wikipedia.org/wiki/Washington%20Glass%20School | The Washington Glass School was founded in 2001 by Washington, DC area artists Tim Tate and Erwin Timmers.
The school teaches classes on how to make kiln cast, fused, and cold worked glass sculptures and art. It is the second largest warm glass school in the United States.
History
Co-Founder Tim Tate's glass sculpture at the 2000 Artomatic art event was acquired by the Smithsonian American Art Museum for the Renwick Gallery's permanent collection. That sale also provided the funds that started the Washington Glass School. Erwin Timmers' artwork was also on exhibit at Artomatic; after the show, the two began to collaborate, later teaming up to start the Washington Glass School & Studio. Michael Janis joined the school in 2003, and became a Co-Director of the Washington Glass School in 2005.
The school was initially located in the neighborhood where Nationals Park now stands, and as a result of the construction of the park, had to relocate to the current location in Mount Rainier, Maryland, just over the border with Washington, D.C.
In 2008, Artomatic organized an exhibit, hosted by the Washington Glass School, that focused on how three "glass" cities approach the sculptural medium. The collaborative show was titled "Glass 3", referencing the invited glass centers of Washington, D.C., Toledo, Ohio, and Sunderland, England.
The exhibit featured nearly 50 glass artists and created an international partnership and strong relationships that led to more international collaborative interactions. Tim Tate and Michael Janis' Fulbright Scholarships were both completed at the University of Sunderland and the UK's National Glass Centre.
Washington Glass Studio
The Washington Glass Studio was established as part of the school in 2001 to create site specific art for architectural and landscape environments. The studio draws on the Washington Glass School Co-director's educational backgrounds in steel and glass sculpture, electronics and video media, architectural design, and ecological sustainability.
Notable public art projects by Washington Glass Studio include the monumental glass doors for the John Adams Building at the Library of Congress. Under the auspices of the Architect of the Capitol, the bronze doors to the John Adams Building were replaced in 2013 with code-compliant sculpted glass panels mirroring the original bronze door sculptures by American artist Lee Lawrie, designed to commemorate the history of the written word, depicting gods of writing as well as the real-life Native American Sequoyah.
The public art commission for artwork at the entrance to the Laurel Branch Library was awarded to the Washington Glass Studio in 2016. The high glass-and-steel sculpture was made involving the surrounding community and library groups. In a series of glass-making workshops, images of books and stories, education and learning, and shared aspirations were created at the Washington Glass School to be incorporated into the internally illuminated tower. In 2023, a second piece of public art for the Prince George's County Memorial Library system, "Reading the Waters," a fused glass mural, was installed at the Bladensburg Branch Library as part of the facility's renovation.
Faculty
Directors
Michael Janis
Tim Tate
Erwin Timmers
Glass Secessionism
The Washington Glass School championed a new art movement dubbed Glass Secessionism to "underscore and define the 21st Century Sculptural Glass Movement and to illustrate the differences and strengths compared to late 20th century technique-driven glass. While the 20th century glass artists contributions have been spectacular and ground breaking, this group focuses on the aesthetic of the 21st century. The object of the Glass-Secession is to advance glass as applied to sculptural expression; to draw together those glass artists practicing or otherwise interested in the arts, and to discuss from time to time examples of the Glass-Secession or other narrative work."
Reflecting the evolving nature of glass art, the name of the Facebook group was amended in 2017 to "21st Century Glass : Conversations and Images / Glass Secessionism".
References
External links
"Capitol Improvements", American Craft Magazine reviews the process in the school's creation of the new cast glass doors for the US Library of Congress Adams Building. June/July 2013.
"All Things Considered - Interview with Tim Tate: A Tiny Digital Arts Revolution, Encased In Glass." National Public Radio. August 3, 2009.
WETA TV - "Around Town Visits the Washington Glass School." Aired July 16, 2007.
Glassmaking schools | Washington Glass School | Materials_science,Engineering | 879 |
71,645,948 | https://en.wikipedia.org/wiki/Code%20property%20graph | In computer science, a code property graph (CPG) is a computer program representation that captures syntactic structure, control flow, and data dependencies in a property graph. The concept was originally introduced to identify security vulnerabilities in C and C++ system code, but has since been employed to analyze web applications, cloud deployments, and smart contracts. Beyond vulnerability discovery, code property graphs find applications in code clone detection, attack-surface detection, exploit generation, measuring code testability, and backporting of security patches.
Definition
A code property graph of a program is a graph representation of the program obtained by merging its abstract syntax trees (AST), control-flow graphs (CFG) and program dependence graphs (PDG) at statement and predicate nodes. The resulting graph is a property graph, which is the underlying graph model of graph databases such as Neo4j, JanusGraph and OrientDB where data is stored in the nodes and edges as key-value pairs. In effect, code property graphs can be stored in graph databases and queried using graph query languages.
Example
Consider the function of a C program:
void foo() {
int x = source();
if (x < MAX) {
int y = 2 * x;
sink(y);
}
}
The code property graph of the function is obtained by merging its abstract syntax tree, control-flow graph, and program dependence graph at statements and predicates as seen in the following figure:
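The graph-traversal queries used for vulnerability discovery can be illustrated on this example. The following is a minimal, self-contained Python sketch that uses plain dictionaries in place of a real graph database (Neo4j, JanusGraph, etc.); the node IDs and edge labels are illustrative and do not follow the Joern schema.

```python
# Statement/predicate nodes of foo() with key-value properties,
# as in a property graph.
nodes = {
    "s1": {"code": "int x = source()"},
    "p1": {"code": "x < MAX"},
    "s2": {"code": "int y = 2 * x"},
    "s3": {"code": "sink(y)"},
}

# Labeled edges from the merged representations: control flow (CFG)
# and data dependencies (taken from the PDG).
edges = [
    ("s1", "p1", {"label": "CFG"}),
    ("p1", "s2", {"label": "CFG"}),
    ("s2", "s3", {"label": "CFG"}),
    ("s1", "p1", {"label": "DDG", "var": "x"}),
    ("s1", "s2", {"label": "DDG", "var": "x"}),
    ("s2", "s3", {"label": "DDG", "var": "y"}),
]

def reaches(src, dst, label):
    """Depth-first search restricted to edges carrying the given label."""
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(v for u, v, p in edges if u == n and p["label"] == label)
    return False

# A query in the spirit of CPG-based vulnerability discovery:
# does tainted data from source() reach sink()?
print(reaches("s1", "s3", "DDG"))  # True
```

In a graph database the same question would be posed as a declarative traversal over edges labeled `DDG`; restricting a single graph to one edge label recovers each of the three classic representations (AST, CFG, PDG) on demand.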
Implementations
Joern CPG. The original code property graph was implemented for C/C++ in 2013 at the University of Göttingen as part of the open-source code analysis tool Joern. This original version has been discontinued and superseded by the open-source Joern Project, which provides a formal code property graph specification applicable to multiple programming languages. The project provides code property graph generators for C/C++, Java, Java bytecode, Kotlin, Python, JavaScript, TypeScript, LLVM bitcode, and x86 binaries (via the Ghidra disassembler).
Plume CPG. Developed at Stellenbosch University in 2020 and sponsored by Amazon Science, the open-source Plume project provides a code property graph for Java bytecode compatible with the code property graph specification provided by the Joern project. The two projects merged in 2021.
Fraunhofer AISEC CPG. The Fraunhofer AISEC project provides open-source code property graph generators for C/C++, Java, Golang, Python, TypeScript and LLVM-IR. It also includes a formal specification of the graph and its various node types. Furthermore, it provides the Cloud Property Graph, an extension of the code property graph concept that models details of cloud deployments.
Galois’ CPG for LLVM. Galois Inc. provides a code property graph based on the LLVM compiler. The graph represents code at different stages of the compilation and a mapping between these representations. It follows a custom schema that is defined in its documentation.
Machine learning on code property graphs
Code property graphs provide the basis for several machine-learning-based approaches to vulnerability discovery. In particular, graph neural networks (GNN) have been employed to derive vulnerability detectors.
See also
Abstract syntax tree (AST)
Control-flow graph (CFG)
Program dependence graph (PDG)
Graph database
References
Computer security software
Application-specific graphs | Code property graph | Engineering | 701 |
3,083,335 | https://en.wikipedia.org/wiki/Otoscope | An otoscope or auriscope is a medical device used by healthcare professionals to examine the ear canal and eardrum. This may be done as part of routine physical examinations, or for evaluating specific ear complaints, such as earaches, sense of fullness in the ear, or hearing loss.
Usage
Function
An otoscope enables viewing and examination of the ear canal and tympanic membrane (eardrum). Otoscopic examination can help diagnose conditions such as acute otitis media (infection of the middle ear), traumatic perforation of the eardrum, and cholesteatoma.
The presence of cerumen (earwax), shed skin, pus, canal skin edema, foreign bodies, and various ear diseases, can obscure the view of the eardrum and thus compromise the value of otoscopy done with a common otoscope, but can confirm the presence of obstructing symptoms.
Otoscopes can also be used to examine patients' noses (avoiding the need for a separate nasal speculum) and upper throats (by removing the speculum).
Method of use
The most common otoscopes consist of a handle and a head. The head contains a light source and a magnifying lens, to help illuminate and enlarge ear structures. The distal (front) end of the otoscope has an attachment for disposable plastic ear specula.
The examiner first pulls on the pinna (usually the earlobe, side or top) to straighten the ear canal, and then inserts the ear speculum side of the otoscope into the outer ear. It is important to brace the index or little finger of the hand holding the otoscope against the patient's head to avoid injuring the ear canal. The examiner then looks through the lens on the rear of the instrument to see inside the ear canal.
In many models, the examiner can remove the lens and insert instruments like specialized suction tips through the otoscope into the ear canal, such as for removing earwax. Most models also have an insertion point for a bulb that pushes air through the speculum (pneumatic otoscopy) for testing eardrum mobility.
Types
Many otoscopes for doctors' offices are wall-mounted, with an electrical cord providing power from an electric outlet. Portable otoscopes powered by batteries (usually rechargeable) in the handle are also available.
Otoscopes are often sold with ophthalmoscopes as a diagnostic set.
Monocular and binocular
Most otoscopes used in emergency rooms, pediatric offices, general practice, and by internists are monocular devices. These provide a two-dimensional view of the ear canal and its contents, and usually at least a portion of the eardrum.
Another method of performing otoscopy (visualization of the ear) is by using a binocular (two-eyed) microscope in conjunction with a larger plastic or metal ear speculum, which provides a much larger field of view. The microscope is suspended from a stand, which frees up both of the examiner's hands; the patient is placed in a supine position and their head is tilted, which keeps the head stable and enables better lighting. The binocular view enables depth perception, which makes removal of earwax or other obstructing materials easier and less hazardous. The microscope also has up to 40× magnification, allowing more detailed viewing of the entire ear canal, and of the entire eardrum (unless prevented by edema of the canal skin). Subtle changes in the anatomy can also be more easily detected and interpreted.
Traditionally, binocular microscopes are only used by otolaryngologists (ear, nose, and throat specialists) and otologists (subspecialty ear doctors). Their widespread adoption in general medicine is hindered by cost and lack of familiarity among pediatric and general medicine professors in physician training programs. Studies have shown that reliance on a monocular otoscope to diagnose ear disease results in a more than 50% chance of misdiagnosis, as compared to binocular microscopic otoscopy.
Pneumatic otoscope
The pneumatic otoscope is used to examine the eardrum for assessing the health of the middle ear. This is done by assessing the eardrum's contour (normal, retracted, full, or bulging), its color (gray, yellow, pink, amber, white, red, or blue), its translucency (translucent, semi-opaque, opaque), and its mobility (normal, increased, decreased, or absent). The pneumatic otoscope is the standard tool used in diagnosing otitis media (infection of the middle ear).
The pneumatic otoscope has a pneumatic (diagnostic) head, which contains a lens, an enclosed light source, and a nipple for attaching a rubber bulb and tubing. By gently squeezing and releasing the bulb in rapid succession, the degree of eardrum mobility in response to positive and negative pressure can be observed. The head is designed so that an airtight chamber is produced when a speculum is attached and fitted snugly into the patient's ear canal. Using a rubber-tipped speculum or adding a small sleeve of rubber tubing at the end of a plastic speculum, can help improve the airtight seal and also help avoid injuring the patient.
By replacing the pneumatic head with a surgical head, the pneumatic otoscope can also be used to clear earwax from the ear canal, and to perform diagnostic tympanocentesis (drainage of fluid from the middle ear) or myringotomy (creation of incision in the eardrum). The surgical head consists of an unenclosed light source and a lens that can swivel over a wide arc.
See also
References
External links
Phisick – Pictures and information about antique otoscopes
Ear procedures
Endoscopes
Medical equipment
French inventions | Otoscope | Biology | 1,256 |
34,505,584 | https://en.wikipedia.org/wiki/Anders%20Lindquist | Anders Gunnar Lindquist (born 21 November 1942) is a Swedish applied mathematician and control theorist. He has made contributions to the theory of partial realization, stochastic modeling, estimation and control, and moment problems in systems and control. In particular, he is known for the discovery of the fast filtering algorithms for (discrete-time) Kalman filtering in the early 1970s, and his seminal work on the separation principle of stochastic optimal control and, in collaborations with Giorgio Picci, the Geometric Theory for Stochastic Realization.
Together with the late Christopher I. Byrnes (dean of the School of Engineering & Applied Science at Washington University in St. Louis from 1991 to 2006) and Tryphon T. Georgiou (Vincentine Hermes-Luh Chair in Electrical Engineering at the University of Minnesota), he is one of the founders of the so-called Byrnes-Georgiou-Lindquist school. They pioneered a new moment-based approach for the solution of control and estimation problems with complexity constraints.
He has been Professor in three continents: America (University of Kentucky, USA), Europe (Royal Institute of Technology, Sweden) and Asia (Shanghai Jiao Tong University, China).
Biography
Lindquist was born in Lund, Sweden. He received his PhD degree from KTH Royal Institute of Technology in Stockholm under the supervision of Lars Erik Zachrisson, and was appointed a Docent of Optimization and Systems Theory in 1972. Subsequently, he held visiting positions at the University of Florida, Brown University, and the State University of New York at Albany, until 1974, when he joined the faculty of Mathematics at the University of Kentucky. He remained at Kentucky until 1983 at which time he returned to the Royal Institute of Technology as a Professor and the Chair of Optimization and Systems Theory.
Over the years, Lindquist has held visiting and affiliate positions at the Washington University in St. Louis, the University of Padova, Consiglio Nazionale delle Ricerche, Arizona State University, the International Institute of Applied Systems Analysis in Vienna, the Russian Academy of Sciences in Moscow, East China Normal University in Shanghai, the Technion in Haifa, the University of California at Berkeley, and the University of Kyoto. He was the Head of the Mathematics Department at the Royal Institute of Technology from 2000 until 2009. Between 2006 and 2014 he was the Director of the Strategic Research Center for Industrial and Applied Mathematics (CIAM) at KTH. In 2011 he was appointed Zhiyuan Chair Professor and Qian Ren Scholar at Shanghai Jiao Tong University.
Lindquist is a member of the Royal Swedish Academy of Engineering Sciences (IVA), a Foreign Member of the Chinese Academy of Sciences (2015), a Member of the Academia Europaea (Academy of Europe), an Honorary Member of Hungarian Operations Research Society, and a Foreign Member of Russian Academy of Natural Sciences. He is a Life Fellow of the IEEE, a Fellow of the Society for Industrial and Applied Mathematics and a Fellow of the International Federation of Automatic Control. He was awarded the SIGEST of the SIAM Review (2001) and the George S. Axelby Award of the IEEE Control Systems Society (2003). He was the Zaborszky Distinguished Lecturer in 2000 and the Distinguished Israel Pollak Lecturer in 2005 and 2006. He received the W. T. and Idalia Reid Prize in Mathematics in 2009 for his "fundamental contributions to the theory of stochastic systems, signals, and control" and an Honorary Doctorate (Doctor Scientiarum Honoris Causa) from The Technion in 2010.
He is the recipient of the 2020 IEEE Control Systems Award, the IEEE technical field award in systems and control.
Anders Lindquist is a Knight Commander with Star of the Order of the Holy Sepulchre.
Selection of publications
A. Lindquist, On feedback control of linear stochastic systems, SIAM J.Control, 11 (May 1973), 323–343.
A. Lindquist, "A new algorithm for optimal filtering of discrete-time stationary processes," SIAM J. Control 12 (November 1974) 736–746.
A. Lindquist with G. Picci, On the stochastic realization problem, SIAM J. Control and Optimization, 17 (1979), 365–389.
W.B. Gragg and A. Lindquist, On the partial realization problem, Linear Algebra and Appl.50 (1983), 277–319.
A. Lindquist and G. Picci, Realization theory for multivariate stationary Gaussian processes, SIAM J. Control and Optimization 23 (1985), 809–857.
C. I. Byrnes, A. Lindquist, S. V. Gusev and A. S. Matveev, A complete parameterization of all positive rational extensions of a covariance sequence, IEEE Transactions on Automatic Control AC-40 (1995), 1841–1857.
A. Lindquist and V.A. Yakubovich, Optimal damping of forced oscillations in discrete-time systems, IEEE Transactions on Automatic Control AC-42 (1997), 786–802.
C. I. Byrnes, T. T. Georgiou and A. Lindquist, A new approach to spectral estimation: A tunable high-resolution spectral estimator, IEEE Trans. Signal Process. SP-49 (2000), 3189–3205.
C. I. Byrnes, T. T. Georgiou and A. Lindquist, A generalized entropy criterion for Nevanlinna-Pick interpolation with degree constraint, IEEE Transactions on Automatic Control AC-46 (2001), 822–839.
C. I. Byrnes, S. V. Gusev and Lindquist, From finite covariance windows to modeling filters: A convex optimization approach, SIAM Review 43 (December 2001), 645–675.
C. I. Byrnes, T. T. Georgiou, A. Lindquist and A. Megretski, Generalized interpolation in H-infinity with a complexity constraint, Trans. American Mathematical Society 358 (2006), no. 3, pp. 965–987.
T.T. Georgiou and A. Lindquist, The separation principle in stochastic control, redux, IEEE Transactions on Automatic Control 58 (October 2013), 2481–2494.
A. Lindquist and G. Picci, The circulant rational covariance extension problem: the complete solution, IEEE Transactions on Automatic Control 58 (November 2013), 2848–2861.
J. Karlsson, A. Lindquist and A. Ringh, The multidimensional moment problem with complexity constraint, Integral Equations and Operator Theory, 2015.
A. Lindquist and G. Picci, Linear Stochastic Systems: A Geometric Approach to Modeling, Estimation and Identification, Series in Contemporary Mathematics, Vol.1, Springer Berlin Heidelberg, 2015.
References
External links
Faculty page, the Royal Institute of Technology
Anders Lindquist, the Mathematics Genealogy Project
1942 births
Control theorists
Swedish electrical engineers
Members of the Royal Swedish Academy of Engineering Sciences
Members of Academia Europaea
Foreign members of the Chinese Academy of Sciences
Academic staff of the KTH Royal Institute of Technology
Living people
People from Lund
KTH Royal Institute of Technology alumni
University of Kentucky faculty
Fellows of the Society for Industrial and Applied Mathematics
Fellows of the IEEE
Fellows of the International Federation of Automatic Control
Washington University in St. Louis mathematicians
National Research Council (Italy) people | Anders Lindquist | Engineering | 1,557 |
4,829,062 | https://en.wikipedia.org/wiki/Ethylhexyl%20palmitate | Ethylhexyl palmitate, also known as octyl palmitate, is the fatty acid ester derived from 2-ethylhexanol and palmitic acid. It is frequently utilized in cosmetic formulations.
Chemical structure
Ethylhexyl palmitate is a branched saturated fatty ester derived from ethylhexyl alcohol and palmitic acid.
Physical properties
Ethylhexyl palmitate is a clear, colorless liquid with a slightly fatty odor at room temperature.
The ester is synthesized by reacting palmitic acid and 2-ethylhexanol in the presence of an acid catalyst.
Uses
Ethylhexyl palmitate is used in cosmetic formulations as a solvent, carrying agent, pigment wetting agent, fragrance fixative and emollient. Its dry-slip skinfeel is similar to some silicone derivatives.
References
Cosmetics chemicals
Fatty acid esters
Lipids
Palmitate esters
2-Ethylhexyl esters | Ethylhexyl palmitate | Chemistry | 200 |
21,353,318 | https://en.wikipedia.org/wiki/HD%207924%20b | HD 7924 b is an extrasolar planet located approximately 55 light years away in the constellation of Cassiopeia, orbiting the 7th magnitude K-type main sequence (slightly metal poor) star HD 7924. It was published on January 28, 2009 and is the second planet discovered in the constellation Cassiopeia. Two additional planets in this system were discovered in 2015.
Super-Earth
HD 7924 b is a super-Earth exoplanet with a minimum mass 9.2 times that of Earth; it takes only about 129.5 hours to orbit the star, at an average distance of . When HD 7924 b was discovered in 2009, it was one of only eight planets known with a minimum mass less than 10 Earth masses. Its discovery also made 2009 the fifth consecutive year since 2005 in which super-Earth planets were discovered.
Characteristics
While the radius of HD 7924 b is unknown, depending on its composition it would be between 1.4 and 6 times the diameter of the Earth. It is unknown whether this planet is rocky or gaseous. Since the true mass of this planet is not known, it might be gaseous if the true mass is considerably more than the minimum mass. If the true mass is near the minimum of 9.2 Earth masses, the planet could be rocky.
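The quoted orbital period can be turned into an orbital distance with Kepler's third law, a³ = G·M★·P²/(4π²). The sketch below assumes a stellar mass of about 0.8 solar masses for the K-dwarf host; that mass is an assumed value, not taken from this article.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_star = 0.8 * M_SUN   # assumed mass of HD 7924 (K-dwarf), not from the article
P = 129.5 * 3600.0     # orbital period in seconds

# Kepler's third law: a^3 = G * M_star * P^2 / (4 * pi^2)
a = (G * M_star * P**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"semi-major axis ≈ {a / AU:.3f} AU")  # prints "semi-major axis ≈ 0.056 AU"
```

Under the assumed stellar mass the planet sits at roughly 0.06 AU, far inside Mercury's orbit, which is consistent with its very short period.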
References
External links
Exoplanets discovered in 2009
Terrestrial planets
Super-Earths
Cassiopeia (constellation)
Exoplanets detected by radial velocity | HD 7924 b | Astronomy | 301 |
653,060 | https://en.wikipedia.org/wiki/Methyl%20ethyl%20ketone%20peroxide | Methyl ethyl ketone peroxide (MEKP) is an organic peroxide with the formula [(CH3)(C2H5)C(O2H)]2O2. MEKP is a colorless oily liquid. It is widely used in vulcanization (crosslinking) of polymers.
It is derived from the reaction of methyl ethyl ketone and hydrogen peroxide under acidic conditions. Several products result from this reaction, including a cyclic dimer. The linear dimer, the topic of this article, is the most prevalent, and this is the form typically quoted for the commercially available material.
Solutions of 30 to 40% MEKP are used in industry and by hobbyists as a catalyst to initiate the crosslinking of unsaturated polyester resins used in fiberglass and casting. For this application, MEKP is often dissolved in a phlegmatizer such as dimethyl phthalate or cyclohexane peroxide to reduce sensitivity to shock. Benzoyl peroxide can be used for the same purpose.
Safety
Whereas acetone peroxide is a white powder at STP, MEKP is slightly less sensitive to shock and temperature, and more stable in storage.
MEKP is a severe skin irritant and can cause progressive corrosive damage or blindness.
The volatile decomposition products of MEKP can contribute to the formation of vapor-phase explosions. Ensuring safe storage is important, and the maximum storage temperature should be limited to below 30 °C.
Notes
External links
CDC - NIOSH Pocket Guide to Chemical Hazards
The Register: Mass murder in the skies: was the plot feasible?
New York Times: Details Emerge in British Terror Case
The Free Information Society: HMTD Synthesis
How MEKP cures Unsaturated Polyester Resin (video animation)
Liquid explosives
Ketals
Organic peroxides
Radical initiators
Organic peroxide explosives | Methyl ethyl ketone peroxide | Chemistry,Materials_science | 397 |
37,486,268 | https://en.wikipedia.org/wiki/Palo%20Alto%20Networks | Palo Alto Networks, Inc. is an American multinational cybersecurity company with headquarters in Santa Clara, California. The core product is a platform that includes advanced firewalls and cloud-based offerings that extend those firewalls to cover other aspects of security. The company serves over 70,000 organizations in over 150 countries, including 85 of the Fortune 100. It is home to the Unit 42 threat research team and hosts the Ignite cybersecurity conference. It is a partner organization of the World Economic Forum.
In June 2018, former Google and SoftBank executive Nikesh Arora joined the company as Chairman and CEO.
History
Palo Alto Networks was founded in 2005 by Nir Zuk, a former engineer from Check Point and NetScreen Technologies. Zuk, an Israeli native, began working with computers during his mandatory military service in the Israeli Defense Forces in the early 1990s.
The company debuted on the NYSE on July 20, 2012, raising $260 million with its initial public offering, which was the 4th-largest tech IPO of 2012. It remained on the NYSE until October 2021 when the company transferred its listing to Nasdaq.
In 2014, Palo Alto Networks founded the Cyber Threat Alliance with Fortinet, McAfee, and NortonLifeLock, a not-for-profit organization with the goal of improving cybersecurity "for the greater good" by encouraging cybersecurity organizations to collaborate by sharing cyber threat intelligence among members. By 2018, the organization had 20 members including Cisco, Check Point, Juniper Networks, and Sophos.
In 2018, the company began opening cybersecurity training facilities around the world as part of the Global Cyber Range Initiative.
In May 2018, the company announced Application Framework, an open cloud-delivered ecosystem where developers can publish security services as SaaS applications that can be instantly delivered to customers.
In 2019, the company announced the K2-Series, a 5G-ready next-generation firewall developed for service providers with 5G and IoT requirements. In February 2019, the company announced Cortex, an AI-based continuous security platform.
Acquisitions
January 2014: Morta Security
April 2014: Cyvera for approximately $200 million
May 2015: CirroSecure
March 2017: LightCyber for approximately $100 million
March 2018: Cloud Security company Evident.io for $300 million. This acquisition created the Prisma Cloud division.
April 2018: Secdo
October 2018: RedLock for $173 million
February 2019: Demisto for $560 million
May 2019: Twistlock for $410 million
June 2019: PureSec for $47 million
September 2019: Zingbox for $75 million
November 2019: Aporeto, Inc. for $150 million
April 2020: CloudGenix, Inc. for $420 million
August 2020: Crypsis Group for $265 million
November 2020: Palo Alto Networks announced its intent to acquire Expanse for $800 million.
February 2021: Bridgecrew for $156 million
November 2022: Cider Security for $300 million.
October 2023: Announced its intent to acquire Dig Security for $400 million
November 2023: Talon Cyber Security for $625 million
December 2023: Dig Security for $400 million
Threat research
Unit 42 is the Palo Alto Networks threat intelligence and security consulting team. They are a group of cybersecurity researchers and industry experts who use data collected by the company's security platform to discover new cyber threats, such as new forms of malware and malicious actors operating across the world. The group runs a popular blog where they post technical reports analyzing active threats and adversaries. Multiple Unit 42 researchers have been named in the MSRC Top 100, Microsoft's annual ranking of top 100 security researchers. In April 2020, the business unit consisting of Crypsis Group which provided digital forensics, incident response, risk assessment, and other consulting services merged with the Unit 42 threat intelligence team.
According to the FBI, Palo Alto Networks Unit 42 has helped solve multiple cybercrime cases, such as the Mirai Botnet and Clickfraud Botnet cases, the LuminosityLink RAT case, and assisted with "Operation Wire-Wire".
In 2018, Unit 42 discovered Gorgon, a hacking group believed to be operating out of Pakistan and targeting government organizations in the United Kingdom, Spain, Russia, and the United States. The group was detected sending spear-phishing emails attached to infected Microsoft Word documents using an exploit commonly used by cybercriminals and cyber-espionage campaigns.
In September 2018, Unit 42 discovered Xbash, a ransomware strain that also performs cryptomining, believed to be tied to the Chinese threat actor "Iron". Xbash is able to propagate like a worm and deletes databases stored on victim hosts. In October, Unit 42 warned of new cryptomining malware, XMRig, that comes bundled with infected Adobe Flash updates. The malware uses the victim computer's resources to mine Monero cryptocurrency.
In November 2018, Palo Alto Networks announced the discovery of "Cannon", a trojan being used to target United States and European government entities. The hackers behind the malware are believed to be Fancy Bear, the Russian hacking group believed to be responsible for hacking the Democratic National Committee in 2016. The malware communicates with its command and control server with email and uses encryption to evade detection.
References
External links
2005 establishments in California
2012 initial public offerings
Companies based in Santa Clara, California
Companies listed on the Nasdaq
Companies formerly listed on the New York Stock Exchange
Computer companies of the United States
Computer hardware companies
Computer security companies
Networking companies of the United States
Networking hardware companies
Technology companies based in the San Francisco Bay Area
Technology companies established in 2005 | Palo Alto Networks | Technology | 1,172 |
10,977,572 | https://en.wikipedia.org/wiki/Haradh%20gas%20plant | The Haradh Gas Plant is one of the major gas plants in Saudi Arabia. It is located near Haradh village, 300 km southwest of Dhahran. The plant has the capacity to produce 1.6 billion standard cubic feet per day (BSCFD) of natural gas and 170,000 barrels per day of condensate (oil), and it processes only non-associated gas. The plant is considered mid-sized compared to its sister plants in the region; however, the amount of oil it processes is relatively large.
The plant started operating in April 2003.
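For readers more used to SI units, the quoted capacities can be converted as in the sketch below; the conversion factors (not any plant data) are the only assumptions introduced here:

```python
# Rough SI conversions of the Haradh plant's quoted capacities.
CUBIC_FT_TO_M3 = 0.0283168   # one standard cubic foot in cubic metres
BARREL_TO_M3 = 0.158987      # one oil barrel in cubic metres

gas_scfd = 1.6e9             # 1.6 BSCFD of natural gas
condensate_bbl = 170_000     # barrels of condensate per day

gas_m3_per_day = gas_scfd * CUBIC_FT_TO_M3             # roughly 45 million m3/day
condensate_m3_per_day = condensate_bbl * BARREL_TO_M3  # roughly 27,000 m3/day
```
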
References
Haradh
Natural gas in Saudi Arabia
Natural gas plants | Haradh gas plant | Chemistry | 129 |
10,385,693 | https://en.wikipedia.org/wiki/Squirrel%20Systems | Squirrel Systems is a point of sale vendor specializing in hospitality management systems, based in Burnaby, British Columbia, Canada.
History
Squirrel Systems was founded in 1984, and released the first restaurant point of sale system to use an integrated diskless touchscreen terminal for order management. Originally a wholly owned subsidiary of Sulcus Hospitality Technologies Corporation, in 1998 Sulcus merged with Eltrax Systems, Incorporated (Nasdaq SmallCap: ELTX). Squirrel is currently a wholly owned subsidiary of Marin Investments Ltd.
Squirrel Workstation
One of the unique characteristics of Squirrel's original product was its use of hardened LCD touchscreen terminals. Unlike other systems, which used keyboards and CRT monitors, Squirrel terminals had no moving parts and were easily adapted to any operating environment. The original Squirrel terminal reached over 35,000 installed units worldwide and was the first to integrate an LCD panel, credit card reader, employee ID reader, and CPU inside a single unit. Later units incorporated IP connectivity, remote booting of a customized Linux operating system, and a Java virtual machine.
Squirrel Embedded Linux
In 1998 Squirrel Systems released Squirrel Embedded Linux (SEL), a customized Linux distribution for its "thin client" terminal architecture. SEL had several characteristics that were unique at the time of development, including primary support for diskless workstations, customized high-volume touchscreen drivers, an integrated Java virtual machine with hardware control, and two-stage booting from a Windows server.
Industry awards
In 2010, O'Charley's named Squirrel as its Enterprise Support Partner of the Year at the annual Inukshuk Business Partner Awards.
Squirrel Systems was awarded the 2009 Epson Envision Award for Innovation for its Squirrel in a Box product.
Squirrel Systems was awarded the 1999 Independent Cash Register Dealers Association Silver Award for Outstanding Sponsor in Systems/Software.
In 1998, Squirrel was the third recipient of the Microsoft Retail Application Developer award at the HITEC Show in Los Angeles. Microsoft recognized SquirrelONE as the first application to integrate Java, Microsoft SQL Server, and Windows NT in the retail market.
References
Further reading
"Shift4 and Squirrel Systems Partner to Offer Payment Solution to the Hospitality Industry", Entertainment Close-up (HighBeam Research)
"Squirrel One and Merchant Link Integrate Solutions", Wireless News (HighBeam Research)
"Payment Software from Squirrel Systems Certified by NetSPI as Compliant with Latest PA-DSS Standard", Information Technology Newsweekly (HighBeam Research)
External links
Technology companies of Canada
Companies based in Burnaby
Computer hardware companies
Companies established in 1984
Diskless workstations
Point of sale companies
1984 establishments in British Columbia | Squirrel Systems | Technology | 520 |
54,439,547 | https://en.wikipedia.org/wiki/Administration%20of%20Radioactive%20Substances%20Advisory%20Committee | The Administration of Radioactive Substances Advisory Committee (ARSAC) is an advisory non-departmental public body of the government of the United Kingdom. It is sponsored by the Department of Health.
The committee advises government on the certification of doctors and dentists who want to use radioactive medicinal products on people.
Doctors and dentists who use radioactive medicinal products (radiopharmaceuticals) on people must get a certificate from health ministers. This certificate allows them to use radioactive medicinal products in diagnosis, therapy and research.
ARSAC was set up to advise health ministers with respect to the grant, renewal, suspension, revocation and variation of certificates and generally in connection with the system of prior authorisation required by Article 5(a) of Council Directive 76/579/Euratom.
The majority of ARSAC's members are medical doctors who are appointed to the committee as independent experts in their field (for example nuclear medicine). The committee comments on applications in confidence to the ARSAC Support Unit, Public Health England. No individual committee member approves any single application.
An official from the Department of Health authorises successful applications on behalf of the Secretary of State.
See also
Centre for Radiation, Chemical and Environmental Hazards in Oxfordshire
References
External links
Nuclear medicine organizations
Non-departmental public bodies of the United Kingdom government | Administration of Radioactive Substances Advisory Committee | Engineering | 264 |
22,332,556 | https://en.wikipedia.org/wiki/VERSAdos | VERSAdos is an operating system dating back to the early 1980s for the Motorola 68000 development system called the EXORmacs, which featured the VERSAbus and an array of option cards. EXORmacs systems were typically connected to CDC Phoenix disk drives with one to four 14-inch platters. The EXORmacs was used to emulate a 680xx processor in-circuit, speeding development of 680xx-based systems, and it also hosted several compilers and assemblers.
VERSAdos and the EXORmacs were produced by Motorola's Microsystems Division.
Overview
VERSAdos was a real-time, multi-user operating system. It was the follow-on to MDOS, the single-user operating system for the EXORciser, Motorola's 6800 development system.
Both systems featured a harness with a CPU-socket-compatible connector.
A Modula-2 compiler was ported to VERSAdos.
Commands
The following list of commands and utilities is supported by VERSAdos.
^
ACCT
ARGUMENTS
ASSIGN
BACKUP
BATCH
BSTOP
BTERM
BUILDS
BYE
CANCEL
CHAIN
CLOSE
CONFIG
CONNECT
CONTINUE
COPY
CREF
DATE
DEFAULTS
DEL
DIR
DMT
DUMP
DUMPANAL
ELIMINATE
EMFGEN
END
FREE
HELP
INIT
LIB
LIST
LOAD
LOGOFF
MBLM
MERGEOS
MIGR
MT
NEWS
NOARGUMENTS
NOVALID
OFF
OPTION
PASS
PASSWORD
PATCH
PROCEED
PRTDUMP
QUERY
R?
RENAME
REPAIR
RETRY
SCRATCH
SECURE
SESSIONS
SNAPSHOT
SPL
SPOOL
SRCCOM
START
STOP
SWORD
SYSANAL
TERMINATE
TIME
TRANSFER
UPLOADS
USE
VALID
See also
CP/M-68K
References
External links
http://bitsavers.org/pdf/motorola/versados/
https://www.ricomputermuseum.org/collections-gallery/small-systems-at-ricm/motorola-exormacs-development-system
Discontinued operating systems
Motorola | VERSAdos | Technology | 369 |
6,752,269 | https://en.wikipedia.org/wiki/Guy%20Medal | The Guy Medals are awarded by the Royal Statistical Society in three categories; Gold, Silver and Bronze. The Silver and Bronze medals are awarded annually. The Gold Medal was awarded every three years between 1987 and 2011, but is awarded biennially as of 2019. They are named after William Guy.
The Guy Medal in Gold is awarded to fellows or others who are judged to have merited a signal mark of distinction by reason of their innovative contributions to the theory or application of statistics.
The Guy Medal in Silver is awarded to any fellow or, in exceptional cases, to two or more fellows in respect of a paper/papers of special merit communicated to the Society at its ordinary meetings, or in respect of a paper/papers published in any of the journals of the Society. General contributions to statistics may also be taken into account.
The Guy Medal in Bronze is awarded to fellows, or to non-fellows who are members of a section or a local group, in respect of a paper or papers read to a section or local group or at any conference run by the Society, its sections or local groups, or published in any of the Society's journals. Preference will be given to people under the age of 35. Exceptionally two or more authors of a paper/papers may be considered for the award provided they are members of sections or local groups.
Gold Medalists
Source:
1892 Charles Booth
1894 Robert Giffen
1900 Jervoise Athelstane Baines
1907 Francis Ysidro Edgeworth
1908 Patrick G. Craigie
1911 G. Udny Yule
1920 T. H. C. Stevenson
1930 A. William Flux
1935 Arthur Lyon Bowley
1945 Major Greenwood
1946 Ronald Fisher
1953 A. Bradford Hill
1955 Egon Pearson
1960 Frank Yates
1962 Harold Jeffreys
1966 Jerzy Neyman
1968 Maurice Kendall
1969 M. S. Bartlett
1972 Harald Cramér
1973 David Cox
1975 George Alfred Barnard
1978 Roy Allen
1981 David George Kendall
1984 Henry Daniels
1986 Bernard Benjamin
1987 Robin L. Plackett
1990 Peter Armitage
1993 George E. P. Box
1996 Peter Whittle
1999 Michael Healy
2002 Dennis Lindley
2005 John Nelder
2008 James Durbin
2011 C. R. Rao
2013 John Kingman
2014 Bradley Efron
2016 Adrian Smith
2019 Stephen Buckland
2020 David Spiegelhalter
2022 Nancy Reid
2024 Peter Diggle
Silver Medalists
1893 John Glover
1894 Augustus Sauerbeck
1895 Arthur Lyon Bowley
1897 Fred J. Atkinson
1899 Charles Stewart Loch
1900 Richard Crawford
1901 Thomas A. Welton
1902 R. H. Hooker
1903 Yves Guyot
1904 D. A. Thomas
1905 R. Henry Rew
1906 W. Napier Shaw
1907 Noel A. Humphreys
1909 Edward Brabrook
1910 G. H. Wood
1913 Reginald Dudfield
1914 Simon Rowson
1915 Sydney John Chapman
1918 J. Shield Nicholson
1919 J. C. Stamp
1921 A. William Flux
1927 H. W. Macrosty
1928 Ethel Newbold
1930 Herbert Edward Soper
1934 John Harry Jones
1935 Ernest Charles Snow
1936 Ralph George Hawtrey
1938 Edmund Cecil Ramsbottom
1939 Leon Isserlis
1940 Hector Leak
1945 Maurice Kendall
1950 Harry Campion
1951 F. A. A. Menzler
1952 M. S. Bartlett
1953 J. Oscar Irwin
1954 L. H. C. Tippett
1955 David George Kendall
1957 Henry Daniels
1958 George Barnard
1960 Edgar C. Fieller
1961 David Cox
1962 P. V. Sukhatme
1964 George Box
1965 C. R. Rao
1966 Peter Whittle
1968 Dennis Lindley
1973 Robin Plackett
1976 James Durbin
1977 John Nelder
1978 Peter Armitage
1979 Michael Healy
1980 Mervyn Stone
1981 John Kingman
1982 Henry Wynn
1983 Julian E. Besag
1984 John C. Gittins
1985 Derek Bissell and Wilfrid Pridmore
1986 Richard Peto
1987 John Copas
1988 John Aitchison
1989 Frank Kelly
1990 David Clayton
1991 Richard L. Smith
1992 Robert Curnow
1993 Adrian Smith
1994 David Spiegelhalter
1995 Bernard Silverman
1996 Steffen Lauritzen
1997 Peter Diggle
1998 Harvey Goldstein
1999 Peter Green
2000 Walter Gilks
2001 Philip Dawid
2002 David Hand
2003 Kanti Mardia
2004 Peter Donnelly
2005 Peter McCullagh
2006 Michael Titterington
2007 Howell Tong
2008 Gareth Roberts
2009 Sylvia Richardson
2010 Iain M. Johnstone
2011 Peter Hall
2012 David Firth
2013 Brian D. Ripley
2014 Jianqing Fan
2015 Anthony C. Davison
2016 Nancy Reid
2017 Neil Shephard
2018 Peter Bühlmann
2019 Susan Murphy
2020 Arnaud Doucet
2021 Håvard Rue
2022 Paul Fearnhead
2023 Mark Girolami
2024 Jonathan Tawn
Bronze Medalists
1936 William Gemmell Cochran
1938 Ronald Frank George
1949 W. J. Jennett
1962 Peter Armitage
1966 James Durbin
1967 Frank Downton
1968 Robin Plackett
1969 Malcolm C. Pike
1970 Peter G. Moore
1971 D. J. Bartholomew
1974 Graham N. Wilkinson
1975 Alfred Frederick Bissell
1976 P. L. Goldsmith
1977 A. F. M. Smith
1978 Philip Dawid
1979 T. M. F. Smith
1980 A. John Fox
1982 Stuart Pocock
1983 Peter McCullagh
1984 Bernard Silverman
1985 David Spiegelhalter
1986 D. F. Hendry
1987 Peter Green
1988 Sarah C. Darby
1989 Sheila M. Gore
1990 Valerie S. Isham
1991 Mike G. Kenward
1992 Christopher Jennison
1993 Jonathan Tawn
1994 Rosemary F. A. Poultney
1995 Iain Johnstone
1996 John N. S. Matthews
1997 Gareth Roberts
1998 David Firth
1999 Peter W. F. Smith and Jon Forster
2000 Jon Wakefield
2001 Guy Nason
2002 Geert Molenberghs
2003 Peter Lynn
2004 Nicola Best
2005 Steve Brooks
2006 Matthew Stephens
2007 Paul Fearnhead
2008 Fiona Steele
2009 Chris Holmes
2010 Omiros Papaspiliopoulos
2011 Nicolai Meinshausen
2012 Richard Samworth
2013 Piotr Fryzlewicz
2014 Ming Yuan
2015 Jinchi Lv
2017 Yingying Fan
2018 Peng Ding
2019 Jonas Peters
2020 Rachel McCrea
2021 Pierre E. Jacob
2022 Rajen Shah
2023 Tengyao Wang
2024 Chris Oates
See also
List of mathematics awards
References
External links
Guy Medal. Royal Statistical Society website.
Awards of the Royal Statistical Society
Awards established in 1892
1892 establishments in the United Kingdom
Mathematics awards
Statistical awards | Guy Medal | Technology | 1,259 |
1,809,910 | https://en.wikipedia.org/wiki/Smart%20gun | A smart gun, also called a smart-gun or smartgun, is a firearm that can detect its authorized user(s) or something that is normally only possessed by its authorized user(s). The term is also used in science fiction to refer to various types of semi-automatic firearms.
Smart guns have one or more systems that allow them to fire only when activated by an authorized user. Those systems typically employ RFID chips or other proximity tokens, fingerprint recognition, magnetic rings, or mechanical locks. They can thereby prevent accidental shootings, gun thefts, and criminal usage by persons not authorized to use the guns.
Related to smart guns are other smart firearms safety devices such as biometric or RFID activated accessories and safes.
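The proximity-token schemes described above are, at heart, a challenge-response handshake: the firearm issues a fresh random challenge, and it unlocks only if the worn token can answer with a keyed hash over that challenge. The following Python sketch illustrates the idea only; the function names, key handling, and choice of HMAC-SHA-256 are illustrative assumptions, not any vendor's actual protocol:

```python
import hashlib
import hmac
import os

def token_respond(shared_key: bytes, challenge: bytes) -> bytes:
    """Worn token's side: answer a challenge with an HMAC over it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def firearm_authorize(shared_key: bytes, respond) -> bool:
    """Firearm's side: issue a fresh random challenge and verify the reply."""
    challenge = os.urandom(16)   # fresh nonce per attempt, resists replay
    reply = respond(challenge)
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking how much of the reply matched.
    return hmac.compare_digest(reply, expected)
```

A token sharing the paired key authorizes the firearm; a token with any other key does not, which is the property that blocks unauthorized users.
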
Commercial availability
No smart gun has ever been sold on the commercial market in the United States. The Armatix iP1, a .22 caliber handgun unlocked by an active RFID watch, is the most mature smart gun developed. It was briefly slated to be offered at a few retailers before being withdrawn under pressure from gun-rights advocates concerned that its sale would trigger the New Jersey Childproof Handgun Law.
As of 2019, a number of startups and companies including Armatix, Biofire, LodeStar Firearms, and Swiss company SAAR are purportedly developing various smart handguns and rifles, but none have brought the technology to market.
Reception
Reception to the concept of smart gun technology has been mixed. There have been public calls to develop the technology, most notably from President Obama. Gun-rights groups including the National Rifle Association of America have expressed concerns that the technology could be mandated, and some firearms enthusiasts are concerned that the technology wouldn't be reliable enough to trust.
National Rifle Association
The NRA and its membership boycotted Smith & Wesson after it was revealed in 1999 that the company was developing a smart gun for the U.S. government.
More recently, the official policy of the NRA-ILA, the lobbying arm of the NRA, with regard to smart guns is as follows: "The NRA doesn't oppose the development of 'smart' guns, nor the ability of Americans to voluntarily acquire them. However, NRA opposes any law prohibiting Americans from acquiring or possessing firearms that don't possess "smart" gun technology."
Law enforcement
Some smart gun proponents have called for federal, state, and local police organizations to take the lead on adopting smart gun technology, either voluntarily or via purchasing mandate. There has been scattered support for voluntary test programs from some law enforcement leaders, including San Francisco Police Chief Greg Suhr, who has said, "Officer safety is huge, so you wouldn't want to compel that upon officers. But we have so many officers who are so into technology, I am all but certain there are officers that would be willing to do such a pilot."
Richard Beary, president of the International Association of Chiefs of Police, said there would be "plenty of agencies interested in beta testing the technology" and that "[a smart gun] can't be 99 percent accurate, it has to be 100 percent accurate. It has to work every single time." James Pasco, executive director of the Fraternal Order of Police, which represents 325,000 officers nationwide, has stated, "Police officers in general, federal officers in particular, shouldn't be asked to be the guinea pigs in evaluating a firearm that nobody's even seen yet. We have some very, very serious questions."
New Jersey mandate
In the United States, New Jersey passed the Childproof Handgun Bill into state law on December 23, 2002, which would have required that all guns sold in the state of New Jersey have a mechanism to prevent unauthorized users from firing it, taking effect three years after such a smart gun is approved by the state. Weapons used by law enforcement officers would be exempt from the smart gun requirement. In July 2019, Governor Phil Murphy signed into law a bill which repealed substantially all of the original Childproof Handgun Law and replaced it with a requirement that after the state Attorney General approves a production model each firearms retailer in the state would be required to carry and display at least one smart gun on their shelves with "a sign... disclosing the features of personalized handguns that are not offered by traditional handguns".
The potential effects of New Jersey's smart gun law have also influenced opposition to the technology in the United States; two attempts to sell the Armatix iP1 smart gun in California and Maryland were met with opposition from gun rights groups, who argued that allowing the gun to be sold in the United States would trigger the law. In December 2014, the Attorney General of New Jersey determined that the Armatix iP1 would not meet the legal criteria sufficient to trigger the mandate.
Reliability concerns
Many firearm enthusiasts object to smart guns on a philosophical and regulatory basis. Gun ownership advocate Kenneth W. Royce, writing under the pen name of "Boston T. Party", wrote that "no defensive firearm should ever rely upon any technology more advanced than Newtonian physics. That includes batteries, radio links, encryption, scanning devices and microcomputers."
TechCrunch technology and outdoors journalist Jon Stokes has summarized these reliability concerns with smart guns.
Potential advantages
Gun owners
Smart firearms safety technology is intended to prevent the accidental use and misuse of firearms by children and teens, as well as to reduce accidental discharges and the use of a firearm against its owner if the firearm is stolen or taken away. Smart guns may also reduce incidents of suicide by unauthorized users of a firearm.
Law enforcement
Law enforcement applications also hold promise; San Francisco Police Chief Greg Suhr went on record supporting smart guns for their potential to reduce the risk of having a law enforcement officer's gun used against him or her, and for rendering stolen guns unfireable. Richard Beary, president of the International Association of Chiefs of Police, was quoted in the Washington Post as saying there would be "plenty of agencies interested in beta testing the [smart gun] technology."
In October 2013 the European Commission published a document by commissioner Cecilia Malmström, stating that "the Commission will work with the firearms industry to explore technological solutions, such as biometric sensors where personal data is stored in the firearm, for ensuring that purchased firearms may only be used by their legal owner. It will carry out a detailed cost-benefit analysis on the question of making such 'smart gun' security features mandatory for firearms lawfully sold in the EU."
Potential disadvantages
Joseph Steinberg writes that "biometrics take time to process and are often inaccurate – especially when a user is under duress – as is likely going to be the case in any situation in which he needs to brandish a gun.... it is not ideal to add a requirement for power to devices utilized in cases of emergency that did not need electricity previously. How many fire codes allow fire extinguishers that require a battery to operate?" Steinberg further writes that "smartguns might be hackable" or "susceptible to government tracking or jamming...Firearms must be able to be disassembled in order to be cleaned and maintained. One of the principles of information security is that someone who has physical access to a machine can undermine its security." In a follow-up piece published in January 2016, Steinberg noted that smartguns that utilize wireless communications to detect that the shooter is wearing a watch, bracelet, or other device may "allow criminals (and police) to identify who is carrying a weapon" undermining "one of the reasons that some states require people to carry their weapons concealed; if all civilian-carried guns are concealed, criminals do not know who is carrying and who is not, so they have to fear mugging everyone, which protects the unarmed as well as the armed."
According to an article on an NRA website, other concerns are that smart guns may make a firearm more likely to fail when needed for self-defense. "Batteries go dead, temperature or moisture can harm electronics and many 'smart gun' designs, such as Armatix's iP1, require that a person wear a watch, bracelet, or other device." Smart guns may also take considerable time to be ready for firing from a "cold start."
In science fiction
Smart guns are commonly used in science fiction, where they may not only have biometric identification, but also have auto-aiming capabilities or smart bullets. A prominent example is the Lawgiver used by Judge Dredd, which is linked to his DNA. Another is the M56 Smart Gun from Aliens, which is carried via a waist-mounted robotic arm. The concept was later used in a U.S. Army prototype, although engineers moved the mount from the waist to the back, due to ergonomic issues.
See also
Locationized gun
Sentry gun
References
External links
Long, Duncan (July 20, 2002). "Do You Really Need a Smart Gun?" duncanlong.com
Rosenwald, Michael S. (May 2, 2014). "Threats against Maryland gun dealer raise doubts about future of smart guns". The Washington Post
Smart devices
Trial and research firearms
Gun politics in the United States
Firearm safety
Biometrics
Science fiction weapons
Fictional firearms | Smart gun | Technology | 1,898 |
9,591,516 | https://en.wikipedia.org/wiki/Seer%20%28unit%29 | A seer (also sihr) is a traditional unit of mass and volume used in large parts of Asia prior to the middle of the 20th century. It remains in use only in a few countries, such as Afghanistan, Iran, and parts of India, although in Iran it denotes a smaller unit of weight than the one used in India.
India
In India, the seer was a traditional unit used mostly in Northern India, including the Hindi-speaking region, and in Telangana in the south. Officially, the seer was defined by the Standards of Weights and Measures Act (No. 89 of 1956, amended in 1960 and 1964) as being exactly equal to . However, there were many local variants of the seer in India. Note that the chart below gives the maund weight for Mumbai; divide by 40 to obtain a seer.
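As a worked example of the 40-seers-to-the-maund relationship noted above, the sketch below converts both units to kilograms. The per-seer figure of 0.93310 kg is an assumption made for illustration (a value commonly cited for the 1956 Act, whose exact figure is elided above); only the 40:1 ratio comes from the text, and local variants differed:

```python
SEER_KG = 0.93310        # assumed statutory value in kilograms (illustrative)
SEERS_PER_MAUND = 40     # ratio given in the text: maund / 40 = seer

def seer_to_kg(seers: float) -> float:
    """Convert a quantity in seers to kilograms."""
    return seers * SEER_KG

def maund_to_kg(maunds: float) -> float:
    """Convert a quantity in maunds to kilograms via the 40-seer ratio."""
    return maunds * SEERS_PER_MAUND * SEER_KG
```

Under that assumed value, one maund works out to about 37.3 kg.
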
Aden, Nepal and Pakistan
In Aden (Yemen), Nepal, and Pakistan, the seer was a unit derived from the Government seer of British colonial days.
Afghanistan
In Afghanistan, it was a unit of mass, approximately .
Persia/Iran
In Persia (and later Iran), it was and remains in use as two units:
The metric seer was
The seer (sihr) was
The smaller weight is now part of the national weight system in Iran and is used on a daily basis for small measures of delicate foodstuffs and choice produce.
Sri Lanka
In Sri Lanka, it was a measure of capacity, approximately .
See also
List of customary units of measurement in South Asia
References
Units of mass
Units of volume
Customary units in India
Obsolete units of measurement | Seer (unit) | Physics,Mathematics | 309 |
9,502,948 | https://en.wikipedia.org/wiki/Bottleneck%20%28network%29 | In a communication network, max-min fairness is sometimes desired, usually as opposed to the basic first-come, first-served policy. With max-min fairness, the data flow between any two nodes is maximized, but only at the cost of flows that are more or equally expensive. To put it another way, in the case of network congestion, any data flow is impacted only by smaller or equal flows.
In this context, a bottleneck link for a given data flow is a link that is fully utilized (saturated) and on which, of all the flows sharing the link, the given data flow achieves the maximum data rate network-wide. Note that this definition differs substantially from the common meaning of a bottleneck, and that it does not prevent a single link from being a bottleneck for multiple flows.
A data rate allocation is max-min fair if and only if a data flow between any two nodes has at least one bottleneck link. This concept is critical in understanding network efficiency and fairness, as it ensures that no single flow can monopolize network resources to the detriment of others.
Bottleneck links are significant in network design and management because they determine the maximum throughput of a network. Identifying and managing bottlenecks is crucial for maintaining optimal performance in networked systems. Strategies to mitigate the impact of bottleneck links include increasing the capacity of the bottleneck link, optimizing traffic management, and using load-balancing techniques to distribute data flows across multiple paths.
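A max-min fair allocation with the bottleneck property described above can be computed by the classic progressive-filling (water-filling) procedure: all flows' rates grow at the same pace until some link saturates; flows crossing that link are frozen at their current rate (that link is their bottleneck), and the remaining flows keep growing. A minimal sketch, assuming each link's capacity and the set of links each flow crosses are known:

```python
def max_min_fair(links, flows):
    """Progressive filling for a max-min fair rate allocation.

    links: {link_id: capacity}; flows: {flow_id: [link_ids it crosses]}.
    Returns {flow_id: rate}.
    """
    rate = {f: 0.0 for f in flows}
    active = set(flows)              # flows that have not hit a bottleneck yet
    cap = dict(links)                # remaining capacity per link

    while active:
        def users(l):
            return [f for f in active if l in flows[f]]
        # Fair share each still-used link can grant its active flows.
        headroom = {l: cap[l] / len(users(l)) for l in cap if users(l)}
        if not headroom:
            break
        inc = min(headroom.values())         # grow every active flow equally
        for f in active:
            rate[f] += inc
        for l in cap:
            cap[l] -= inc * len(users(l))
        # Freeze flows crossing any now-saturated link: that is their bottleneck.
        saturated = {l for l in cap if cap[l] < 1e-9}
        active -= {f for f in active if any(l in saturated for l in flows[f])}
    return rate
```

For example, with link L1 of capacity 1 shared by flows f1 and f2, and link L2 of capacity 2 shared by f2 and f3, the procedure gives f1 and f2 a rate of 0.5 (bottlenecked at L1) and f3 a rate of 1.5 (bottlenecked at L2) — every flow has a bottleneck link, as the max-min criterion requires.
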
See also
Fairness measure
Max-min fairness
References
Notes
Network performance | Bottleneck (network) | Technology | 317 |
172,190 | https://en.wikipedia.org/wiki/Castor%20oil | Castor oil is a vegetable oil pressed from castor beans, the seeds of the plant Ricinus communis. The seeds are 40 to 60 percent oil. It is a colourless or pale yellow liquid with a distinct taste and odor. Its boiling point is and its density is 0.961 g/cm3. It includes a mixture of triglycerides in which about 90 percent of fatty acids are ricinoleates. Oleic acid and linoleic acid are the other significant components.
Some 270,000–360,000 tonnes (600–800 million pounds) of castor oil are produced annually for a variety of uses. Castor oil and its derivatives are used in the manufacturing of soaps, lubricants, hydraulic and brake fluids, paints, dyes, coatings, inks, cold-resistant plastics, waxes and polishes, nylon, and perfumes.
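The parenthetical pound figures above follow directly from the tonnage; a quick arithmetic check, where the only assumption introduced is the standard tonne-to-pound conversion factor:

```python
LB_PER_TONNE = 2204.62   # pounds per metric tonne (standard conversion)

low_lb = 270_000 * LB_PER_TONNE    # about 595 million lb, i.e. roughly 600 million
high_lb = 360_000 * LB_PER_TONNE   # about 794 million lb, i.e. roughly 800 million
```
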
Etymology
The name probably comes from a confusion between the Ricinus plant that produces it and another plant, the Vitex agnus-castus. An alternative etymology, though, suggests that it was used as a replacement for castoreum.
History
Use of castor oil as a laxative is attested to in the Ebers Papyrus, and it was in use several centuries earlier. Midwifery manuals from the 19th century recommended castor oil and 10 drops of laudanum for relieving "false pains."
Composition
Castor oil is well known as a source of ricinoleic acid, a monounsaturated, 18-carbon fatty acid. Among fatty acids, ricinoleic acid is unusual in that it has a hydroxyl functional group on the 12th carbon atom. This functional group causes ricinoleic acid (and castor oil) to be more polar than most fats. The chemical reactivity of the alcohol group also allows chemical derivatization that is not possible with most other seed oils.
Because of its ricinoleic acid content, castor oil is a valuable chemical in feedstocks, commanding a higher price than other seed oils. As an example, in July 2007, Indian castor oil sold for about US$0.90/kg ($0.41/lb), whereas U.S. soybean, sunflower, and canola oils sold for about $0.30/kg ($0.14/lb).
Human uses
Castor oil has been used orally to relieve constipation or to evacuate the bowel before intestinal surgery. The laxative effect of castor oil is attributed to ricinoleic acid, which is produced by hydrolysis in the small intestine. Use of castor oil for simple constipation is medically discouraged because it may cause violent diarrhea.
Food and preservative
In the food industry, food-grade castor oil is used in food additives, flavorings, candy (e.g., polyglycerol polyricinoleate in chocolate), as a mold inhibitor, and in packaging. Polyoxyethylated castor oil (e.g., Kolliphor EL) is also used in the food industries. In India, Pakistan, and Nepal, food grains are preserved by the application of castor oil. It stops rice, wheat, and pulses from rotting. For example, the legume pigeon pea is commonly available coated in oil for extended storage.
Emollient
Castor oil has been used in cosmetic products included in creams and as a moisturizer. It is often combined with zinc oxide to form an emollient and astringent, zinc and castor oil cream, which is commonly used to treat infants for nappy rash.
Medicine
Castor oil is used as a vehicle for serums administering steroid hormones such as estradiol valerate via intramuscular or subcutaneous injection.
Alternative medicine
Despite the lack of evidence, castor oil is sometimes claimed to be able to cure diseases. According to the American Cancer Society, "available scientific evidence does not support claims that castor oil on the skin cures cancer or any other disease."
Childbirth
Despite some undesirable side effects, castor oil is used for labor induction. There is no high-quality research proving that ingestion of castor oil results in cervical ripening or induction of labor; there is, however, evidence that taking it causes nausea and diarrhea. A systematic review of "three trials, involving 233 women, found there has not been enough research done to show the effects of castor oil on ripening the cervix or inducing labour or compare it to other methods of induction. The review found that all women who took castor oil by mouth felt nauseous. More research is needed into the effects of castor oil to induce labour." Castor oil is still used for labor induction in environments where modern drugs are not available; a review of pharmacologic, mechanical, and "complementary" methods of labor induction published in 2024 by the American Journal of Obstetrics and Gynecology stated that castor oil's physiological effect is poorly understood but "given gastrointestinal symptomatology, a prostaglandin mediation has been suggested but not confirmed." According to Drugs in Pregnancy and Lactation: A Reference Guide to Fetal and Neonatal Risk (2008), castor oil should not be ingested or used topically by pre-term pregnant women. There is no data on the potential toxicity of castor oil for nursing mothers.
Punishment
Since children commonly have a strong dislike for the taste of castor oil, some parents punished children with a dose of it. Physicians recommended against the practice because it may associate medicines with punishment and make children afraid of the doctor.
Use in torture
A heavy dose of castor oil could be used as a humiliating punishment for adults. Colonial officials used it in the British Raj (India) to deal with recalcitrant servants.
Belgian military officials prescribed heavy doses of castor oil in Belgian Congo as a punishment for being too sick to work. Castor oil was also a tool of punishment favored by the Falangist and later Francoist Spain during and following the Spanish Civil War. Its use as a form of gendered violence to repress women was especially prominent.
This began during the war where Nationalist forces would specifically target Republican-aligned women, both troops and civilians, who lived in Republican-controlled areas.
The forced drinking of castor oil occurred alongside sexual assault, rape, torture and murder of these women. Its most notorious use as punishment came in Fascist Italy under Benito Mussolini. It was a favorite tool used by the Blackshirts to intimidate and humiliate their opponents.
Political dissidents were force-fed large quantities of castor oil by fascist squads so as to induce bouts of extreme diarrhea in the victims. This technique was said to have been originated by Gabriele D'Annunzio or Italo Balbo. This form of torture was potentially deadly, as the administration of the castor oil was often combined with nightstick beatings, especially to the rear, so that the resulting diarrhea would not only lead to dangerous dehydration but also infect the open wounds from the beatings. However, even those victims who survived had to bear the humiliation of the laxative effects resulting from excessive consumption of the oil.
Industrial uses
Coatings
Castor oil is used as a biobased polyol in the polyurethane industry. The average functionality (number of hydroxyl groups per triglyceride molecule) of castor oil is 2.7, so it is widely used as a rigid polyol and in coatings.
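As a rough consistency check on that functionality figure, the average number of hydroxyl groups per triglyceride can be estimated from the ricinoleate content of castor oil. The ~90% ricinoleic-acid fraction used below is a typical literature value, not a figure from this article:

```python
# Rough estimate of castor oil's average hydroxyl functionality.
# Assumption (not stated in the article): about 90% of the fatty-acid
# chains in castor oil triglycerides are ricinoleic acid, each of which
# carries one hydroxyl (OH) group; the other chains carry none.
chains_per_triglyceride = 3   # a triglyceride has three fatty-acid chains
ricinoleate_fraction = 0.9    # assumed typical ricinoleate content

avg_functionality = chains_per_triglyceride * ricinoleate_fraction
print(f"average OH groups per triglyceride = {avg_functionality:.1f}")
# -> average OH groups per triglyceride = 2.7
```

This reproduces the average functionality of 2.7 quoted above.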
One particular use is in a polyurethane concrete where a castor-oil emulsion is reacted with an isocyanate (usually polymeric methylene diphenyl diisocyanate) and a cement and construction aggregate. This is applied fairly thickly as a slurry, which is self-levelling. This base is usually further coated with other systems to build a resilient floor. Castor oil is not a drying oil, meaning that it has a low reactivity with air compared with oils such as linseed oil and tung oil. However, dehydration of castor oil yields linoleic acids, which do have drying properties.
In this process, the OH group on the ricinoleic acid along with a hydrogen from the next carbon atom are removed, forming a double bond which then has oxidative cross-linking properties and yields the drying oil. It is considered a vital raw material.
Chemical precursor
Castor oil can react with other materials to produce other chemical compounds that have numerous applications.
Transesterification followed by steam cracking gives undecylenic acid, a precursor to specialized polymer nylon 11, and heptanal, a component in fragrances.
Breakdown of castor oil in strong base gives 2-octanol, both a fragrance component and a specialized solvent, and the dicarboxylic acid sebacic acid. Hydrogenation of castor oil saturates the alkenes, giving a waxy lubricant.
Castor oil may be epoxidized by reacting the OH groups with epichlorohydrin to make the triglycidyl ether of castor oil which is useful in epoxy technology.
This is available commercially as Heloxy 505.
The production of lithium grease consumes a significant amount of castor oil. Hydrogenation and saponification of castor oil yields 12-hydroxystearic acid, which is then reacted with lithium hydroxide or lithium carbonate to give high-performance lubricant grease.
Since it has a relatively high dielectric constant (4.7), highly refined and dried castor oil is sometimes used as a dielectric fluid within high-performance, high-voltage capacitors.
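A sketch of why that dielectric constant matters for capacitor service: filling the gap of an ideal parallel-plate capacitor with castor oil (relative permittivity about 4.7) raises its capacitance 4.7-fold over the same capacitor in vacuum, since C = ε₀·εᵣ·A/d. The plate geometry below is illustrative, not taken from the article:

```python
# Illustrative parallel-plate capacitance calculation, C = eps0 * eps_r * A / d.
# Plate area and gap are assumed example values, not from the article.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Capacitance in farads of an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

air = plate_capacitance(0.1, 1e-4)             # 0.1 m^2 plates, 0.1 mm gap
oil = plate_capacitance(0.1, 1e-4, eps_r=4.7)  # same geometry, castor-oil filled
print(f"vacuum: {air*1e9:.1f} nF, castor oil: {oil*1e9:.1f} nF")
# -> vacuum: 8.9 nF, castor oil: 41.6 nF
```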
Lubrication
Vegetable oils such as castor oil are typically unattractive alternatives to petroleum-derived lubricants because of their poor oxidative stability. Castor oil has better low-temperature viscosity properties and high-temperature lubrication than most vegetable oils, making it useful as a lubricant in jet, diesel, and racing engines. The viscosity of castor oil at 10 °C is 2,420 centipoise, but it tends to form gums in a short time, so its usefulness is limited to engines that are regularly rebuilt, such as racing engines. Lubricant company Castrol took its name from castor oil.
Castor oil has been suggested as a lubricant for bicycle pumps because it does not degrade natural rubber seals.
Turkey red oil
Turkey red oil, also called sulphonated (or sulfated) castor oil, is made by adding sulfuric acid to vegetable oils, most notably castor oil. It was the first synthetic detergent after ordinary soap. It is used in formulating lubricants, softeners, and dyeing assistants.
Biodiesel
Castor oil, like currently less expensive vegetable oils, can be used as feedstock in the production of biodiesel. The resulting fuel is superior for cold winters, because of its exceptionally low cloud point and pour point.
Initiatives to grow more castor for energy production, in preference to other oil crops, are motivated by social considerations. Tropical subsistence farmers would gain a cash crop.
Early aviation and aeromodelling
Castor oil was the preferred lubricant for rotary engines, such as the Gnome engine after that engine's widespread adoption for aviation in Europe in 1909. It was used almost universally in rotary-engined Allied aircraft in World War I. Germany had to make do with inferior ersatz oil for its rotary engines, which resulted in poor reliability.
The methanol-fueled, two-cycle, glow-plug engines used for aeromodelling, since their adoption by model airplane hobbyists in the 1940s, have used varying percentages of castor oil as lubricants. It is highly resistant to degradation when the engine has its fuel-air mixture leaned for maximum engine speed. Gummy residues can still be a problem for aeromodelling powerplants lubricated with castor oil, however, usually requiring eventual replacement of ball bearings when the residue accumulates within the engine's bearing races. One British manufacturer of sleeve valved four-cycle model engines has stated the "varnish" created by using castor oil in small percentages can improve the pneumatic seal of the sleeve valve, improving such an engine's performance over time.
Safety
The castor seed contains ricin, a toxic lectin. Heating during the oil extraction process denatures and deactivates the lectin. Harvesting castor beans, though, may not be without risk. The International Castor Oil Association FAQ document states that castor beans contain an allergenic compound called CB1A. This chemical is described as being virtually nontoxic, but has the capacity to affect people with hypersensitivity. The allergen may be neutralized by treatment with a variety of alkaline agents. The allergen is not present in the castor oil itself.
See also
Botanol, a flooring material derived from castor oil
Castor wax
List of unproven and disproven cancer treatments
References
Further reading
– overview of chemical properties and manufacturing of castor oil
External links
Ayurvedic medicaments
Castor oil plant
Cosmetics chemicals
Laxatives
Liquid dielectrics
Non-petroleum based lubricants
Oils
Traditional medicine
Triglycerides | Castor oil | Chemistry | 2,815 |
46,229,692 | https://en.wikipedia.org/wiki/Berkeley%20cardinal | In set theory, Berkeley cardinals are certain large cardinals suggested by Hugh Woodin in a seminar at the University of California, Berkeley in about 1992.
A Berkeley cardinal is a cardinal κ in a model of Zermelo–Fraenkel set theory with the property that for every transitive set M containing κ, and for every ordinal α < κ, there is a nontrivial elementary embedding of M into M with α < critical point < κ. Berkeley cardinals are a strictly stronger cardinal axiom than Reinhardt cardinals, implying that they are not compatible with the axiom of choice.
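The defining property can be restated symbolically; this is an informal transcription of the sentence above, with crit(j) denoting the critical point of the embedding j:

```latex
\kappa \text{ is Berkeley} \iff
  \forall M \, \bigl( M \text{ transitive and } \kappa \in M \bigr) \;
  \forall \alpha < \kappa \;
  \exists j \colon M \prec M , \quad
  j \neq \mathrm{id} , \quad
  \alpha < \operatorname{crit}(j) < \kappa
```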
A weakening of being a Berkeley cardinal is that for every binary relation R on Vκ, there is a nontrivial elementary embedding of (Vκ, R) into itself. This implies that there is a sequence of elementary embeddings
j1, j2, j3, ...
j1: (Vκ, ∈) → (Vκ, ∈),
j2: (Vκ, ∈, j1) → (Vκ, ∈, j1),
j3: (Vκ, ∈, j1, j2) → (Vκ, ∈, j1, j2),
and so on. This can be continued any finite number of times, and to the extent that the model has dependent choice, transfinitely. Thus, plausibly, this notion can be strengthened simply by asserting more dependent choice.
While all these notions are incompatible with Zermelo–Fraenkel set theory (ZFC), their consequences do not appear to be false. There is no known inconsistency with ZFC in asserting that, for example:
For every ordinal λ, there is a transitive model of ZF + Berkeley cardinal that is closed under λ sequences.
See also
List of large cardinal properties
References
Sources
Large cardinals | Berkeley cardinal | Mathematics | 376 |
4,842,646 | https://en.wikipedia.org/wiki/Reflex%20receiver | A reflex radio receiver, occasionally called a reflectional receiver, is a radio receiver design in which the same amplifier is used to amplify the high-frequency radio signal (RF) and low-frequency audio (sound) signal (AF). It was first invented in 1914 by German scientists Wilhelm Schloemilch and Otto von Bronk, and rediscovered and extended to multiple tubes in 1917 by Marius Latour and William H. Priess. The radio signal from the antenna and tuned circuit passes through an amplifier, is demodulated in a detector which extracts the audio signal from the radio carrier, and the resulting audio signal passes again through the same amplifier for audio amplification before being applied to the earphone or loudspeaker. The reason for using the amplifier for "double duty" was to reduce the number of active devices, vacuum tubes or transistors, required in the circuit, to reduce the cost. The economical reflex circuit was used in inexpensive vacuum tube radios in the 1920s, and was revived again in simple portable tube radios in the 1930s.
How it works
The block diagram shows the general form of a simple reflex receiver. The receiver functions as a tuned radio frequency (TRF) receiver. The radio frequency (RF) signal from the tuned circuit (bandpass filter) is amplified, then passes through the high pass filter to the demodulator, which extracts the audio frequency (AF) (modulation) signal from the carrier wave. The audio signal is added back into the input of the amplifier, and is amplified again. At the output of the amplifier the audio is separated from the RF signal by the low pass filter and is applied to the earphone. The amplifier could be a single stage or multiple stages. It can be seen that since each active device (tube or transistor) is used to amplify the signal twice, the reflex circuit is equivalent to an ordinary receiver with double the number of active devices.
The reflex receiver should not be confused with a regenerative receiver, in which the same signal is fed back from the output of the amplifier to its input. In the reflex circuit it is only the audio extracted by the demodulator which is added to the amplifier input, so there are two separate signals at different frequencies passing through the amplifier at the same time.
The reason the two signals, the RF and AF currents, can pass simultaneously through the amplifier without interfering is due to the superposition principle because the amplifier is linear. Since the two signals have different frequencies, they can be separated at the output with frequency selective filters. Therefore the proper functioning of the circuit depends on the amplifier operating in the linear region of its transfer curve. If the amplifier is significantly nonlinear, intermodulation distortion will occur and the audio signal will modulate the RF signal, resulting in audio feedback which can cause a shrieking in the earphone. The presence of the audio return circuit from the amplifier output to input made the reflex circuit vulnerable to such parasitic oscillation problems.
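The superposition argument can be sketched numerically: a single linear gain stage carries an RF tone and an AF tone at the same time, and simple one-pole filters then separate the two components at the output. The frequencies, gain, and filter cutoff below are illustrative assumptions, not values from the article:

```python
import math

# Toy model of the reflex principle: one *linear* amplifier carries an RF
# and an AF signal simultaneously; frequency-selective filters separate them.
fs = 1_000_000                 # sample rate, Hz (assumed)
f_rf, f_af = 100_000, 1_000    # "RF" and "AF" test tones, Hz (assumed)
gain = 10.0                    # amplifier gain (assumed)

t = [i / fs for i in range(5000)]
rf = [math.sin(2 * math.pi * f_rf * ti) for ti in t]
af = [math.sin(2 * math.pi * f_af * ti) for ti in t]
amplified = [gain * (r + a) for r, a in zip(rf, af)]  # both signals, one amplifier

def low_pass(x, fc, fs):
    """Single-pole IIR low-pass filter with cutoff fc."""
    k = 2 * math.pi * fc / fs
    alpha = k / (k + 1)
    y, out = 0.0, []
    for xi in x:
        y += alpha * (xi - y)
        out.append(y)
    return out

lp = low_pass(amplified, 5_000, fs)          # keeps mostly the AF component
hp = [a - l for a, l in zip(amplified, lp)]  # remainder is mostly the RF component

def corr(x, y, skip=2000):
    """Mean product over the settled portion of two signals."""
    xs, ys = x[skip:], y[skip:]
    return sum(a * b for a, b in zip(xs, ys)) / len(xs)

print(f"low-pass output vs AF tone: {corr(lp, af):.2f}")  # large
print(f"low-pass output vs RF tone: {corr(lp, rf):.2f}")  # near zero
```

Because the amplifier is linear, the filtered outputs correlate strongly with their respective input tones; making the gain stage nonlinear would mix the two frequencies, which is exactly the intermodulation problem the text describes.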
Applications
The most common application of the reflex circuit in the 1920s was in inexpensive single tube receivers. Many consumers could not afford more than one vacuum tube, and the reflex circuit got the most out of a single tube, making it equivalent to a two-tube set. During this period the demodulator was usually a carborundum point contact diode, but sometimes a vacuum tube grid-leak detector. However, multitube receivers like the TRF and superheterodyne were also made with some of their amplifier stages "reflexed".
Low cost mains-powered radios that used a reflex TRF design, with only three tubes, were still being mass produced in the late 1940s.
The reflex principle was used in compact superheterodyne radio receivers from the 1930s and continued into the 1950s, until at least 1959; the intermediate frequency amplifier stage was also the first audio frequency stage using a reflex arrangement. That arrangement provided similar performance, in a four-tube radio, as one with five tubes. Often, but not always, such reflex receivers did not have Automatic Gain Control (AGC), and it was usually not possible to reduce the volume completely to zero, even at the minimum volume setting. At least one type of tube was specially designed for this kind of receiver design.
Example
The diagram (right) shows one of the most common single tube reflex circuits from the early 1920s. It functioned as a TRF receiver with one stage of RF and one stage of audio amplification. The radio frequency (RF) signal from the antenna passes through the bandpass filter C1, L1, L2, C2 and is applied to the grid of the directly heated triode, V1. The capacitor C6 bypasses the RF signal around the audio transformer winding T2 which would block it. The amplified signal from the plate of the tube is applied to the RF transformer L3, L4 while C3 bypasses the RF signal around the headphone coils. The tuned secondary L4, C5 which is tuned to the input frequency, serves as a second bandpass filter as well as blocking the audio signal in the plate circuit from getting to the detector. Its output is rectified by semiconductor diode D, which was a carborundum point contact type.
The resulting audio signal extracted by the diode from the RF signal is coupled back into the grid circuit by audio transformer T1, T2 whose iron core serves as a choke to help prevent RF from getting back into the grid circuit and causing feedback. The capacitor C4 provides more protection against feedback, blocking the pulses of RF from the diode, but is usually not needed since the transformer's winding T1 normally has enough parasitic capacitance. The audio signal is applied to the grid of the tube and amplified. The amplified audio signal from the plate passes easily through the low inductance RF primary winding L3 and is applied to the earphones T. The rheostat R1 controlled the filament current, and in these early sets was used as a volume control.
References
External links
Schematic of FADA model 160 neutrodyne radio, a reflectional receiver from the 1920s.
Schematic of General Electric model F40 radio, a Super-Heterodyne receiver first manufactured in 1937.
History of radio technology
Receiver (radio)
Radio electronics | Reflex receiver | Engineering | 1,305 |
332,389 | https://en.wikipedia.org/wiki/Anastrozole | Anastrozole, sold under the brand name Arimidex among others, is an antiestrogenic medication used in addition to other treatments for breast cancer. Specifically it is used for hormone receptor-positive breast cancer. It has also been used to prevent breast cancer in those at high risk. It is taken by mouth.
Common side effects of anastrozole include hot flashes, altered mood, joint pain, and nausea. Severe side effects include an increased risk of heart disease and osteoporosis. Use during pregnancy may harm the baby. Anastrozole is in the aromatase-inhibiting family of medications. It works by blocking the production of estrogens in the body, and hence has antiestrogenic effects.
Anastrozole was patented in 1987 and was approved for medical use in 1995. It is on the World Health Organization's List of Essential Medicines. Anastrozole is available as a generic medication. In 2022, it was the 179th most commonly prescribed medication in the United States, with more than 2 million prescriptions.
Medical uses
Breast cancer
Anastrozole is used in the treatment and prevention of breast cancer in women. In the Arimidex, Tamoxifen, Alone or in Combination (ATAC) trial of localized breast cancer, women received either anastrozole, the selective estrogen receptor modulator tamoxifen, or both for five years, followed by five years of follow-up. After more than 5 years, the group that received anastrozole had better results than the tamoxifen group. The trial suggested that anastrozole is the preferred medical therapy for postmenopausal women with localized estrogen receptor-positive breast cancer.
Early puberty
Anastrozole is used at a dosage of 0.5 to 1 mg/day in combination with the antiandrogen bicalutamide in the treatment of peripheral precocious puberty, for instance due to familial male-limited precocious puberty (testotoxicosis) and McCune–Albright syndrome, in boys.
Available forms
Anastrozole is available in the form of 1 mg oral tablets. No alternative forms or routes are available.
Contraindications
Contraindications of anastrozole include hypersensitivity to anastrozole or any other component of anastrozole formulations, pregnancy, and breastfeeding. Hypersensitivity reactions to anastrozole including anaphylaxis, angioedema, and urticaria have been observed.
Side effects
Common side effects of anastrozole (≥10% incidence) include hot flashes, asthenia, arthritis, pain, arthralgia, hypertension, depression, nausea and vomiting, rash, osteoporosis, bone fractures, back pain, insomnia, headache, bone pain, peripheral edema, coughing, dyspnea, pharyngitis, and lymphedema. Serious but rare adverse effects (<0.1% incidence) include skin reactions such as lesions, ulcers, or blisters; allergic reactions with swelling of the face, lips, tongue, and/or throat that may cause difficulty swallowing or breathing; and abnormal liver function tests as well as hepatitis.
Interactions
Anastrozole is thought to have clinically negligible inhibitory effects on the cytochrome P450 enzymes CYP1A2, CYP2A6, CYP2D6, CYP2C8, CYP2C9, and CYP2C19. As a result, it is thought that drug interactions of anastrozole with cytochrome P450 substrates are unlikely. No clinically significant drug interactions have been reported with anastrozole as of 2003.
Anastrozole does not affect circulating levels of tamoxifen or its major metabolite N-desmethyltamoxifen. However, tamoxifen has been found to decrease steady-state area-under-the-curve levels of anastrozole by 27%. But estradiol levels were not significantly different in the group that received both anastrozole and tamoxifen compared to the anastrozole alone group, so the decrease in anastrozole levels is not thought to be clinically important.
Pharmacology
Pharmacodynamics
Anastrozole works by reversibly binding to the aromatase enzyme, and through competitive inhibition blocks the conversion of androgens to estrogens in peripheral (extragonadal) tissues. The medication has been found to achieve 96.7% to 97.3% inhibition of aromatase at a dosage of 1 mg/day and 98.1% inhibition of aromatase at a dosage of 10 mg/day in humans. As such, 1 mg/day is considered to be the minimal dosage required to achieve maximal suppression of aromatase with anastrozole. This decrease in aromatase activity results in an at least 85% decrease in estradiol levels in postmenopausal women. Levels of corticosteroids and other adrenal steroids are unaffected by anastrozole.
Pharmacokinetics
The bioavailability of anastrozole in humans is unknown, but it was found to be well-absorbed in animals. Absorption of anastrozole is linear over a dosage range of 1 to 20 mg/day in humans and does not change with repeated administration. Food does not significantly influence the extent of absorption of anastrozole. Peak levels of anastrozole occur a median 3 hours after administration, but with a wide range of 2 to 12 hours. Steady-state levels of anastrozole are achieved within 7 to 10 days of continuous administration, with 3.5-fold accumulation. However, maximal suppression of estradiol levels occurs within 3 or 4 days of therapy.
Active efflux of anastrozole by P-glycoprotein at the blood–brain barrier has been found to limit the central nervous system penetration of anastrozole in rodents, whereas this was not the case with letrozole and vorozole. As such, anastrozole may have peripheral selectivity in humans, although this has yet to be confirmed. In any case, estradiol is synthesized peripherally and readily crosses the blood–brain barrier, so anastrozole would still expected to reduce estradiol levels in the central nervous system to a certain degree. The plasma protein binding of anastrozole is 40%.
The metabolism of anastrozole is by N-dealkylation, hydroxylation, and glucuronidation. Inhibition of aromatase is due to anastrozole itself rather than to metabolites, with the major circulating metabolite being inactive. The elimination half-life of anastrozole is 40 to 50 hours (1.7 to 2.1 days). This allows for convenient once-daily administration. The medication is eliminated predominantly by metabolism in the liver (83 to 85%) but also by residual excretion by the kidneys unchanged (11%). Anastrozole is excreted primarily in urine but also to a lesser extent in feces.
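The figures above are mutually consistent, which can be checked with the standard accumulation formula for repeated dosing, R = 1/(1 − e^(−k·τ)), where k = ln 2 / t½ and τ is the dosing interval. This is a textbook pharmacokinetic sketch, not a calculation taken from the article:

```python
import math

# Accumulation ratio for once-daily dosing, R = 1 / (1 - exp(-k * tau)),
# and an approximate time to steady state (~5 half-lives).
tau = 24.0  # dosing interval in hours (once daily)

for t_half in (40.0, 50.0):  # the quoted elimination half-life range, hours
    k = math.log(2) / t_half                  # elimination rate constant, 1/h
    accumulation = 1.0 / (1.0 - math.exp(-k * tau))
    days_to_steady_state = 5 * t_half / 24    # ~97% of steady state
    print(f"t1/2 = {t_half:.0f} h: accumulation = {accumulation:.1f}-fold, "
          f"steady state in about {days_to_steady_state:.0f} days")
# The 40-50 h range predicts roughly 2.9- to 3.5-fold accumulation and
# steady state in roughly 8-10 days, close to the 3.5-fold accumulation
# and 7-10 days quoted above.
```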
Chemistry
Anastrozole is a nonsteroidal benzyl triazole. It is also known as α,α,α',α'-tetramethyl-5-(1H-1,2,4-triazol-1-ylmethyl)-m-benzenediacetonitrile. Anastrozole is structurally related to letrozole, fadrozole, and vorozole, with all being classified as azoles.
History
Anastrozole was patented by Imperial Chemical Industries (ICI) in 1987 and was approved for medical use, specifically the treatment of breast cancer, in 1995.
Society and culture
Generic names
Anastrozole is the generic name of the drug and its INN, USAN, BAN, and JAN.
Brand names
Anastrozole is primarily sold under the brand name Arimidex. However, it is also marketed under a variety of other brand names throughout the world.
Availability
Anastrozole is available widely throughout the world.
Research
Anastrozole is surprisingly ineffective at treating gynecomastia, in contrast to selective estrogen receptor modulators like tamoxifen.
Anastrozole was under development for the treatment of female infertility but did not complete development and hence was never approved for this indication.
An anastrozole and levonorgestrel vaginal ring (developmental code name BAY 98–7196) was under development for use as a hormonal contraceptive and treatment for endometriosis, but development was discontinued in November 2018 and the formulation was never marketed.
Anastrozole increases testosterone levels in males and has been studied as an alternative method of androgen replacement therapy in men with hypogonadism. However, there are concerns about its long-term influence on bone mineral density in this patient population, as well as other adverse effects.
References
27-Hydroxylase inhibitors
Aromatase inhibitors
Drugs developed by AstraZeneca
Hormonal antineoplastic drugs
Nitriles
Peripherally selective drugs
Triazoles
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Anastrozole | Chemistry | 1,995 |
37,003,661 | https://en.wikipedia.org/wiki/Sentinel%20Space%20Telescope | The Sentinel Space Telescope was a space observatory to be developed by Ball Aerospace & Technologies for the B612 Foundation. The B612 Foundation is dedicated to protecting the Earth from dangerous asteroid strikes, and Sentinel was to be the Foundation's first spacecraft to tangibly address that mission.
The space telescope was intended to locate and catalog 90% of the asteroids greater than 140 meters in diameter that exist in near-Earth orbits. The telescope would have orbited the Sun in a Venus-like orbit (i.e. between Earth and the Sun). This orbit would have allowed it to view the night half of the sky clearly every 20 days, and to pick out objects that are often difficult, if not impossible, to see in advance from Earth. Sentinel would have had an operational mission life of six and a half to ten years.
After NASA terminated their funding agreement with the B612 Foundation in October 2015 and the private fundraising goals could not be met, the Foundation eventually opted for an alternative approach using a constellation of much smaller spacecraft that remained under study. NASA/JPL's NEOCam has been proposed instead.
History
The B612 project grew out of a one-day workshop on asteroid deflection organized by Piet Hut and Ed Lu at NASA Johnson Space Center, Houston, Texas on October 20, 2001. Participants Rusty Schweickart, Clark Chapman, Piet Hut, and Ed Lu established the B612 Foundation on October 7, 2002. The Foundation originally planned to launch Sentinel by December 2016 and to begin data retrieval no later than 6 months after successful positioning.
In April 2013, the plan had moved out to launching on a SpaceX Falcon 9 in 2018, following preliminary design review in 2014, and critical design review in 2015.
B612 was attempting to raise approximately $450 million in total to fund the total development and launch cost of Sentinel, at a rate of some $30 to $40 million per year. That funding profile was incompatible with the advertised 2018 launch date.
Cancellation
After NASA terminated their $30 million funding agreement with the B612 Foundation in October 2015 and the private fundraising did not achieve its goals, the Foundation eventually opted for an alternative approach using a constellation of much smaller spacecraft that remained under study. NASA/JPL's NEOCam has been proposed instead.
Mission
Unlike similar projects to search for near-Earth asteroids or near-Earth objects (NEOs) such as NASA's Near-Earth Object Program, Sentinel would have orbited between Earth and the Sun. Since the Sun would therefore always have been behind the lens of the telescope, it would have never inhibited the telescope's ability to detect NEOs and Sentinel would have been able to perform continuous observation and analysis.
Sentinel was anticipated to be capable of detecting 90% of the asteroids greater than 140 meters in diameter that exist in Earth's orbit, which pose existential risk to humanity. The B612 Foundation estimates that approximately half a million asteroids in Earth's neighbourhood equal or exceed the one that struck Tunguska in 1908. It was planned to be launched atop the Falcon 9 rocket designed and manufactured by the private aerospace company SpaceX in 2019, and to be maneuvered into position with the help of the gravity of Venus. Data gathered by the Sentinel Project would have been provided through an existing network of scientific data-sharing that includes NASA and academic institutions such as the Minor Planet Center in Cambridge, Massachusetts.
Given the satellite's telescopic accuracy, Sentinel's data was speculated to prove valuable for future missions in such fields as asteroid mining.
Specifications
The telescope would have orbited the Sun at approximately the same orbital distance as Venus. It would have employed infrared astronomy methods to identify asteroids against the cold of outer space. The B612 Foundation worked in partnership with Ball Aerospace to construct Sentinel's aluminum mirror, which would have captured the large field of view.
"Sentinel will scan in the 7- to 15-micron wavelength using a 0.5-meter infrared telescope across a 5.5 by 2-deg. field of view. The [infrared] IR array would have consisted of 16 detectors, and coverage would have scanned a 200-degree, full-angle field of regard."
Features
Key features included:
Most capable NEO detection system in operation;
200 degree anti-sun Field of Regard, with a 2×5.5 degree Field of View at any point in time: scans 165 square degrees per hour looking for moving objects;
Precise pointing accuracy to sub-pixel resolution for imaging revisit, using the detector fine steering capability;
Designed for highly autonomous, reliable operation requiring only weekly ground contact;
Designed for 6.5 to 10 years of surveying operations. Actively cooled to 40K using a Ball Aerospace two-stage, closed-cycle Stirling-cycle cryocooler;
Ability to follow up on objects of interest.
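The scan-rate figures quoted above can be cross-checked against the field of view: a 2 × 5.5 degree field swept at 165 square degrees per hour implies about 15 fields per hour, i.e. roughly four minutes per pointing (ignoring field overlap and slew overhead, which the article does not specify):

```python
# Consistency check of the quoted survey figures.
fov_sq_deg = 2 * 5.5   # instantaneous field of view, square degrees
scan_rate = 165        # square degrees scanned per hour (from the article)

fields_per_hour = scan_rate / fov_sq_deg
minutes_per_field = 60 / fields_per_hour
print(f"{fields_per_hour:.0f} fields/hour, {minutes_per_field:.0f} min per field")
# -> 15 fields/hour, 4 min per field
```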
See also
4179 Toutatis
Asteroid deflection
Asteroid mining
Asteroid Terrestrial-impact Last Alert System (ATLAS)
B612 Foundation
Lists of telescopes
Near-Earth Object Surveillance Mission
NEOShield
Spaceguard
Spaceguard Foundation
References
Asteroid surveys
Infrared telescopes
Minor-planet discovering observatories
Near-Earth object tracking
Planetary defense
Space telescopes
Science and technology in the San Francisco Bay Area | Sentinel Space Telescope | Astronomy | 1,068 |
35,306,935 | https://en.wikipedia.org/wiki/Global%20Internet%20usage | Global Internet usage is the number of people who use the Internet worldwide.
Internet users
In 2015, the International Telecommunication Union estimated about 3.2 billion people, or almost half of the world's population, would be online by the end of the year. Of them, about 2 billion would be from developing countries, including 89 million from least developed countries. According to Hootsuite, the number of global Internet users had already reached almost 5 billion, or about 53% of the global population, as of 2021. The Internet and globalization have created a flattened world of information, which allows individuals to access cultural and ideological beliefs without traveling to other countries, resulting in what has been called immobile acculturation.
Broadband usage
Internet hosts
The Internet Systems Consortium provided a count of the worldwide number of IPv4 hosts (see below). In 2019 this internet domain survey was discontinued, as it did not account for IPv6 hosts and could therefore be misleading.
Web index
The Web index is a composite statistic designed and produced by the World Wide Web Foundation. It provides a multi-dimensional measure of the World Wide Web's contribution to development and human rights globally. It covers 86 countries as of 2014, the latest year for which the index has been compiled. It incorporates indicators that assess the areas of universal access, freedom and openness, relevant content, and empowerment, which indicate economic, social, and political impacts of the Web.
IPv4 addresses
The Carna Botnet was a botnet of 420,000 devices created by hackers to measure the extent of the Internet in what the creators called the "Internet Census of 2012".
Languages
Censorship and surveillance
See also
A4AI: affordability threshold
Digital rights
Internet access
Internet traffic
List of sovereign states by Internet connection speeds
List of countries by number of mobile phones in use
List of social networking services
Zettabyte Era
References
External links
"ICT Data and Statistics", International Telecommunication Union (ITU).
Internet Live Stats, Real Time Statistics Project.
Internet World Stats: Usage and Population Statistics, Miniwatts Marketing Group.
"40 maps that explain the internet", Timothy B. Lee, Vox Media, 2 June 2014.
"Information Geographies", Oxford Internet Institute.
"Internet Monitor", a research project of the Berkman Center for Internet & Society at Harvard University to evaluate, describe, and summarize the means, mechanisms, and extent of Internet access, content controls and activity around the world.
Internet
Digital divide
International telecommunications
World
J. Karl Hedrick (August 26, 1944 – February 22, 2017) was an American control theorist and a professor in the Department of Mechanical Engineering at the University of California, Berkeley. He made seminal contributions in nonlinear control and estimation. Prior to joining the faculty at the University of California, Berkeley, he was a professor at the Massachusetts Institute of Technology from 1974 to 1988. Hedrick received a bachelor's degree in Engineering Mechanics from the University of Michigan (1966) and an M.S. and Ph.D. from Stanford University (1970, 1971).
Hedrick was the head of the Vehicle Dynamics and Control Laboratory at UC Berkeley. He led the Partners for Advanced Transit and Highways Research Center (1997–2003), which conducts research in advanced vehicle control systems, advanced traffic management and information systems, and technology leading to automated highway systems.
He wrote two books and published more than 140 peer-reviewed archival publications, and graduated over 70 Ph.D. students in his career at MIT, Arizona State, and Berkeley.
He was awarded the Rufus Oldenburger Medal from the American Society of Mechanical Engineers in 2006 and was elected to the National Academy of Engineering in 2014.
References
External links
Personal Website
1944 births
2017 deaths
Control theorists
UC Berkeley College of Engineering faculty
Stanford University alumni
Massachusetts Institute of Technology faculty
University of Michigan College of Engineering alumni
American mechanical engineers
Members of the United States National Academy of Engineering
Fellows of the American Society of Mechanical Engineers
In radiometry, radiant exitance or radiant emittance is the radiant flux emitted by a surface per unit area, whereas spectral exitance or spectral emittance is the radiant exitance of a surface per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. This is the emitted component of radiosity. The SI unit of radiant exitance is the watt per square metre (W·m⁻²), while that of spectral exitance in frequency is the watt per square metre per hertz (W·m⁻²·Hz⁻¹) and that of spectral exitance in wavelength is the watt per square metre per metre (W·m⁻³), commonly expressed as the watt per square metre per nanometre (W·m⁻²·nm⁻¹). The CGS unit erg per square centimetre per second (erg·cm⁻²·s⁻¹) is often used in astronomy. Radiant exitance is often called "intensity" in branches of physics other than radiometry, but in radiometry this usage leads to confusion with radiant intensity.
Mathematical definitions
Radiant exitance
Radiant exitance of a surface, denoted M_e ("e" for "energetic", to avoid confusion with photometric quantities), is defined as

M_e = ∂Φ_e / ∂A,

where
∂ is the partial derivative symbol,
Φ_e is the radiant flux emitted, and
A is the surface area.
The radiant flux received by a surface is called irradiance.
The radiant exitance of a black surface, according to the Stefan–Boltzmann law, is equal to:

M_e° = σT⁴,

where σ is the Stefan–Boltzmann constant, and T is the temperature of that surface.
For a real surface, the radiant exitance is equal to:

M_e = εσT⁴,

where ε is the emissivity of that surface.
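As a quick numerical illustration of the two formulas above (a sketch: the constant is the CODATA value, and the function name is our own):

```python
# Radiant exitance via the Stefan–Boltzmann law: M_e = ε·σ·T⁴.
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant σ, W·m⁻²·K⁻⁴ (CODATA 2018)

def radiant_exitance(temperature_k: float, emissivity: float = 1.0) -> float:
    """Radiant exitance in W·m⁻² of a surface at the given temperature.

    emissivity=1.0 gives the black-surface case; 0 < ε < 1 a real surface.
    """
    return emissivity * SIGMA * temperature_k ** 4

# A black surface at room temperature (300 K) emits about 459 W/m².
print(round(radiant_exitance(300.0), 1))  # 459.3
```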
Spectral exitance
Spectral exitance in frequency of a surface, denoted M_e,ν, is defined as

M_e,ν = ∂M_e / ∂ν,

where ν is the frequency.
Spectral exitance in wavelength of a surface, denoted M_e,λ, is defined as

M_e,λ = ∂M_e / ∂λ,

where λ is the wavelength.
The spectral exitance of a black surface around a given frequency or wavelength, according to Lambert's cosine law and Planck's law, is equal to:

M_e,ν° = (2πhν³ / c²) · 1 / (exp(hν / kT) − 1),
M_e,λ° = (2πhc² / λ⁵) · 1 / (exp(hc / λkT) − 1),

where
h is the Planck constant,
ν is the frequency,
λ is the wavelength,
k is the Boltzmann constant,
c is the speed of light in the medium, and
T is the temperature of that surface.
For a real surface, the spectral exitance is equal to:

M_e,ν = ε M_e,ν°,
M_e,λ = ε M_e,λ°.
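The wavelength form of Planck's law above can likewise be evaluated numerically. This sketch locates the peak of the solar-temperature curve, which Wien's displacement law puts near 500 nm (constants are CODATA values; the function name is our own):

```python
import math

H = 6.62607015e-34   # Planck constant h, J·s
C = 2.99792458e8     # speed of light c, m/s (vacuum assumed as the medium)
K = 1.380649e-23     # Boltzmann constant k, J/K

def spectral_exitance_wl(wavelength_m: float, temperature_k: float,
                         emissivity: float = 1.0) -> float:
    """Spectral exitance M_e,λ in W·m⁻³; emissivity=1 gives a black surface."""
    planck_term = 2.0 * math.pi * H * C**2 / wavelength_m**5
    # expm1 keeps the denominator accurate when h·c/(λ·k·T) is small
    return emissivity * planck_term / math.expm1(H * C / (wavelength_m * K * temperature_k))

# Peak of the 5778 K (solar effective temperature) curve, searched over 100–3000 nm:
peak_nm = max(range(100, 3000), key=lambda nm: spectral_exitance_wl(nm * 1e-9, 5778.0))
print(peak_nm)  # close to 500 nm, as Wien's displacement law predicts
```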
SI radiometry units
See also
Radiosity
References
Physical quantities
Radiometry
The Farmers Bank Building was a 27-story skyscraper in Pittsburgh, Pennsylvania, completed in 1902 and demolished on May 25, 1997. The University of Pittsburgh's online digital library states the building was constructed in 1903 and had 24 stories. To a generation of Pittsburgh sports fans the building is well remembered for a failed mid-1960s resurfacing and rehabilitation, but also fondly for a 15-story-high mural of Roberto Clemente, Bill Mazeroski, Jack Lambert, Mean Joe Greene and Mario Lemieux, completed in 1992 by Judy Penzer, who was killed in the crash of TWA Flight 800 four years later. For the five years the mural existed, it was often a centerpiece shot for national networks cutting to or from games while they were in town for sporting events.
Rockwell International owned the building starting in the mid-1960s and used it as its global headquarters, selling it in early 1972 and consolidating its headquarters staff in the U.S. Steel Tower blocks away.
The building was imploded by Controlled Demolition, Inc. on the afternoon of May 25, 1997. In its place, a low-rise department store named Lazarus was built on the site. That building has since been extensively redesigned and now operates as a condominium development named Piatt Place.
See also
List of tallest buildings in Pittsburgh
References
1902 establishments in Pennsylvania
1997 disestablishments in Pennsylvania
Office buildings completed in 1902
Buildings and structures demolished in 1997
Skyscraper office buildings in Pittsburgh
Demolished buildings and structures in Pittsburgh
Former skyscrapers
Buildings and structures demolished by controlled implosion
Sedimentary rocks are types of rock that are formed by the accumulation or deposition of sediments, i.e., mineral or organic particles, at Earth's surface, followed by cementation. Sedimentation is the collective name for processes that cause these particles to settle in place. The particles that form a sedimentary rock are called sediment, and may be composed of geological detritus (minerals) or biological detritus (organic matter). The geological detritus originated from weathering and erosion of existing rocks, or from the solidification of molten lava blobs erupted by volcanoes. The geological detritus is transported to the place of deposition by water, wind, ice or mass movement, which are called agents of denudation. Biological detritus was formed by bodies and parts (mainly shells) of dead aquatic organisms, as well as their fecal mass, suspended in water and slowly piling up on the floor of water bodies (marine snow). Sedimentation may also occur as dissolved minerals precipitate from water solution.
The sedimentary rock cover of the continents of the Earth's crust is extensive (73% of the Earth's current land surface), but sedimentary rock is estimated to be only 8% of the volume of the crust. Sedimentary rocks are only a thin veneer over a crust consisting mainly of igneous and metamorphic rocks. Sedimentary rocks are deposited in layers as strata, forming a structure called bedding. Sedimentary rocks are often deposited in large structures called sedimentary basins. Sedimentary rocks have also been found on Mars.
The study of sedimentary rocks and rock strata provides information about the subsurface that is useful for civil engineering, for example in the construction of roads, houses, tunnels, canals or other structures. Sedimentary rocks are also important sources of natural resources including coal, fossil fuels, drinking water and ores.
The study of the sequence of sedimentary rock strata is the main source for an understanding of the Earth's history, including palaeogeography, paleoclimatology and the history of life. The scientific discipline that studies the properties and origin of sedimentary rocks is called sedimentology. Sedimentology is part of both geology and physical geography and overlaps partly with other disciplines in the Earth sciences, such as pedology, geomorphology, geochemistry and structural geology.
Classification based on origin
Sedimentary rocks can be subdivided into four groups based on the processes responsible for their formation: clastic sedimentary rocks, biochemical (biogenic) sedimentary rocks, chemical sedimentary rocks, and a fourth category for "other" sedimentary rocks formed by impacts, volcanism, and other minor processes.
Clastic sedimentary rocks
Clastic sedimentary rocks are composed of rock fragments (clasts) that have been cemented together. The clasts are commonly individual grains of quartz, feldspar, clay minerals, or mica. However, any type of mineral may be present. Clasts may also be lithic fragments composed of more than one mineral.
Clastic sedimentary rocks are subdivided according to the dominant particle size. Most geologists use the Udden-Wentworth grain size scale and divide unconsolidated sediment into three fractions: gravel (>2 mm diameter), sand (1/16 to 2 mm diameter), and mud (<1/16 mm diameter). Mud is further divided into silt (1/16 to 1/256 mm diameter) and clay (<1/256 mm diameter). The classification of clastic sedimentary rocks parallels this scheme; conglomerates and breccias are made mostly of gravel, sandstones are made mostly of sand, and mudrocks are made mostly of mud. This tripartite subdivision is mirrored by the broad categories of rudites, arenites, and lutites, respectively, in older literature.
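The size fractions above map directly onto a small classifier (a sketch; the function name is ours):

```python
def wentworth_fraction(diameter_mm: float) -> str:
    """Size fraction of a particle on the Udden-Wentworth scale, using the
    boundaries given above: gravel > 2 mm, sand 1/16–2 mm,
    silt 1/256–1/16 mm, clay < 1/256 mm."""
    if diameter_mm > 2.0:
        return "gravel"      # -> conglomerates and breccias (rudites)
    if diameter_mm >= 1 / 16:
        return "sand"        # -> sandstones (arenites)
    if diameter_mm >= 1 / 256:
        return "silt"        # -> siltstones, a mudrock (lutites)
    return "clay"            # -> claystones, a mudrock (lutites)

print(wentworth_fraction(5.0), wentworth_fraction(0.5),
      wentworth_fraction(0.01), wentworth_fraction(0.001))
# gravel sand silt clay
```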
The subdivision of these three broad categories is based on differences in clast shape (conglomerates and breccias), composition (sandstones), or grain size or texture (mudrocks).
Conglomerates and breccias
Breccias are dominantly composed of angular gravel in a groundmass (matrix), while conglomerates are dominantly composed of rounded gravel.
Sandstones
Sandstone classification schemes vary widely, but most geologists have adopted the Dott scheme, which uses the relative abundance of quartz, feldspar, and lithic framework grains and the abundance of a muddy matrix between the larger grains.
Composition of framework grains
The relative abundance of sand-sized framework grains determines the first word in a sandstone name. Naming depends on which of the three most abundant components dominates: quartz, feldspar, or lithic fragments that originated from other rocks. All other minerals are considered accessories and are not used in the naming of the rock, regardless of abundance.
Quartz sandstones have >90% quartz grains
Feldspathic sandstones have <90% quartz grains and more feldspar grains than lithic grains
Lithic sandstones have <90% quartz grains and more lithic grains than feldspar grains
Abundance of muddy matrix material between sand grains
When sand-sized particles are deposited, the space between the grains either remains open or is filled with mud (silt- and/or clay-sized particles).
"Clean" sandstones with open pore space (that may later be filled with matrix material) are called arenites.
Muddy sandstones with abundant (>10%) muddy matrix are called wackes.
Six sandstone names are possible using the descriptors for grain composition (quartz-, feldspathic-, and lithic-) and the amount of matrix (wacke or arenite). For example, a quartz arenite would be composed of mostly (>90%) quartz grains and have little or no clayey matrix between the grains, a lithic wacke would have abundant lithic grains and abundant muddy matrix, etc.
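A minimal sketch of the six-name Dott scheme described above (the fraction-based inputs and the tie-breaking between feldspar and lithic grains are our assumptions):

```python
def dott_name(quartz: float, feldspar: float, lithic: float,
              matrix: float) -> str:
    """Sandstone name under the Dott scheme: a grain-composition descriptor
    plus an arenite/wacke texture term. quartz, feldspar and lithic are
    fractions of the framework grains; matrix is the muddy-matrix fraction
    of the whole rock."""
    if quartz > 0.90:
        descriptor = "quartz"
    elif feldspar > lithic:          # ties treated as lithic here (assumption)
        descriptor = "feldspathic"
    else:
        descriptor = "lithic"
    texture = "wacke" if matrix > 0.10 else "arenite"
    return f"{descriptor} {texture}"

print(dott_name(0.95, 0.03, 0.02, matrix=0.02))  # quartz arenite
print(dott_name(0.40, 0.15, 0.45, matrix=0.30))  # lithic wacke
```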
Although the Dott classification scheme is widely used by sedimentologists, common names like greywacke, arkose, and quartz sandstone are still widely used by non-specialists and in popular literature.
Mudrocks
Mudrocks are sedimentary rocks composed of at least 50% silt- and clay-sized particles. These relatively fine-grained particles are commonly transported by turbulent flow in water or air, and deposited as the flow calms and the particles settle out of suspension.
Most authors presently use the term "mudrock" to refer to all rocks composed dominantly of mud. Mudrocks can be divided into siltstones, composed dominantly of silt-sized particles; mudstones with subequal mixture of silt- and clay-sized particles; and claystones, composed mostly of clay-sized particles. Most authors use "shale" as a term for a fissile mudrock (regardless of grain size) although some older literature uses the term "shale" as a synonym for mudrock.
Biochemical sedimentary rocks
Biochemical sedimentary rocks are created when organisms use materials dissolved in air or water to build their tissue. Examples include:
Most types of limestone are formed from the calcareous skeletons of organisms such as corals, mollusks, and foraminifera.
Coal, formed from vegetation that removed carbon from the atmosphere and combined it with other elements to build its tissue; this vegetation is compressed by overlying sediments and undergoes chemical transformation.
Deposits of chert formed from the accumulation of siliceous skeletons of microscopic organisms such as radiolaria and diatoms.
Chemical sedimentary rocks
Chemical sedimentary rock forms when mineral constituents in solution become supersaturated and inorganically precipitate. Common chemical sedimentary rocks include oolitic limestone and rocks composed of evaporite minerals, such as halite (rock salt), sylvite, baryte and gypsum.
Other sedimentary rocks
This fourth miscellaneous category includes volcanic tuff and volcanic breccias formed by deposition and later cementation of lava fragments erupted by volcanoes, and impact breccias formed after impact events.
Classification based on composition
Alternatively, sedimentary rocks can be subdivided into compositional groups based on their mineralogy:
Siliciclastic sedimentary rocks, are dominantly composed of silicate minerals. The sediment that makes up these rocks was transported as bed load, suspended load, or by sediment gravity flows. Siliciclastic sedimentary rocks are subdivided into conglomerates and breccias, sandstone, and mudrocks.
Carbonate sedimentary rocks are composed of calcite (rhombohedral CaCO₃), aragonite (orthorhombic CaCO₃), dolomite (CaMg(CO₃)₂), and other carbonate minerals based on the CO₃²⁻ ion. Common examples include limestone and the rock dolomite.
Evaporite sedimentary rocks are composed of minerals formed from the evaporation of water. The most common evaporite minerals are carbonates (calcite and others based on CO₃²⁻), chlorides (halite and others built on Cl⁻), and sulfates (gypsum and others built on SO₄²⁻). Evaporite rocks commonly include abundant halite (rock salt), gypsum, and anhydrite.
Organic-rich sedimentary rocks have significant amounts of organic material, generally in excess of 3% total organic carbon. Common examples include coal and oil shale, as well as source rocks for oil and natural gas.
Siliceous sedimentary rocks are almost entirely composed of silica (SiO₂), typically as chert, opal, chalcedony or other microcrystalline forms.
Iron-rich sedimentary rocks are composed of >15% iron; the most common forms are banded iron formations and ironstones.
Phosphatic sedimentary rocks are composed of phosphate minerals and contain more than 6.5% phosphorus; examples include deposits of phosphate nodules, bone beds, and phosphatic mudrocks.
Deposition and transformation
Sediment transport and deposition
Sedimentary rocks are formed when sediment is deposited out of air, ice, wind, gravity, or water flows carrying the particles in suspension. This sediment is often formed when weathering and erosion break down a rock into loose material in a source area. The material is then transported from the source area to the deposition area. The type of sediment transported depends on the geology of the hinterland (the source area of the sediment). However, some sedimentary rocks, such as evaporites, are composed of material that form at the place of deposition. The nature of a sedimentary rock, therefore, not only depends on the sediment supply, but also on the sedimentary depositional environment in which it formed.
Transformation (Diagenesis)
As sediments accumulate in a depositional environment, older sediments are buried by younger sediments, and they undergo diagenesis. Diagenesis includes all the chemical, physical, and biological changes, exclusive of surface weathering, undergone by a sediment after its initial deposition. This includes compaction and lithification of the sediments. Early stages of diagenesis, described as eogenesis, take place at shallow depths (a few tens of meters) and are characterized by bioturbation and mineralogical changes in the sediments, with only slight compaction. The red hematite that gives red bed sandstones their color is likely formed during eogenesis. Some biochemical processes, like the activity of bacteria, can affect minerals in a rock and are therefore seen as part of diagenesis.
Deeper burial is accompanied by mesogenesis, during which most of the compaction and lithification takes place. Compaction takes place as the sediments come under increasing overburden (lithostatic) pressure from overlying sediments. Sediment grains move into more compact arrangements, grains of ductile minerals (such as mica) are deformed, and pore space is reduced. Sediments are typically saturated with groundwater or seawater when originally deposited, and as pore space is reduced, much of these connate fluids are expelled. In addition to this physical compaction, chemical compaction may take place via pressure solution. Points of contact between grains are under the greatest strain, and the strained mineral is more soluble than the rest of the grain. As a result, the contact points are dissolved away, allowing the grains to come into closer contact. The increased pressure and temperature stimulate further chemical reactions, such as the reactions by which organic material becomes lignite or coal.
Lithification follows closely on compaction, as increased temperatures at depth hasten the precipitation of cement that binds the grains together. Pressure solution contributes to this process of cementation, as the mineral dissolved from strained contact points is redeposited in the unstrained pore spaces. This further reduces porosity and makes the rock more compact and competent.
Unroofing of buried sedimentary rock is accompanied by telogenesis, the third and final stage of diagenesis. As erosion reduces the depth of burial, renewed exposure to meteoric water produces additional changes to the sedimentary rock, such as leaching of some of the cement to produce secondary porosity.
At sufficiently high temperature and pressure, the realm of diagenesis makes way for metamorphism, the process that forms metamorphic rock.
Properties
Color
The color of a sedimentary rock is often mostly determined by iron, an element with two major oxides: iron(II) oxide and iron(III) oxide. Iron(II) oxide (FeO) only forms under low oxygen (anoxic) circumstances and gives the rock a grey or greenish colour. Iron(III) oxide (Fe2O3) in a richer oxygen environment is often found in the form of the mineral hematite and gives the rock a reddish to brownish colour. In arid continental climates rocks are in direct contact with the atmosphere, and oxidation is an important process, giving the rock a red or orange colour. Thick sequences of red sedimentary rocks formed in arid climates are called red beds. However, a red colour does not necessarily mean the rock formed in a continental environment or arid climate.
The presence of organic material can colour a rock black or grey. Organic material is formed from dead organisms, mostly plants. Normally, such material eventually decays by oxidation or bacterial activity. Under anoxic circumstances, however, organic material cannot decay and leaves a dark sediment, rich in organic material. This can, for example, occur at the bottom of deep seas and lakes. There is little water mixing in such environments; as a result, oxygen from surface water is not brought down, and the deposited sediment is normally a fine dark clay. Dark rocks, rich in organic material, are therefore often shales.
Texture
The size, form and orientation of clasts (the original pieces of rock) in a sediment is called its texture. The texture is a small-scale property of a rock, but determines many of its large-scale properties, such as the density, porosity or permeability.
The 3D orientation of the clasts is called the fabric of the rock. The size and form of clasts can be used to determine the velocity and direction of current in the sedimentary environment that moved the clasts from their origin; fine, calcareous mud only settles in quiet water while gravel and larger clasts are moved only by rapidly moving water. The grain size of a rock is usually expressed with the Wentworth scale, though alternative scales are sometimes used. The grain size can be expressed as a diameter or a volume, and is always an average value, since a rock is composed of clasts with different sizes. The statistical distribution of grain sizes is different for different rock types and is described in a property called the sorting of the rock. When all clasts are more or less of the same size, the rock is called 'well-sorted', and when there is a large spread in grain size, the rock is called 'poorly sorted'.
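Sorting is commonly quantified as the standard deviation of grain sizes on the logarithmic phi scale (φ = −log₂ of the diameter in millimetres). In the sketch below, the verbal thresholds follow one common simplified convention and are our assumption, not part of the text above:

```python
import math
import statistics

def phi(diameter_mm: float) -> float:
    """Krumbein phi scale: φ = −log₂(d / 1 mm)."""
    return -math.log2(diameter_mm)

def describe_sorting(diameters_mm: list[float]) -> str:
    """Verbal sorting class from the standard deviation of φ values
    (thresholds are a simplified, assumed convention)."""
    sigma_phi = statistics.stdev(phi(d) for d in diameters_mm)
    if sigma_phi < 0.5:
        return "well sorted"
    if sigma_phi < 1.0:
        return "moderately sorted"
    return "poorly sorted"

print(describe_sorting([0.25, 0.26, 0.24, 0.25]))  # uniform sand: well sorted
print(describe_sorting([0.01, 0.1, 1.0, 10.0]))    # wide size spread: poorly sorted
```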
The form of the clasts can reflect the origin of the rock. For example, coquina, a rock composed of clasts of broken shells, can only form in energetic water. The form of a clast can be described by using four parameters:
Surface texture describes the amount of small-scale relief of the surface of a grain that is too small to influence the general shape. For example, frosted grains, which are covered with small-scale fractures, are characteristic of eolian sandstones.
Rounding describes the general smoothness of the shape of a grain.
Sphericity describes the degree to which the grain approaches a sphere.
Grain form describes the three-dimensional shape of the grain.
Chemical sedimentary rocks have a non-clastic texture, consisting entirely of crystals. To describe such a texture, only the average size of the crystals and the fabric are necessary.
Mineralogy
Most sedimentary rocks contain either quartz (siliciclastic rocks) or calcite (carbonate rocks). In contrast to igneous and metamorphic rocks, a sedimentary rock usually contains very few different major minerals. However, the origin of the minerals in a sedimentary rock is often more complex than in an igneous rock. Minerals in a sedimentary rock may have been present in the original sediments or may have formed by precipitation during diagenesis. In the second case, a mineral precipitate may have grown over an older generation of cement. A complex diagenetic history can be established by optical mineralogy, using a petrographic microscope.
Carbonate rocks predominantly consist of carbonate minerals such as calcite, aragonite or dolomite. Both the cement and the clasts (including fossils and ooids) of a carbonate sedimentary rock usually consist of carbonate minerals. The mineralogy of a clastic rock is determined by the material supplied by the source area, the manner of its transport to the place of deposition and the stability of that particular mineral.
The resistance of rock-forming minerals to weathering is expressed by the Goldich dissolution series. In this series, quartz is the most stable, followed by feldspar, micas, and finally other less stable minerals that are only present when little weathering has occurred. The amount of weathering depends mainly on the distance to the source area, the local climate and the time it took for the sediment to be transported to the point where it is deposited. In most sedimentary rocks, mica, feldspar and less stable minerals have been weathered to clay minerals like kaolinite, illite or smectite.
Fossils
Among the three major types of rock, fossils are most commonly found in sedimentary rock. Unlike most igneous and metamorphic rocks, sedimentary rocks form at temperatures and pressures that do not destroy fossil remnants. Often these fossils may only be visible under magnification.
Dead organisms in nature are usually quickly removed by scavengers, bacteria, rotting and erosion, but under exceptional circumstances, these natural processes are unable to take place, leading to fossilisation. The chance of fossilisation is higher when the sedimentation rate is high (so that a carcass is quickly buried), in anoxic environments (where little bacterial activity occurs) or when the organism had a particularly hard skeleton. Larger, well-preserved fossils are relatively rare.
Fossils can be both the direct remains or imprints of organisms and their skeletons. Most commonly preserved are the harder parts of organisms such as bones, shells, and the woody tissue of plants. Soft tissue has a much smaller chance of being fossilized, and the preservation of soft tissue of animals older than 40 million years is very rare. Imprints of organisms made while they were still alive are called trace fossils, examples of which are burrows, footprints, etc.
As a part of a sedimentary rock, fossils undergo the same diagenetic processes as does the host rock. For example, a shell consisting of calcite can dissolve while a cement of silica then fills the cavity. In the same way, precipitating minerals can fill cavities formerly occupied by blood vessels, vascular tissue or other soft tissues. This preserves the form of the organism but changes the chemical composition, a process called permineralization. The most common minerals involved in permineralization are various forms of amorphous silica (chalcedony, flint, chert), carbonates (especially calcite), and pyrite.
At high pressure and temperature, the organic material of a dead organism undergoes chemical reactions in which volatiles such as water and carbon dioxide are expulsed. The fossil, in the end, consists of a thin layer of pure carbon or its mineralized form, graphite. This form of fossilisation is called carbonisation. It is particularly important for plant fossils. The same process is responsible for the formation of fossil fuels like lignite or coal.
Primary sedimentary structures
Structures in sedimentary rocks can be divided into primary structures (formed during deposition) and secondary structures (formed after deposition). Unlike textures, structures are always large-scale features that can easily be studied in the field. Sedimentary structures can indicate something about the sedimentary environment or can serve to tell which side originally faced up where tectonics have tilted or overturned sedimentary layers.
Sedimentary rocks are laid down in layers called beds or strata. A bed is defined as a layer of rock that has a uniform lithology and texture. Beds form by the deposition of layers of sediment on top of each other. The sequence of beds that characterizes sedimentary rocks is called bedding. Single beds can be a couple of centimetres to several meters thick. Finer, less pronounced layers are called laminae, and the structure a lamina forms in a rock is called lamination. Laminae are usually less than a few centimetres thick. Though bedding and lamination are often originally horizontal in nature, this is not always the case. In some environments, beds are deposited at a (usually small) angle. Sometimes multiple sets of layers with different orientations exist in the same rock, a structure called cross-bedding. Cross-bedding is characteristic of deposition by a flowing medium (wind or water).
The opposite of cross-bedding is parallel lamination, where all sedimentary layering is parallel. Differences in laminations are generally caused by cyclic changes in the sediment supply, caused, for example, by seasonal changes in rainfall, temperature or biochemical activity. Laminae that represent seasonal changes (similar to tree rings) are called varves. Any sedimentary rock composed of millimeter or finer scale layers can be named with the general term laminite. When sedimentary rocks have no lamination at all, their structural character is called massive bedding.
Graded bedding is a structure where beds with a smaller grain size occur on top of beds with larger grains. This structure forms when fast flowing water stops flowing. Larger, heavier clasts in suspension settle first, then smaller clasts. Although graded bedding can form in many different environments, it is a characteristic of turbidity currents.
The surface of a particular bed, called the bedform, can also be indicative of a particular sedimentary environment. Examples of bed forms include dunes and ripple marks. Sole markings, such as tool marks and flute casts, are grooves eroded on a surface that are preserved by renewed sedimentation. These are often elongated structures and can be used to establish the direction of the flow during deposition.
Ripple marks also form in flowing water. They can be symmetric or asymmetric. Asymmetric ripples form in environments where the current is in one direction, such as rivers. The longer flank of such ripples is on the upstream side of the current. Symmetric wave ripples occur in environments where currents reverse directions, such as tidal flats.
Mudcracks are a bed form caused by the dehydration of sediment that occasionally comes above the water surface. Such structures are commonly found at tidal flats or point bars along rivers.
Secondary sedimentary structures
Secondary sedimentary structures are those which formed after deposition. Such structures form by chemical, physical and biological processes within the sediment. They can be indicators of circumstances after deposition. Some can be used as way up criteria.
Organic materials in a sediment can leave more traces than just fossils. Preserved tracks and burrows are examples of trace fossils (also called ichnofossils). Such traces are relatively rare. Most trace fossils are burrows of molluscs or arthropods. This burrowing is called bioturbation by sedimentologists. It can be a valuable indicator of the biological and ecological environment that existed after the sediment was deposited. On the other hand, the burrowing activity of organisms can destroy other (primary) structures in the sediment, making a reconstruction more difficult.
Secondary structures can also form by diagenesis or the formation of a soil (pedogenesis) when a sediment is exposed above the water level. An example of a diagenetic structure common in carbonate rocks is a stylolite. Stylolites are irregular planes where material was dissolved into the pore fluids in the rock. This can result in the precipitation of a certain chemical species producing colouring and staining of the rock, or the formation of concretions. Concretions are roughly concentric bodies with a different composition from the host rock. Their formation can be the result of localized precipitation due to small differences in composition or porosity of the host rock, such as around fossils, inside burrows or around plant roots. In carbonate rocks such as limestone or chalk, chert or flint concretions are common, while terrestrial sandstones sometimes contain iron concretions. Calcite concretions in clay containing angular cavities or cracks are called septarian concretions.
After deposition, physical processes can deform the sediment, producing a third class of secondary structures. Density contrasts between different sedimentary layers, such as between sand and clay, can result in flame structures or load casts, formed by inverted diapirism. While the clastic bed is still fluid, diapirism can cause a denser upper layer to sink into a lower layer. Sometimes, density contrasts occur or are enhanced when one of the lithologies dehydrates. Clay can be easily compressed as a result of dehydration, while sand retains the same volume and becomes relatively less dense. On the other hand, when the pore fluid pressure in a sand layer surpasses a critical point, the sand can break through overlying clay layers and flow through, forming discordant bodies of sedimentary rock called sedimentary dykes. The same process can form mud volcanoes on the surface where they broke through upper layers.
Sedimentary dykes can also be formed in a cold climate where the soil is permanently frozen during a large part of the year. Frost weathering can form cracks in the soil that fill with rubble from above. Such structures can be used as climate indicators as well as way up structures.
Density contrasts can also cause small-scale faulting, even while sedimentation progresses (synchronous-sedimentary faulting). Such faulting can also occur when large masses of non-lithified sediment are deposited on a slope, such as at the front side of a delta or the continental slope. Instabilities in such sediments can cause the deposited material to slump, producing fissures and folding. The resulting structures in the rock are syn-sedimentary folds and faults, which can be difficult to distinguish from folds and faults formed by tectonic forces acting on lithified rocks.
Depositional environments
The setting in which a sedimentary rock forms is called the depositional environment. Every environment has a characteristic combination of geologic processes, and circumstances. The type of sediment that is deposited is not only dependent on the sediment that is transported to a place (provenance), but also on the environment itself.
A marine environment means that the rock was formed in a sea or ocean. Often, a distinction is made between deep and shallow marine environments. Deep marine usually refers to environments more than 200 m below the water surface (including the abyssal plain). Shallow marine environments exist adjacent to coastlines and can extend to the boundaries of the continental shelf. The water movements in such environments have a generally higher energy than that in deep environments, as wave activity diminishes with depth. This means that coarser sediment particles can be transported and the deposited sediment can be coarser than in deeper environments. When the sediment is transported from the continent, an alternation of sand, clay and silt is deposited. When the continent is far away, the amount of such sediment deposited may be small, and biochemical processes dominate the type of rock that forms. Especially in warm climates, shallow marine environments far offshore mainly see deposition of carbonate rocks. The shallow, warm water is an ideal habitat for many small organisms that build carbonate skeletons. When these organisms die, their skeletons sink to the bottom, forming a thick layer of calcareous mud that may lithify into limestone. Warm shallow marine environments also are ideal environments for coral reefs, where the sediment consists mainly of the calcareous skeletons of larger organisms.
In deep marine environments, the water current working the sea bottom is small. Only fine particles can be transported to such places. Typically sediments depositing on the ocean floor are fine clay or small skeletons of micro-organisms. At 4 km depth, the solubility of carbonates increases dramatically (the depth zone where this happens is called the lysocline). Calcareous sediment that sinks below the lysocline dissolves; as a result, no limestone can be formed below this depth. Skeletons of micro-organisms formed of silica (such as radiolarians) are not as soluble and are still deposited. An example of a rock formed of silica skeletons is radiolarite. When the bottom of the sea has a small inclination, for example, at the continental slopes, the sedimentary cover can become unstable, causing turbidity currents. Turbidity currents are sudden disturbances of the normally quiet deep marine environment and can cause the near-instantaneous deposition of large amounts of sediment, such as sand and silt. The rock sequence formed by a turbidity current is called a turbidite.
The coast is an environment dominated by wave action. At a beach, dominantly denser sediment such as sand or gravel, often mingled with shell fragments, is deposited, while the silt and clay sized material is kept in mechanical suspension. Tidal flats and shoals are places that sometimes dry because of the tide. They are often cross-cut by gullies, where the current is strong and the grain size of the deposited sediment is larger. Where rivers enter the body of water, either on a sea or lake coast, deltas can form. These are large accumulations of sediment transported from the continent to places in front of the mouth of the river. Deltas are dominantly composed of clastic (rather than chemical) sediment.
A continental sedimentary environment is an environment in the interior of a continent. Examples of continental environments are lagoons, lakes, swamps, floodplains and alluvial fans. In the quiet water of swamps, lakes and lagoons, fine sediment is deposited, mingled with organic material from dead plants and animals. In rivers, the energy of the water is much greater and can transport heavier clastic material. Besides transport by water, sediment can be transported by wind or glaciers. Sediment transported by wind is called aeolian and is almost always very well sorted, while sediment transported by a glacier is called glacial till and is characterized by very poor sorting.
Aeolian deposits can be quite striking. The depositional environment of the Touchet Formation, located in the Northwestern United States, had intervening periods of aridity which resulted in a series of rhythmite layers. Erosional cracks were later infilled with layers of soil material, especially from aeolian processes. The infilled sections formed vertical inclusions in the horizontally deposited layers, and thus provided evidence of the sequence of events during deposition of the forty-one layers of the formation.
Sedimentary facies
The kind of rock formed in a particular depositional environment is called its sedimentary facies. Sedimentary environments usually exist alongside each other in certain natural successions. A beach, where sand and gravel is deposited, is usually bounded by a deeper marine environment a little offshore, where finer sediments are deposited at the same time. Behind the beach, there can be dunes (where the dominant deposition is well sorted sand) or a lagoon (where fine clay and organic material is deposited). Every sedimentary environment has its own characteristic deposits. When sedimentary strata accumulate through time, the environment can shift, forming a change in facies in the subsurface at one location. On the other hand, when a rock layer with a certain age is followed laterally, the lithology (the type of rock) and facies eventually change.
Facies can be distinguished in a number of ways: the most common are by the lithology (for example: limestone, siltstone or sandstone) or by fossil content. Coral, for example, only lives in warm and shallow marine environments and fossils of coral are thus typical for shallow marine facies. Facies determined by lithology are called lithofacies; facies determined by fossils are biofacies.
Sedimentary environments can shift their geographical positions through time. Coastlines can shift in the direction of the sea when the sea level drops (regression), when the surface rises (transgression) due to tectonic forces in the Earth's crust or when a river forms a large delta. In the subsurface, such geographic shifts of sedimentary environments of the past are recorded in shifts in sedimentary facies. This means that sedimentary facies can change either parallel or perpendicular to an imaginary layer of rock with a fixed age, a phenomenon described by Walther's Law.
The situation in which coastlines move in the direction of the continent is called transgression. In the case of transgression, deeper marine facies are deposited over shallower facies, a succession called onlap. Regression is the situation in which a coastline moves in the direction of the sea. With regression, shallower facies are deposited on top of deeper facies, a situation called offlap.
The facies of all rocks of a certain age can be plotted on a map to give an overview of the palaeogeography. A sequence of maps for different ages can give an insight in the development of the regional geography.
Gallery of sedimentary facies
Sedimentary basins
Places where large-scale sedimentation takes place are called sedimentary basins. The amount of sediment that can be deposited in a basin depends on the depth of the basin, the so-called accommodation space. The depth, shape and size of a basin depend on tectonics, movements within the Earth's lithosphere. Where the lithosphere moves upward (tectonic uplift), land eventually rises above sea level and the area becomes a source for new sediment as erosion removes material. Where the lithosphere moves downward (tectonic subsidence), a basin forms and sediments are deposited.
A type of basin formed by the moving apart of two pieces of a continent is called a rift basin. Rift basins are elongated, narrow and deep basins. Due to divergent movement, the lithosphere is stretched and thinned, so that the hot asthenosphere rises and heats the overlying rift basin. Apart from continental sediments, rift basins normally also have part of their infill consisting of volcanic deposits. When the basin grows due to continued stretching of the lithosphere, the rift grows and the sea can enter, forming marine deposits.
When a piece of lithosphere that was heated and stretched cools again, its density rises, causing isostatic subsidence. If this subsidence continues long enough, the basin is called a sag basin. Examples of sag basins are the regions along passive continental margins, but sag basins can also be found in the interior of continents. In sag basins, the extra weight of the newly deposited sediments is enough to keep the subsidence going in a vicious circle. The total thickness of the sedimentary infill in a sag basin can thus exceed 10 km.
A third type of basin exists along convergent plate boundaries – places where one tectonic plate moves under another into the asthenosphere. The subducting plate bends and forms a fore-arc basin in front of the overriding plate – an elongated, deep asymmetric basin. Fore-arc basins are filled with deep marine deposits and thick sequences of turbidites. Such infill is called flysch. When the convergent movement of the two plates results in continental collision, the basin becomes shallower and develops into a foreland basin. At the same time, tectonic uplift forms a mountain belt in the overriding plate, from which large amounts of material are eroded and transported to the basin. Such erosional material of a growing mountain chain is called molasse and has either a shallow marine or a continental facies.
At the same time, the growing weight of the mountain belt can cause isostatic subsidence in the area of the overriding plate on the other side to the mountain belt. The basin type resulting from this subsidence is called a back-arc basin and is usually filled by shallow marine deposits and molasse.
Influence of astronomical cycles
In many cases facies changes and other lithological features in sequences of sedimentary rock have a cyclic nature. This cyclic nature was caused by cyclic changes in sediment supply and the sedimentary environment. Short astronomical cycles include the daily tides and the spring tide every two weeks. On a larger time-scale, cyclic changes in climate and sea level are caused by Milankovitch cycles: cyclic changes in the orientation and/or position of the Earth's rotational axis and orbit around the Sun. There are a number of known Milankovitch cycles, lasting between 10,000 and 200,000 years.
Relatively small changes in the orientation of the Earth's axis or length of the seasons can be a major influence on the Earth's climate. An example are the ice ages of the past 2.6 million years (the Quaternary period), which are assumed to have been caused by astronomic cycles. Climate change can influence the global sea level (and thus the amount of accommodation space in sedimentary basins) and sediment supply from a certain region. Eventually, small changes in astronomic parameters can cause large changes in sedimentary environment and sedimentation.
Sedimentation rates
The rate at which sediment is deposited differs depending on the location. A channel in a tidal flat can see the deposition of a few metres of sediment in one day, while on the deep ocean floor each year only a few millimetres of sediment accumulate. A distinction can be made between normal sedimentation and sedimentation caused by catastrophic processes. The latter category includes all kinds of sudden exceptional processes like mass movements, rock slides or flooding. Catastrophic processes can see the sudden deposition of a large amount of sediment at once. In some sedimentary environments, most of the total column of sedimentary rock was formed by catastrophic processes, even though the environment is usually a quiet place. Other sedimentary environments are dominated by normal, ongoing sedimentation.
In many cases, sedimentation occurs slowly. In a desert, for example, the wind deposits siliciclastic material (sand or silt) in some spots, or catastrophic flooding of a wadi may cause sudden deposits of large quantities of detrital material, but in most places eolian erosion dominates. The amount of sedimentary rock that forms is not only dependent on the amount of supplied material, but also on how well the material consolidates. Erosion removes most deposited sediment shortly after deposition.
Stratigraphy
Sedimentary rocks are laid down in layers called beds or strata; each layer is laid down horizontally over older ones, with new layers above older layers, as stated in the principle of superposition. There are usually some gaps in the sequence, called unconformities, which represent periods where no new sediments were laid down, or when earlier sedimentary layers were raised above sea level and eroded away.
Unconformities can be classified based on the orientation of the strata on either side of the unconformity:
Angular unconformity when the earlier layers are tilted and eroded while the later layers are horizontally laid.
Nonconformity if the earlier layers have no bedding in contrast to the later layers, i.e. they are igneous or metamorphic rocks.
Disconformity if both the early beds and the later beds are parallel to each other.
Sedimentary rocks contain important information about the history of the Earth. They contain fossils, the preserved remains of ancient plants and animals. Coal is considered a type of sedimentary rock. The composition of sediments provides us with clues as to the original rock. Differences between successive layers indicate changes to the environment over time. Sedimentary rocks can contain fossils because, unlike most igneous and metamorphic rocks, they form at temperatures and pressures that do not destroy fossil remains.
Provenance
Provenance is the reconstruction of the origin of sediments. All rock exposed at Earth's surface is subjected to physical or chemical weathering and broken down into finer grained sediment. All three types of rocks (igneous, sedimentary and metamorphic rocks) can be the source of sedimentary detritus. The purpose of sedimentary provenance studies is to reconstruct and interpret the history of sediment from the initial parent rocks at a source area to final detritus at a burial place.
See also
References
Citations
General and cited references
External links
Basic Sedimentary Rock Classification, by Lynn S. Fichter, James Madison University, Harrisonburg, VA;
Sedimentary Rocks Tour, introduction to sedimentary rocks, by Bruce Perry, Department of Geological Sciences, California State University at Long Beach.
Petrology
Rocks
Attenuation coefficient

The linear attenuation coefficient, attenuation coefficient, or narrow-beam attenuation coefficient characterizes how easily a volume of material can be penetrated by a beam of light, sound, particles, or other energy or matter. A large coefficient value represents a beam that is strongly 'attenuated' as it passes through a given medium, while a small value represents a medium that has little effect on the beam. The (derived) SI unit of the attenuation coefficient is the reciprocal metre (m−1). Extinction coefficient is another term for this quantity, often used in meteorology and climatology. Most commonly, the quantity measures the exponential decay of intensity, that is, the e-folding of the original intensity as the beam passes through a unit (e.g. one metre) thickness of material, so that an attenuation coefficient of 1 m−1 means that after passing through 1 metre, the radiation will be reduced by a factor of e, and for material with a coefficient of 2 m−1, it will be reduced by a factor of e². Other measures may use a different factor than e, such as the decadic attenuation coefficient below. The broad-beam attenuation coefficient counts forward-scattered radiation as transmitted rather than attenuated, and is more applicable to radiation shielding.
The mass attenuation coefficient is the attenuation coefficient normalized by the density of the material.
Overview
The attenuation coefficient describes the extent to which the radiant flux of a beam is reduced as it passes through a specific material. It is used in the context of:
X-rays or gamma rays, where it is denoted μ and measured in cm−1;
neutrons and nuclear reactors, where it is called macroscopic cross section (although actually it is not a section dimensionally speaking), denoted Σ and measured in m−1;
ultrasound attenuation, where it is denoted α and measured in dB⋅cm−1⋅MHz−1;
acoustics for characterizing particle size distribution, where it is denoted α and measured in m−1.
The attenuation coefficient is called the "extinction coefficient" in the context of
solar and infrared radiative transfer in the atmosphere, albeit usually denoted with another symbol (given the standard use of for slant paths);
A small attenuation coefficient indicates that the material in question is relatively transparent, while a larger value indicates greater degrees of opacity. The attenuation coefficient is dependent upon the type of material and the energy of the radiation. Generally, for electromagnetic radiation, the higher the energy of the incident photons and the less dense the material in question, the lower the corresponding attenuation coefficient will be.
Mathematical definitions
Attenuation coefficient
The attenuation coefficient of a volume, denoted μ, is defined as

μ = −(1/Φe) (dΦe/dz),

where

Φe is the radiant flux;
z is the path length of the beam.

Note that for an attenuation coefficient which does not vary with z, this equation is solved along a line from z = 0 to z as:

Φe(z) = Φe(0) e^(−μz),

where Φe(0) is the incoming radiation flux at z = 0 and Φe(z) is the radiation flux at z.
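The exponential solution Φe(z) = Φe(0) e^(−μz) can be checked with a minimal numerical sketch; the flux and coefficient values below are illustrative, not from the source:

```python
import math

def transmitted_flux(phi0, mu, z):
    """Radiant flux after traversing thickness z (m) of a medium
    with a constant linear attenuation coefficient mu (1/m)."""
    return phi0 * math.exp(-mu * z)

# mu = 1 m^-1: after 1 m the flux is reduced by a factor of e
print(round(transmitted_flux(100.0, 1.0, 1.0), 3))  # -> 36.788 (100/e)

# mu = 2 m^-1: after 1 m the reduction is e^2
print(round(transmitted_flux(100.0, 2.0, 1.0), 3))  # -> 13.534 (100/e^2)
```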
Spectral hemispherical attenuation coefficient
The spectral hemispherical attenuation coefficient in frequency and spectral hemispherical attenuation coefficient in wavelength of a volume, denoted μν and μλ respectively, are defined as:

μν = −(1/Φe,ν) (dΦe,ν/dz),
μλ = −(1/Φe,λ) (dΦe,λ/dz),
where
Φe,ν is the spectral radiant flux in frequency;
Φe,λ is the spectral radiant flux in wavelength.
Directional attenuation coefficient
The directional attenuation coefficient of a volume, denoted μΩ, is defined as

μΩ = −(1/Le,Ω) (dLe,Ω/dz),

where Le,Ω is the radiance.
Spectral directional attenuation coefficient
The spectral directional attenuation coefficient in frequency and spectral directional attenuation coefficient in wavelength of a volume, denoted μΩ,ν and μΩ,λ respectively, are defined as

μΩ,ν = −(1/Le,Ω,ν) (dLe,Ω,ν/dz),
μΩ,λ = −(1/Le,Ω,λ) (dLe,Ω,λ/dz),
where
Le,Ω,ν is the spectral radiance in frequency;
Le,Ω,λ is the spectral radiance in wavelength.
Absorption and scattering coefficients
When a narrow (collimated) beam passes through a volume, the beam will lose intensity due to two processes: absorption and scattering. Absorption indicates energy that is lost from the beam, while scattering indicates light that is redirected in a (random) direction, and hence is no longer in the beam, but still present, resulting in diffuse light.
The absorption coefficient of a volume, denoted μa, and the scattering coefficient of a volume, denoted μs, are defined the same way as the attenuation coefficient.
The attenuation coefficient of a volume is the sum of the absorption and scattering coefficients:

μ = μa + μs.
Just looking at the narrow beam itself, the two processes cannot be distinguished. However, if a detector is set up to measure beam leaving in different directions, or conversely using a non-narrow beam, one can measure how much of the lost radiant flux was scattered, and how much was absorbed.
In this context, the "absorption coefficient" measures how quickly the beam would lose radiant flux due to the absorption alone, while "attenuation coefficient" measures the total loss of narrow-beam intensity, including scattering as well. "Narrow-beam attenuation coefficient" always unambiguously refers to the latter. The attenuation coefficient is at least as large as the absorption coefficient; they are equal in the idealized case of no scattering.
Expression in terms of density and cross section
The absorption coefficient may be expressed in terms of a number density of absorbing centers n and an absorbing cross section area σ. For a slab of area A and thickness dz, the total number of absorbing centers contained is n A dz. Assuming that dz is so small that there will be no overlap of the cross section areas, the total area available for absorption will be n A σ dz and the fraction of radiation absorbed is then n σ dz. The absorption coefficient is thus μ = nσ.
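The relation μ = nσ can be put into a small numerical sketch; the number density and cross section below are arbitrary illustrative values:

```python
def attenuation_from_cross_section(n, sigma):
    """Linear attenuation coefficient mu = n * sigma.

    n     : number density of absorbing centres (1/m^3)
    sigma : absorbing cross section per centre (m^2)
    """
    return n * sigma

n = 1.0e28       # centres per m^3 (illustrative)
sigma = 1.0e-28  # m^2 per centre (illustrative)
mu = attenuation_from_cross_section(n, sigma)
print(round(mu, 6))  # -> 1.0 (m^-1)
```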
Mass attenuation, absorption, and scattering coefficients
The mass attenuation coefficient, mass absorption coefficient, and mass scattering coefficient are defined as

μ/ρm, μa/ρm and μs/ρm,

where ρm is the mass density.
Napierian and decadic attenuation coefficients
Decibels
Engineering applications often express attenuation in the logarithmic units of decibels, or "dB", where 10 dB represents attenuation by a factor of 10. The units for attenuation coefficient are thus dB/m (or, in general, dB per unit distance). Note that in logarithmic units such as dB, the attenuation is a linear function of distance, rather than exponential. This has the advantage that the result of multiple attenuation layers can be found by simply adding up the dB loss for each individual passage. However, if intensity is desired, the logarithms must be converted back into linear units by using an exponential:
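The additivity of dB losses, and the conversion back to a linear intensity ratio, can be sketched as follows (helper names and values are illustrative, not from the source):

```python
def db_loss(alpha_db_per_m, length_m):
    """Total attenuation in dB for one layer: loss = alpha * length."""
    return alpha_db_per_m * length_m

def intensity_ratio(total_db):
    """Convert a dB loss back to a linear transmitted/incident ratio."""
    return 10 ** (-total_db / 10)

# Two layers: dB losses simply add ...
total = db_loss(2.0, 3.0) + db_loss(4.0, 1.0)  # 6 dB + 4 dB
print(total)                   # -> 10.0
# ... and 10 dB corresponds to a factor-of-10 intensity reduction
print(intensity_ratio(total))  # -> 0.1
```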
Napierian attenuation
The decadic attenuation coefficient or decadic narrow beam attenuation coefficient, denoted μ10, is defined as

μ10 = μ / ln 10.

Just as the usual attenuation coefficient measures the number of e-fold reductions that occur over a unit length of material, this coefficient measures how many 10-fold reductions occur: a decadic coefficient of 1 m−1 means 1 m of material reduces the radiation once by a factor of 10.
μ is sometimes called the Napierian attenuation coefficient or Napierian narrow beam attenuation coefficient rather than just simply "attenuation coefficient". The terms "decadic" and "Napierian" come from the base used for the exponential in the Beer–Lambert law for a material sample, in which the two attenuation coefficients take part:

T = e^(−∫₀ℓ μ(z) dz) = 10^(−∫₀ℓ μ10(z) dz),
where
T is the transmittance of the material sample;
ℓ is the path length of the beam of light through the material sample.
In case of uniform attenuation, these relations become

T = e^(−μℓ) = 10^(−μ10 ℓ).
Cases of non-uniform attenuation occur in atmospheric science applications and radiation shielding theory for instance.
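In the uniform case the Beer–Lambert relations reduce to T = e^(−μℓ) = 10^(−μ10 ℓ) with μ10 = μ/ln 10; a quick numeric consistency check (the coefficient value is chosen purely for illustration):

```python
import math

mu = math.log(10)         # Napierian coefficient (1/m), chosen so mu10 = 1
mu10 = mu / math.log(10)  # decadic coefficient: mu10 = mu / ln(10)
length = 1.0              # m

T_napierian = math.exp(-mu * length)
T_decadic = 10 ** (-mu10 * length)
# Both forms give the same transmittance
print(round(T_napierian, 6), round(T_decadic, 6))  # -> 0.1 0.1
```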
The (Napierian) attenuation coefficient and the decadic attenuation coefficient of a material sample are related to the number densities and the amount concentrations of its N attenuating species as

μ(z) = Σi σi ni(z),
μ10(z) = Σi εi ci(z),
where
σi is the attenuation cross section of the attenuating species i in the material sample;
ni is the number density of the attenuating species i in the material sample;
εi is the molar attenuation coefficient of the attenuating species i in the material sample;
ci is the amount concentration of the attenuating species i in the material sample,
by definition of attenuation cross section and molar attenuation coefficient.
Attenuation cross section and molar attenuation coefficient are related by

σi = (ln 10 / NA) εi,

and number density and amount concentration by

ni = NA ci,

where NA is the Avogadro constant.
The half-value layer (HVL) is the thickness of a layer of material required to reduce the radiant flux of the transmitted radiation to half its incident magnitude. The half-value layer is about 69% (ln 2) of the penetration depth. Engineers use these equations to predict how much shielding thickness is required to attenuate radiation to acceptable or regulatory limits.
Attenuation coefficient is also inversely related to mean free path. Moreover, it is very closely related to the attenuation cross section.
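Both quantities follow directly from μ: the half-value layer is ln 2/μ and the mean free path is 1/μ. A minimal sketch with an illustrative coefficient:

```python
import math

def half_value_layer(mu):
    """Thickness reducing the flux to one half: HVL = ln(2) / mu."""
    return math.log(2) / mu

def mean_free_path(mu):
    """Average distance before an interaction: 1 / mu."""
    return 1.0 / mu

mu = 0.5  # 1/cm, illustrative
hvl = half_value_layer(mu)
print(round(hvl, 4))  # -> 1.3863 (cm)
# HVL is about 69% (ln 2) of the mean free path
print(round(hvl / mean_free_path(mu), 2))  # -> 0.69
```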
Other radiometric coefficients
See also
Absorption (electromagnetic radiation)
Absorption cross section
Absorption spectrum
Acoustic attenuation
Attenuation
Attenuation length
Beer–Lambert law
Cargo scanning
Compton edge
Compton scattering
Computation of radiowave attenuation in the atmosphere
Cross section (physics)
Grey atmosphere
High-energy X-rays
Mass attenuation coefficient
Mean free path
Propagation constant
Radiation length
Scattering theory
Transmittance
References
External links
Absorption Coefficients α of Building Materials and Finishes
Sound Absorption Coefficients for Some Common Materials
Tables of X-Ray Mass Attenuation Coefficients and Mass Energy-Absorption Coefficients from 1 keV to 20 MeV for Elements Z = 1 to 92 and 48 Additional Substances of Dosimetric Interest
Physical quantities
Radiometry
Acoustics
Shoes (2012 film)

Shoes is a 2012 international short film directed, written and produced by Konstantin Fam. The film is the result of a joint effort by a professional team from Russia, the USA, the Czech Republic, Poland, France, Belarus and Ukraine. It is the first part of the film trilogy "Witnesses", dedicated to the memory of victims of the Holocaust. It was the only nominee from Russia for the Academy Awards in the short film category in 2013.
Plot
The first installment traces the personal history of a Jewish girl in the 1930s–1940s from the point of view of a pair of red shoes, starting from the shop window where the shoes were purchased and ending at a mountain of discarded shoes of the victims in a mass grave of the Auschwitz concentration camp.
Film crew
Original idea: Dmitry Parshkov (Russia)
Director: Konstantin Fam (Russia)
Writer: Konstantin Fam (Russia)
Composer: Egor Romanenko (Ukraine)
Actors: Uliana Elina (Czech Republic), Tatiana Spyrgyash (Belarus) - Woman; Ilya Uglava (Czech Republic), Alexander Bokovets (Belarus) - Man
Producers: Konstantin Fam, Uriy Igrusha, Michail Bykov, Alex A. Petruhin, Tanya Dovidovskaya, Krzysztof Wiech, Tania Rakhmanova, Alexey Timofeev, Aleksandr Kulikov, Igor Lopatonok
Cameramen: Asen Shopov (Czech Republic), Sergey Novikov (Belarus), Dzmitry Shulpin (Belarus), Otabek Djuraev (France), Marec Gajczak (Poland)
Production designer: Philip Lagunovich-Cherepko (Belarus), Jarmila Konecna (Czech Republic)
Art features
The main object in the frame is the pair of shoes. There is no dialogue, and no human faces are shown, only the characters' shoes. The film is accompanied by an original soundtrack inspired by Jewish folk motifs.
Cultural effect
In 2013, Efim Rezvan, the Deputy Director of the Peter the Great Museum of Anthropology and Ethnography, presented the film with an honorary diploma on behalf of the Museum "for bright creative contribution to the museum's exhibition program and the preservation of memory".
Together with the Department of Human Rights and the Department of Education, Nuremberg plans to create an educational program for school children in Germany.
Accolades
Awards
Monaco International Film Festival (Monaco), Best Short Film, Best Director, Best Original Music, Best Producer, Best Cinematographer, Angel Peace Award
Grand Prix Video Festival Imperia (Italy)
Pitching project was held at the 65th Cannes Film Festival
The film was invited into the collection of Yad Vashem (Israel); its reference number is V-6195
Radiant Angel Festival (Russia), Best Live Action Short Film Award
Artkino Festival (Russia), The best experimental film Award
1st place of the festival "Vstrechi na Vjatke" (Russia, Kirov)
Special prize magazine "NewMag", the festival "Golden Apricot" (Armenia)
Festival KONIK (Russia), the prize For the contribution to the short film development in Russia
Festival KONIK (Russia), the prize For the musical solution
Opening Film Festival special program in Haifa (Israel)
Participations
Russian Cinema Week (Israel)
Doors to Russian Cinema (USA)
Clermont-Ferrand (France)
Listapad (Belarus)
Badalona (Spain)
Coniminuticontati (Italy)
Atlanta Jewish Film Festival (USA)
St. Anne (Russia)
Kinotavr (Sochi)
Kinolikbez (Barnaul)
Kinoshock (Anapa)
Short (Kaliningrad)
Konik (Moscow) - Moscow Premiere film closing
Official partners
Federation of Jewish Communities of Russia
Documentary Film Center
Youth Center of the Russian Cinema Union
Roskino
Belarusian Ministry of Culture.
See also
Witnesses (2018 film)
Brutus (2016 film)
Violin (2017 film)
References
External links
2012 short films
2012 films
Holocaust films
2012 drama films
Russian drama films
2010s Russian-language films
War epic films
Epic films based on actual events
Rescue of Jews during the Holocaust
Russian short films
Shoes in culture
Syracuse dish

A Syracuse dish or Syracuse watch glass is a shallow, circular, flat-bottomed dish of thick glass. Usually, it is 67 mm in outer diameter and 52 mm in inner diameter.
Background
Nathan Cobb, one of the pioneers of nematology in the United States, was the first to suggest using the Syracuse dish for counting nematodes, in 1918.
Uses
It is used as laboratory equipment in biology for either storage or culturing.
References
Laboratory glassware
Microbiology equipment
Conductivity near the percolation threshold

Conductivity near the percolation threshold, in physics, occurs in a mixture between a dielectric and a metallic component. The conductivity and the dielectric constant of this mixture show critical behavior if the fraction of the metallic component reaches the percolation threshold.
The behavior of the conductivity near this percolation threshold shows a smooth changeover from the conductivity of the dielectric component to the conductivity of the metallic component. This behavior can be described using two critical exponents, s and t, whereas the dielectric constant diverges as the threshold is approached from either side. To include the frequency-dependent behavior of electronic components, a resistor-capacitor model (R-C model) is used.
Geometrical percolation
For describing such a mixture of a dielectric and a metallic component we use the model of bond-percolation.
On a regular lattice, the bond between two nearest neighbors can either be occupied with probability p or not occupied with probability 1 − p. There exists a critical value pc. For occupation probabilities p > pc, an infinite cluster of the occupied bonds is formed. This value is called the percolation threshold. The region near this percolation threshold can be described by the two critical exponents ν and β (see Percolation critical exponents).
With these critical exponents we have the correlation length,

ξ ∝ |p − pc|^(−ν),

and the percolation probability, P:

P ∝ (p − pc)^β for p > pc.
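The two scaling laws, ξ ∝ |p − pc|^(−ν) and P ∝ (p − pc)^β, can be evaluated numerically. The threshold pc ≈ 0.2488 (simple-cubic bond percolation) and the exponents ν ≈ 0.88, β ≈ 0.41 are standard three-dimensional literature values, used here purely for illustration:

```python
def correlation_length(p, pc=0.2488, nu=0.88, xi0=1.0):
    """xi ~ xi0 * |p - pc|**(-nu): diverges as p approaches pc."""
    return xi0 * abs(p - pc) ** (-nu)

def percolation_probability(p, pc=0.2488, beta=0.41, p0=1.0):
    """P ~ p0 * (p - pc)**beta above pc; zero below the threshold."""
    return p0 * (p - pc) ** beta if p > pc else 0.0

# Approaching pc from above: the correlation length grows,
# while the strength of the infinite cluster shrinks.
for p in (0.35, 0.30, 0.26):
    print(p, round(correlation_length(p), 2),
          round(percolation_probability(p), 3))
```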
Electrical percolation
For the description of electrical percolation, we identify the occupied bonds of the bond-percolation model with the metallic component, having a conductivity σm. The dielectric component, with conductivity σd, corresponds to the non-occupied bonds. We consider the two following well-known cases of a conductor-insulator mixture and a superconductor–conductor mixture.
Conductor-insulator mixture
In the case of a conductor-insulator mixture we have σ_d = 0. This case describes the behaviour if the percolation threshold is approached from above:
σ(p) ∝ σ_m (p − p_c)^t for p > p_c
Below the percolation threshold there is no conductivity, because of the perfectly insulating component and the merely finite metallic clusters. The exponent t is one of the two critical exponents for electrical percolation.
Superconductor–conductor mixture
In the other well-known case of a superconductor-conductor mixture we have σ_m = ∞. This case is useful for the description below the percolation threshold:
σ(p) ∝ σ_d (p_c − p)^(−s) for p < p_c
Above the percolation threshold the conductivity becomes infinite, because of the infinite superconducting clusters. This case also yields the second critical exponent s for electrical percolation.
Conductivity near the percolation threshold
In the region around the percolation threshold, the conductivity assumes a scaling form:
σ(p, h) ∝ σ_m |p − p_c|^t Φ_±(h |p − p_c|^(−(t+s))), with the conductivity ratio h = σ_d/σ_m and the scaling functions Φ_+ (for p > p_c) and Φ_− (for p < p_c).
At the percolation threshold, the conductivity reaches the value:
σ(p_c, h) ∝ σ_m h^u, with u = t/(t + s).
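The two limiting power laws and the crossover value at the threshold can be sketched as follows; the exponent values s ≈ 0.73 and t ≈ 2.0 and the threshold p_c ≈ 0.2488 (simple cubic bond percolation) are typical literature values assumed here for illustration, not taken from this article:

```python
def conductivity(p, pc=0.2488, sigma_m=1.0, sigma_d=1e-6, s=0.73, t=2.0):
    """Leading-order conductivity of a conductor-insulator-like mixture.

    sigma_m, sigma_d are the component conductivities; s and t the critical
    exponents (illustrative 3-d values, assumed not exact).
    """
    h = sigma_d / sigma_m                    # conductivity ratio
    if p > pc:                               # metallic side: sigma_m (p - pc)^t
        return sigma_m * (p - pc) ** t
    if p < pc:                               # dielectric side: sigma_d (pc - p)^(-s)
        return sigma_d * (pc - p) ** (-s)
    return sigma_m * h ** (t / (s + t))      # at pc: sigma_m h^u with u = t/(s+t)

for p in (0.10, 0.2488, 0.40):
    print(f"p = {p}: sigma ~ {conductivity(p):.3g}")
```

The printed values illustrate the many-orders-of-magnitude change in conductivity as p crosses the threshold.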
Values for the critical exponents
Different sources report somewhat different values for the critical exponents s, t and u in three dimensions.
Dielectric constant
The dielectric constant also shows critical behavior near the percolation threshold. For the real part of the dielectric constant we have ε′ ∝ |p − p_c|^(−s), which diverges as the threshold is approached from either side.
The R-C model
Within the R-C model, the bonds in the percolation model are represented by pure resistors with conductivity σ_m for the occupied bonds and by perfect capacitors with conductivity σ_d = iωC (where ω represents the angular frequency) for the non-occupied bonds. The scaling law then takes the form:
σ(p, ω) ∝ σ_m |p − p_c|^t Φ_±(iωτ)
This scaling law contains the purely imaginary scaling variable iωτ and a critical time scale
τ ∝ |p − p_c|^(−(s + t)),
which diverges as the percolation threshold is approached from above as well as from below.
Conductivity for dense networks
For a dense network, the concepts of percolation are not directly applicable and the effective resistance is calculated in terms of geometrical properties of the network. Assuming that the edge length is much smaller than the electrode spacing and that the edges are uniformly distributed, the potential can be considered to drop uniformly from one electrode to the other.
The sheet resistance of such a random network, R_s, can then be written in terms of the edge (wire) density n_E, the resistivity ρ, and the width w and thickness t of the edges (wires).
See also
Percolation theory
References
Critical phenomena | Conductivity near the percolation threshold | Physics,Materials_science,Mathematics | 853 |
230,661 | https://en.wikipedia.org/wiki/Alec%20Issigonis | Sir Alexander Arnold Constantine Issigonis (Greek: σερ Άλεκ, Αλέξανδρος Αρνόλδος Κωνσταντίνος Ισηγόνης) (18 November 1906 – 2 October 1988) was a British-Greek automotive designer. He designed the Mini, launched by the British Motor Corporation in 1959, which was voted the second most influential car of the 20th century in 1999.
Early life and education
Issigonis was born on 18 November 1906 in the Ottoman port city of Smyrna, the only child of Constantine Issigonis and Hulda Prokopp. His paternal grandfather, Demosthenis, had migrated to Smyrna from the Greek island of Paros in the 1830s and Constantine was a successful and wealthy shipbuilding engineer. His maternal ancestors originated in the Kingdom of Württemberg. It was through his mother's kinships that Issigonis was a first cousin once removed to BMW and Volkswagen director Bernd Pischetsrieder.
As British subjects, his father having naturalised whilst studying engineering in London in 1897, Issigonis and his parents were evacuated to Malta by the Royal Navy in September 1922 ahead of the Great Fire of Smyrna and the Turkish capture of Smyrna at the end of the Greco-Turkish War. His father died shortly after and Issigonis and his mother moved to the United Kingdom in 1923. Issigonis studied engineering at Battersea Polytechnic in London. Having failed his mathematics exams three times, subsequently declaring it 'the most uncreative subject you can study', Issigonis decided to enter the University of London External Programme to complete his university education.
Career
Despite the political upheavals the Issigonis family lived an affluent and comfortable life. Issigonis was maintained by his family so that he could pursue racing sport as a hobby. Issigonis went into the motor industry as an engineer and designer working for Humber Limited. He competed successfully in motor racing during the 1930s and 1940s. Starting around 1930, he raced a supercharged "Ulster" Austin Seven, later fitting it with a front axle of his own design, leading to employment at Austin. This greatly modified machine was replaced with a radical special completed in 1939, constructed of plywood laminated in aluminium sheeting. The suspension was also of advanced design, with trailing arm front suspension attached to a steel cross-member, and swing axle rear, all with rubber springs made of catapult elastic. This car was remarkably light, weighing 587 lb, of which the engine contributed 252 lb. By the time the chassis had been completed (hard labour; it was all done by hand, no power tools), Issigonis had moved to Morris Motors Limited, but Austin supplied a "works" specification supercharged side-valve engine. Issigonis usually won, even when entered in the 1100cc class if there was no 750cc category. Most events entered were sprints, but he also raced at circuits.
Morris Motors
In 1936 Issigonis was given the opportunity to work for a leading motor manufacturer as suspension designer. Morris Motors was based in Cowley near Oxford. Issigonis worked on an independent front suspension system for the Morris 10. The war prevented this design from going into production but it was later used on the MG Y-type. He worked on various projects for Morris through the war and towards its end started work on an advanced post war car codenamed Mosquito that became the Morris Minor, which was produced from 1948 until 1971.
Alvis Cars
In 1952, just as the British Motor Corporation (BMC) was formed by the merger of Morris and Austin, he moved to Alvis Cars where he designed an advanced saloon with all-aluminium V-8 engine, and experimented with interconnected independent suspension systems. This prototype was never manufactured because its cost was beyond Alvis's resources.
BMC
At the end of 1955, Issigonis was recruited back into BMC, this time into the Austin plant at Longbridge, by its chairman Sir Leonard Lord, to design a new model family of three cars. The XC (experimental car) code names assigned for the new cars were XC/9001, for a large comfortable car, XC/9002, for a medium-sized family car, and XC/9003, for a small town car. During 1956 Issigonis concentrated on the larger two cars, producing several prototypes for testing.
The Mini
However, at the end of 1956, following fuel rationing brought about by the Suez Crisis, Issigonis was ordered by Lord to bring the smaller car, XC/9003, to production as quickly as possible. By early 1957, prototypes were running, and by mid-1957 the project was given an official drawing office project number (ADO15) so that the thousands of drawings required for production could be produced. In August 1959 the car was launched as the Morris Mini Minor and the Austin Seven, which soon became known as the Austin Mini. In later years, the car would become known simply as the Mini. Due to time pressures, the interconnected suspension system that Issigonis had planned for the car was replaced by an equally novel, but cruder, rubber cone system designed by Alex Moulton. The Mini went on to become the best selling British car in history with a production run of 5.3 million cars. BMC and Issigonis were awarded the Dewar Trophy by the Royal Automobile Club (RAC) for the innovative design and production of the Mini. This ground-breaking design, with its front wheel drive, transverse engine, sump gearbox, 10-inch wheels, and phenomenal space efficiency, was still being manufactured in 2000 and has been the inspiration for almost all small front-wheel drive cars produced since the early 1960s.
In 1961, with the Mini gaining popularity, Issigonis was promoted to Technical Director of BMC. He continued to be responsible for his original XC projects. XC/9002 became ADO16 and was launched as the Morris 1100 with the Hydrolastic interconnected suspension system in August 1962. XC/9001 became ADO17 and was launched, also with the Hydrolastic suspension system, as the Austin 1800 in October 1964. The same principle was carried over for his next production car, the Austin Maxi. By then, however, he had become more aware of the cost considerations of vehicle manufacture and of the in-service warranty costs which were crippling BMC. It certainly appeared by the Maxi development era that Issigonis wanted to "do his own thing" as cost cutting and development costs spiraled. He instead pursued research work on his Mini replacement, the 9X, with its compact transverse engine. He was also responsible for the development of the Mini Moke, initially intended for military use, which later achieved cult status.
With the creation of British Leyland in 1969, new chairman Lord Stokes quickly sidelined Issigonis and made him into what was termed "Special Developments Director", replacing him with Harry Webster as the new Technical Director (Small/Medium cars). Stokes was heard on his appointment to say: "We'll sharp sort this bloke Issigonis out!".
Acclaim as an engineer
Issigonis was nicknamed "the Greek god" by his contemporaries. Whilst he is most famous for his creation of the Mini, he was most proud of his participation in the design of the Morris Minor. He considered it to be a vehicle that combined many of the luxuries and conveniences of a good motor car with a price suitable for the working classes, in contrast to the Mini, which was a spartan design. Issigonis often commented to friends and colleagues that the Austin 1800 (ADO17) was the design he was most proud of, even though it was never as commercially successful as his three preceding designs.
Issigonis officially retired from the motor industry in 1971, although he continued working until shortly before his death in 1988 at his house in Edgbaston, Birmingham. He was cremated at the Lodge Hill Cemetery in nearby Selly Oak.
Legacy
On 15 October 2006 a rally was held at the Heritage Motor Centre in Gaydon, England, to celebrate the centenary of Issigonis's birth.
There is a road named "Alec Issigonis Way" in the Oxford Business Park on the former site of the Morris Motors factory in Cowley, Oxfordshire.
Honours
Issigonis was appointed a Commander of the Order of the British Empire (CBE) in the 1964 Birthday Honours.
In 1964 Issigonis was appointed a Royal Designer for Industry (RDI).
He was elected a Fellow of the Royal Society (FRS) in 1967.
He was granted the rank of Knight Bachelor in the 1969 Birthday Honours and was knighted by Queen Elizabeth II during an investiture ceremony at Buckingham Palace on 22 July of the same year.
In 2003 he was inducted into the Automotive Hall of Fame in the United States.
The Weeny Issi, a car in the 2013 video game Grand Theft Auto V based on the Mini, is named in his honour.
Some of his cars
1948 Morris Minor
1948 Morris Oxford MO
1959 Mini
1962 BMC ADO16
1964 BMC ADO17
1969 Austin Maxi
Notes
References
External links
Alec Issigonis Automotive Designer (1906–1988) from the website of the Design Museum in London
Portraits of Sir Alec Issigonis at the National Portrait Gallery (London)
1906 births
1988 deaths
English people of Greek descent
English people of German descent
Smyrniote Greeks
Emigrants from the Ottoman Empire to the United Kingdom
Alumni of the University of Surrey
Alumni of the University of London
Alumni of University of London Worldwide
Knights Bachelor
Commanders of the Order of the British Empire
Fellows of the Royal Society
Royal Designers for Industry
People in the automobile industry
British automotive engineers
British automobile designers
British automotive pioneers
British industrial designers
Mini (marque)
Brighton Speed Trials people
Industrial design | Alec Issigonis | Engineering | 2,011 |
44,764,591 | https://en.wikipedia.org/wiki/Nucleus%20%28order%20theory%29 | In mathematics, and especially in order theory, a nucleus is a function F on a meet-semilattice A such that (for every p, q in A): (1) p ≤ F(p); (2) F(F(p)) = F(p); (3) F(p ∧ q) = F(p) ∧ F(q).
Every nucleus is evidently a monotone function.
Frames and locales
Usually, the term nucleus is used in the theory of frames and locales (when the semilattice is a frame).
Proposition: If F is a nucleus on a frame A, then the poset of fixed points of F, with order inherited from A, is also a frame.
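As a concrete, brute-force illustration (an example constructed here, not from the original text): on the powerset of a finite set, ordered by inclusion with intersection as the meet, the map j(S) = S ∪ {x0} for a fixed element x0 is a nucleus, being inflationary, idempotent, and meet-preserving. An exhaustive check confirms this:

```python
from itertools import combinations

def subsets(xs):
    """All subsets of xs as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

X = {1, 2, 3}
P = subsets(X)                      # the powerset, a meet-semilattice under intersection
j = lambda S: frozenset(S) | {1}    # candidate nucleus: adjoin the fixed element 1

inflationary    = all(S <= j(S) for S in P)                     # S <= j(S)
idempotent      = all(j(j(S)) == j(S) for S in P)               # j(j(S)) = j(S)
meet_preserving = all(j(S & T) == j(S) & j(T) for S in P for T in P)
print(inflationary, idempotent, meet_preserving)
```

The fixed points of this j are exactly the subsets containing 1, which again form a frame, in line with the proposition above.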
References
Order theory | Nucleus (order theory) | Mathematics | 102 |
24,808,275 | https://en.wikipedia.org/wiki/Fomite | A fomite () or fomes () is any inanimate object that, when contaminated with or exposed to infectious agents (such as pathogenic bacteria, viruses or fungi), can transfer disease to a new host.
Transfer of pathogens by fomites
A fomite is any inanimate object (also called passive vector) that, when contaminated with or exposed to infectious agents (such as pathogenic bacteria, viruses or fungi), can transfer disease to a new host. Contamination can occur when one of these objects comes into contact with bodily secretions, like nasal fluid, vomit or feces from landed toilet flushing aerosols (Toilet Plume). Many common objects can sustain a pathogen until a person comes in contact with the pathogen, increasing the chance of infection. The likely objects are different in a hospital environment than at home or in a workplace. Fomites such as splinters, barbed wire or farmyard surfaces, including soil, feeding troughs or barn beams, have been implicated as sources of virus.
Hospital fomites
For humans, common hospital fomites are skin cells, hair, clothing, and bedding.
Fomites are associated particularly with hospital-acquired infections (HAIs), as they are possible routes to pass pathogens between patients. Stethoscopes and neckties are common fomites associated with health care providers. This worries epidemiologists and hospital practitioners because of the growing number of microbes resistant to disinfectants or antibiotics (the so-called antimicrobial resistance phenomenon).
Basic hospital equipment, such as IV drip tubes, catheters, and life support equipment, can also be carriers, when the pathogens form biofilms on the surfaces. Careful sterilization of such objects prevents cross-infection. Used syringes, if improperly handled, are particularly dangerous fomites.
Daily life
In addition to objects in hospital settings, other common fomites for humans are cups, spoons, pencils, bath faucet handles, toilet flush levers, door knobs, light switches, handrails, elevator buttons, television remote controls, pens, touch screens, common-use phones, keyboards and computer mice, coffeepot handles, countertops, drinking fountains, and any other items that may be frequently touched by different people and infrequently cleaned.
Cold sores, hand–foot–mouth disease, and diarrhea are some examples of illnesses easily spread by contaminated fomites. The risk of infection by these and other diseases through fomites can be greatly reduced by simply washing one's hands. When two children in one household have influenza, more than 50% of shared items are contaminated with virus. In 40–90% of cases, adults infected with rhinovirus have it on their hands.
Transmission of specific viruses
Researchers have discovered that smooth (non-porous) surfaces like door knobs transmit bacteria and viruses better than porous materials like paper money because porous, especially fibrous, materials absorb and trap the contagion, making it harder to contract through simple touch. Nonetheless, fomites may include soiled clothes, towels, linens, handkerchiefs, and surgical dressings.
SARS-CoV-2 was found to be viable on various surfaces from 4 to 72 hours under laboratory conditions. On porous surfaces, studies report inability to detect viable virus within minutes to hours; on non-porous surfaces, viable virus can be detected for days to weeks. However, further research called into question the accuracy of such tests, instead finding fomite transmission of SARS-Cov-2 in real world settings is extremely rare if not impossible.
Transmission can also occur via contact with aerosolized virus (large-droplet spread) generated by talking, sneezing, coughing, vomiting, or toilet flushing and the resulting toilet plume, or via contact with airborne virus that settles after disturbance of a contaminated fomite (e.g. shaking a contaminated blanket). During the first 24 hours, the risk can be reduced by increasing ventilation and waiting as long as possible before entering the space (at least several hours, based on documented airborne transmission cases), and by using personal protective equipment (including any protection needed for the cleaning and disinfection products).
Research published in 2007 showed that the influenza virus was still active on stainless steel 24 hours after contamination. Although the virus survives on hands for only about five minutes, constant contact with a contaminated fomite makes infection very likely. Transfer efficiency depends not only on the surface, but mainly on the pathogen type. For example, avian influenza survives on both porous and non-porous materials for 144 hours.
Smallpox was long supposed to be transmitted either by direct contact or by fomites. However A. R. Rao’s careful researches in the 1960s, before smallpox was declared extinct, found little truth in the traditional belief that smallpox can be spread at a distance through infected clothing or bedding. He concluded that it normally invaded via the lungs. Rao recognized that the virus can be detected on inanimate objects, and therefore might in some cases be transmitted by them, but he concluded that “smallpox is still an inhalation disease . . . the virus has to enter through the nose by inhalation.”
In 2002 Donald K. Milton published a review of existing research upon the transmission of smallpox and upon recommendations for controlling its spread in the event of its use in biological war. He agreed, citing Rao, Fenner and others, that “careful epidemiologic investigation rarely implicated fomites as a source of infection”; and broadly agreed with current recommendations for control of secondary smallpox infections, which emphasized transmission via “expelled droplets” upon the breath. He noted that shed scabs (which might be spread via bedsheets or other fomites) often contain “large quantities of virus”, but suggested that the “apparent lack of infectiousness of scab associated virus” might be due to “encapsulation with inspissated pus”.
Contaminated needles are the most common fomite that transmits HIV. Fomites from dirty needles also easily spread Hepatitis B.
Etymology
The Italian scholar and physician Girolamo Fracastoro appears to have first used the Latin word fomes, meaning "tinder", in this sense in his essay on contagion, De Contagione et Contagiosis Morbis, published in 1546: "By fomes I mean clothes, wooden objects, and things of that sort, which though not themselves corrupted can, nevertheless, preserve the original germs of the contagion and infect by means of these".
English usage of fomes is documented since 1658. The English word fomite, which has been in use since 1859, is a back-formation from the plural fomites (originally borrowed from the Latin plural fōmĭtēs of fōmĕs). Over time, the English-language pronunciation of the plural fomites changed, which led to the creation of a new singular fomite.
In Latin, fomes (genitive: fomitis, plural fomites, stem fomit-) is a third-declension T-stem noun. Such nouns, like miles/militis or comes/comitis, typically lose their T (thereby becoming a syllable shorter) in the nominative singular, but retain it in all other cases. In languages derived from Latin, the French fomite, Italian fomite, Spanish fómite and Portuguese fómite or fômite, retain the full stem.
See also
Focal infection theory
Focus of infection
Disease vector
References
Bibliography
External links
General characteristics and roles of fomites in viral transmission, American Society for Microbiology, 1969
Infectious diseases
Epidemiology
Hygiene
Medical terminology | Fomite | Environmental_science | 1,612 |
44,470,705 | https://en.wikipedia.org/wiki/Ziegler%20process | In organic chemistry, the Ziegler process (also called the Ziegler-Alfol synthesis) is a method for producing fatty alcohols from ethylene using an organoaluminium compound. The reaction produces linear primary alcohols with an even numbered carbon chain. The process uses an aluminum compound to oligomerize ethylene and allow the resulting alkyl group to be oxygenated. The usually targeted products are fatty alcohols, which are otherwise derived from natural fats and oils. Fatty alcohols are used in food and chemical processing. They are useful due to their amphipathic nature. The synthesis route is named after Karl Ziegler, who described the process in 1955.
Process details
The Ziegler alcohol synthesis involves oligomerization of ethylene using triethylaluminium followed by oxidation. The triethylaluminium is produced by action of aluminium, ethylene, and hydrogen gas. In the production process, two-thirds of the triethylaluminium produced is recycled back into the reactor, and only one-third is used to produce the fatty alcohols. The recycling step is used to produce triethylaluminium at a higher yield and with less time. Triethylaluminium reacts with ethylene to form higher molecular weight trialkylaluminium. The number of equivalents of ethylene n equals the total number of monomer units being grown on the initial ethylene chains, where (n = x + y + z), and x, y, and z are the number of ethylene units per chain. Trialkylaluminium is oxidized with air to form aluminum alkoxides, and finally hydrolyzed to aluminum hydroxide and the desired alcohols.
Al + 3 ethylene + 1.5 H2 → Al(C2H5)3
Al(C2H5)3 + n ethylene → Al((CH2CH2)nCH2CH3)3
Al((CH2CH2)nCH2CH3)3 + 1.5 O2 → Al(O(CH2CH2)nCH2CH3)3
Al(O(CH2CH2)nCH2CH3)3 + 3 H2O → Al(OH)3 + 3 CH3CH2(CH2CH2)nOH
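Since the product alcohol CH3CH2(CH2CH2)nOH carries 2n + 2 carbons, the (always even) carbon number and molar mass for a given degree of chain growth can be computed directly. This sketch is added for illustration and uses standard atomic masses; it is not data from the original text:

```python
def product_alcohol(n):
    """Carbon count and molar mass (g/mol) of CH3CH2(CH2CH2)nOH,
    the alcohol obtained after inserting n ethylene units into one Al-ethyl chain."""
    carbons = 2 * n + 2              # the starting ethyl group contributes two carbons
    hydrogens = 2 * carbons + 2      # a saturated alcohol C_k H_(2k+2) O
    return carbons, 12.011 * carbons + 1.008 * hydrogens + 15.999

for n in (3, 5, 7):
    c, m = product_alcohol(n)
    print(f"n = {n}: C{c} fatty alcohol, ~{m:.1f} g/mol")
```

The even carbon numbers produced here match the article's statement that the process yields linear primary alcohols with even-numbered chains.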
The temperature of the reaction influences the molecular weight of alcohol growth. Temperatures in the range of 60-120°C form higher molecular weight trialkylaluminium while higher temperatures (e.g., 120-150 °C) cause thermal displacement reactions that afford α-olefin chains. Above 150 °C, dimerization of the α-olefins occurs.
Applications
Aluminum hydroxide, the byproduct of the synthesis, can be dehydrated to give aluminium oxide, which, at high purities, has a high commercial value. One modification of the Ziegler process is called the EPAL process. In this process, chain growth is optimized to produce alcohols with a narrow molecular weight distribution. Syntheses of other alcohols use the Ziegler and the updated EPAL process, such as the transalkylation of styrene to form 2-phenylethanol. Diethylaluminum hydride can be employed in place of triethylaluminium.
See also
Guerbet reaction, a route for the production of branched fatty alcohols
References
Fatty alcohols
Chemical processes | Ziegler process | Chemistry | 712 |
147,973 | https://en.wikipedia.org/wiki/Creator%20code | A creator code is a mechanism introduced in the classic Mac OS to link a data file to the application program which created it. The similar type code held the file type, like "TEXT". Together, the type and creator indicated what application should be used to open a file, similar to (but richer than) the file extensions in other operating systems.
Creator codes are four-byte OSTypes. They allow applications to launch and open a file whenever any of their associated files is double-clicked. Creator codes could be any four-byte value, but were usually chosen so that their ASCII representation formed a word or acronym. For example, the creator code of the HyperCard application and its associated "stacks" is represented in ASCII as WILD, from the application's original name of WildCard. Occasionally they represented inside jokes. For instance, the Marathon computer game had a creator code of (the approximate length, in miles, of a marathon) and Marathon 2: Durandal had a creator code of .
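An OSType is simply four Mac Roman (ASCII-range) bytes read as a 32-bit big-endian integer; a brief sketch (with helper names invented here) shows the correspondence:

```python
import struct

def ostype_to_int(code):
    """Pack a four-character OSType such as 'WILD' into its 32-bit big-endian value."""
    raw = code.encode("mac-roman")
    if len(raw) != 4:
        raise ValueError("an OSType is exactly four bytes")
    return struct.unpack(">I", raw)[0]

def int_to_ostype(value):
    """Recover the four-character spelling from the 32-bit value."""
    return struct.pack(">I", value).decode("mac-roman")

print(hex(ostype_to_int("TEXT")))   # 0x54455854
print(int_to_ostype(0x54455854))    # TEXT
```

This is why codes were usually chosen to spell a readable word: the raw integer and the four-character string are the same bytes.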
The bindings are stored inside the resource fork of the application as BNDL and FREF resources. These resources maintained the creator code as well as the association between each type code and an icon. The OS collected this data from the files when they were copied between mediums, thereby building up the list of associations and icons as software was installed onto the machine. Periodically this "desktop database" would become corrupted, and had to be fixed by "rebuilding the desktop database."
The key difference between extensions and Apple's system is that file type and file ownership bindings are kept distinct. This allows files of the same type, TEXT say, to be written by different applications. Although any application can open anyone else's TEXT file, by default, opening the file will open the original application that created it. With the extensions approach, this distinction is lost: all files with a .txt extension will be mapped to a single text-editing application of the user's choosing. A more obvious advantage of this approach is allowing double-click launching of specialized editors for more complex but common file types, like .csv or .html. This can also represent a disadvantage, as in the illustration above, where double-clicking the four mp3 files would launch and play the files in four different music applications instead of queuing them in the user's preferred player application.
macOS retains creator codes, but supports extensions as well. However, beginning with Mac OS X Snow Leopard, creator codes are ignored by the operating system. Creator codes have been internally superseded by Apple's Uniform Type Identifier scheme, which manages application and file type identification as well as type codes, creator codes and file extensions.
To avoid conflicts, Apple maintained a database of creator codes in use. Developers could fill out an online form to register their codes. Apple reserves codes containing all lower-case ASCII characters for its own use.
Creator codes are not readily accessible for users to manipulate, although they can be viewed and changed with certain software, most notably the macOS command line tools GetFileInfo and SetFile which are installed as part of the developer tools into /Developer/Tools.
See also
Type code
Uniform Type Identifier
References
External links
How application binding policy changed in Snow Leopard
Macintosh operating systems
Metadata | Creator code | Technology | 666 |
14,432,911 | https://en.wikipedia.org/wiki/Puppet%20%28software%29 | Puppet is a software configuration management tool developed by Puppet Inc., which is owned by Perforce, which is owned in turn by private equity firms. Puppet is used to manage stages of the IT infrastructure lifecycle.
Puppet uses an open-core model; its free-software version was released under version 2 of the GNU General Public License (GPL) until version 2.7.0, and later releases use the Apache License, while Puppet Enterprise uses a proprietary license.
Puppet and Puppet Enterprise operate on multiple Unix-like systems (including Linux, Solaris, BSD, Mac OS X, AIX, HP-UX) and have Microsoft Windows support. Puppet itself is written in Ruby. Facter, Puppet’s cross-platform system profiling library, is written in C++. Puppet Server and PuppetDB are written in Clojure.
Design
Puppet consists of a custom declarative language to describe system configuration.
Puppet is model-driven, requiring limited programming knowledge to use.
Puppet is designed to manage the configuration of Unix-like and Microsoft Windows systems declaratively.
Architecture
Puppet follows a client-server architecture. The client is known as an agent and the server is known as the master. For testing and simple configuration, Puppet can also be used as a stand-alone application run from the command line.
Puppet Server is installed on one or more servers, and Puppet Agent is installed on all the machines to be managed. Puppet Agents communicate with the server and fetch configuration instructions. The Agent then applies the configuration on the system and sends a status report to the server.
Puppet resource syntax:
type { 'title':
attribute => value
}
Example resource representing a Unix user:
user { 'harry':
ensure => present,
uid => '1000',
shell => '/bin/bash',
home => '/home/harry'
}
Vendor and Perforce acquisition
Puppet's vendor, Puppet Inc., is a privately held information technology (IT) automation software company based in Portland, Oregon, USA.
In 2005, Puppet was founded by former CEO Luke Kanies. On Jan. 29, 2019 Yvonne Wassenaar replaced Sanjay Mirchandani as CEO. Wassenaar previously worked at Airware, New Relic and VMware. In February 2011 Puppet released its first commercial product, Puppet Enterprise, built on its open-source base, with some extra commercial components. Puppet purchased the infrastructure automation firm Distelli in September 2017. Puppet rebranded Distelli's VM Dashboard (a continuous integration / continuous delivery product) as Puppet Pipelines for Applications, and K8s Dashboard as Puppet Pipelines for Containers. The products were made generally available in October, 2017. In May 2018, Puppet released Puppet Discovery, a tool to discover and manipulate resources in hybrid networks. In June 2018, Puppet raised an additional $42 million for a total of $150 million in funding. The round was led by Cisco and included Kleiner Perkins, True Ventures, EDBI, and VMware. Puppet's partners include VMware, Amazon Web Services, Cisco, OpenStack, Microsoft Azure, Eucalyptus, and Zenoss.
In April 2022, it was announced Puppet had been acquired by the Minneapolis-headquartered software developer, Perforce. The company subsequently laid off 15% of Puppet's workforce in Portland.
See also
Comparison of open-source configuration management software
CFEngine
References
External links
Companies based in Portland, Oregon
American companies established in 2005
Privately held companies based in Oregon
Information technology companies of the United States
2005 establishments in Oregon
Software companies established in 2005
2005 software
Orchestration software
Configuration management
Cross-platform free software
Free software programmed in Ruby
Software using the Apache license
Virtualization software for Linux | Puppet (software) | Engineering | 760 |
45,668,983 | https://en.wikipedia.org/wiki/Sludge%20volume%20index | Sludge Volume Index (SVI) is a process control parameter used to describe the settling characteristics of sludge in the aeration tank of an activated sludge process. It was introduced by Mohlman in 1934 and has become one of the standard measures of the physical characteristics of activated sludge processes. The SVI is often used to assess if process performance issues are related to the proliferation of problematic filamentous organisms that cause poor settling in secondary clarification processes.
It is defined as 'the volume (in mL) occupied by 1 gram of activated sludge after settling the aerated liquid for 30 minutes' and can be calculated as follows:
SVI (mL/g) = settled sludge volume (mL/L) / mixed liquor suspended solids (MLSS) (mg/L) × 1000 (mg/g)
The sludge is often too thick and has to be diluted with clarified secondary effluent before analyzing the SVI. In the diluted SVI (DSVI) test, the sludge sample is serially diluted until the 30-minute sludge volume is less than 200 mL. Clarified (or filtered) secondary effluent is used to prevent osmotic stress on the biomass that may affect the outcome. The modified equation for determining the DSVI is:
DSVI (mL/g) = diluted settled sludge volume (mL/L) / MLSS (mg/L) × 1000 (mg/g) × (total volume (mL) / original sludge sample volume (mL))
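Both formulas translate directly into code; this small sketch (function names chosen here for illustration) computes the SVI and DSVI for a typical activated-sludge sample:

```python
def svi(settled_volume_ml_per_l, mlss_mg_per_l):
    """Sludge Volume Index (mL/g) from the 30-minute settled volume and the MLSS."""
    return settled_volume_ml_per_l / mlss_mg_per_l * 1000.0

def dsvi(diluted_settled_ml_per_l, mlss_mg_per_l, total_ml, sample_ml):
    """Diluted SVI: the same ratio, corrected by the dilution factor."""
    return diluted_settled_ml_per_l / mlss_mg_per_l * 1000.0 * (total_ml / sample_ml)

# 300 mL/L settled volume at an MLSS of 3000 mg/L gives an SVI of about 100 mL/g
print(svi(300, 3000))
# the same sludge diluted 1:1 (100 mL of sample made up to 200 mL total)
print(dsvi(150, 3000, 200, 100))
```

With a 1:1 dilution the settled volume roughly halves while the dilution factor doubles, so the DSVI of a well-behaved sample agrees with the undiluted SVI.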
References
Waste treatment technology | Sludge volume index | Chemistry,Engineering | 325 |
63,801,647 | https://en.wikipedia.org/wiki/Linear%20chain%20compound | In chemistry and materials science, linear chain compounds are materials composed of one-dimensional arrays of metal-metal bonded molecules or ions. Such materials exhibit anisotropic electrical conductivity.
Examples
Many linear chain compounds feature square planar complexes. One example is , which stack with distances of about 326 pm. Classic examples include Krogmann's salt and Magnus's green salt. Another example is the partially oxidized derivatives of . The otherwise ordinary complex gives an electrically conductive derivative upon oxidation, e.g., with bromine to give , where x ~0.05. Related chlorides have the formulae and .
In contrast to linear chain compounds, extended metal atom chains (EMACs) are molecules or ions that consist of a finite, often short, linear strings of metal atoms, surrounded by organic ligands.
One group of platinum chains is based on alternating cations and anions of (R = iPr, , ) and . These may be able to be used as vapochromic sensor materials, or materials which change color when exposed to different vapors.
Linear chains of Pd-Pd bonds protected by a "π-electron sheath" are known.
These olefin-stabilized metal chains not only constitute a significant contribution to the field of organometallic chemistry; both the complexes' metal-atom cores and the olefin ligands themselves can conduct a current.
Methodology
Some linear chain compounds are produced or fabricated by electrocrystallization. The technique is used to obtain single crystals of low-dimensional electrical conductors.
See also
platinum pop
References
Nanotechnology
Conductive polymers
Molecular electronics
Semiconductor material types | Linear chain compound | Chemistry,Materials_science,Engineering | 335 |
58,746,462 | https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Nuclear%20Science | IEEE Transactions on Nuclear Science is a peer-reviewed scientific journal published monthly by the IEEE. Sponsored by IEEE Nuclear and Plasma Sciences Society, the journal covers the theory, technology, and application areas related to nuclear science and engineering. Its editor-in-chief is Zane Bell (Oak Ridge National Laboratory).
The journal was founded in 1954 under the name Transactions of the Institute of Radio Engineers Professional Group on Nuclear Science and was retitled to IRE Transactions on Nuclear Science the following year. Its title was changed to its current name in 1963.
According to the Journal Citation Reports, the journal has a 2022 impact factor of 1.8.
References
External links
Nuclear Science, IEEE Transactions on
Nuclear physics journals
Academic journals established in 1954
English-language journals
Monthly journals | IEEE Transactions on Nuclear Science | Physics | 153 |
76,540,369 | https://en.wikipedia.org/wiki/Timekeeping%20on%20the%20Moon | Timekeeping on the Moon is an issue of synchronized human activity on the Moon and contact with such. The two main differences from timekeeping on Earth are the length of a day on the Moon, being the lunar day or lunar month, observable from Earth as the lunar phases, and the rate at which time progresses: a clock on the Moon gains on average 58.7 microseconds (0.0000587 seconds) per 24 hours, a consequence of gravitational time dilation arising from the differing gravitational potentials of the Moon and Earth.
History
The technology used for the timekeeping devices deployed to the Moon have varied over the decades. Several Omega Speedmasters have been on the Moon, synched to Central Standard Time (CST).
The Apollo Guidance Computer (AGC) kept a triple-precision count of time in a real-time clock driven by a quartz oscillator; a standby option (although never used) would allow it to update this count every 1.28 seconds (~0.78 hertz) — more often when not standing by. In addition to maintaining the clock cycle, computer timekeeping allowed the AGC to display the capsule's vertical and horizontal movements relative to the Moon's surface, in units of feet per second.
Coordinated Lunar Time
Coordinated Lunar Time (LTC) is a proposed primary lunar time standard for the Moon. In early April 2024, the White House asked NASA to work alongside US and international agencies for the purpose of establishing a unified standard time for the Moon and other celestial bodies by 2026. The White House's request, led by the Office of Science and Technology Policy (OSTP), called for a "Coordinated Lunar Time", which was first proposed by the European Space Agency in early 2023.
As of 2024, there is no lunar time standard. As a result, activities on the Moon are coordinated using the time zone of where a mission's headquarters is based. For example, the Apollo missions utilized the Central Time Zone as the missions were controlled from Houston, Texas. Likewise, Chinese activities on the Moon run on China Standard Time. As more countries are active on the Moon and interact with each other, a different, unified system will be needed.
As part of an ongoing global billionaire space race and a wider international space race between the United States and China, a need exists for a universal time-keeping benchmark so that lunar spacecraft and satellites are able to fulfill their respective missions with precision and accuracy. Due to differences in gravitational force and other factors, time passes fractionally faster on the Moon when observed from Earth.
Under the Artemis program, and supported by the Commercial Lunar Payload Services missions, astronauts and a proposed scientific moonbase are envisioned to take place on and around the lunar surface from the 2020s onwards. The proposed standard would therefore solve a timekeeping issue. According to OSTP Chief Arati Prabhakar, time would "appear to lose on average 58.7 microseconds per Earth-day and come with other periodic variations that would further drift Moon time from Earth time".
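As a back-of-the-envelope illustration of the average rate quoted above (ignoring the periodic variations), the cumulative Moon-Earth clock offset can be computed directly; the function is hypothetical, not part of any timekeeping standard:

```python
MICROSECONDS_PER_EARTH_DAY = 58.7  # average rate quoted by OSTP

def accumulated_drift_us(earth_days):
    """Cumulative Moon-vs-Earth clock offset after a number of Earth days,
    ignoring the smaller periodic variations mentioned in the proposal."""
    return MICROSECONDS_PER_EARTH_DAY * earth_days

# Over one year the clocks drift apart by roughly 21 milliseconds -- imperceptible
# to humans, but far beyond the precision lunar navigation will require.
print(round(accumulated_drift_us(365.25) / 1000, 2))  # 21.44 (milliseconds)
```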
The development of the standard is set to be a collaborative effort, initially amongst members of the Artemis Accords, but will be meant to apply globally. The initial proposal of the standard calls for four key features:
traceability back to Coordinated Universal Time,
accuracy sufficient for navigation and science,
resilience to disruptions, and
scalability to potential environments beyond cislunar space.
LunaNet, an upcoming lunar communications and navigation service under development with the European Space Agency, calls for a Lunar Time System Standard which the LTC is meant to address.
In August 2024, the US National Institute of Standards and Technology furthered development of the proposal by releasing a draft for the standard focused on defining the framework and mathematical model. The draft takes into account the gravitational differences on the Moon and was published in The Astronomical Journal.
See also
References
timekeeping
moon | Timekeeping on the Moon | Physics | 791 |
2,544,148 | https://en.wikipedia.org/wiki/National%20Aerospace%20Laboratory%20of%20Japan | The National Aerospace Laboratory of Japan (NAL), was established in July 1955. Originally known as the National Aeronautical Laboratory, it assumed its present name with the addition of the Aerospace Division in 1963. Since its establishment, it has pursued research on aircraft, rockets, and other aeronautical transportation systems, as well as peripheral technology. NAL was involved in the development of the autonomous ALFLEX aircraft and the cancelled HOPE-X spaceplane.
NAL has also endeavored to develop and enhance large-scale test facilities and make them available for use by related organizations, with the aim of improving test technology in these facilities.
The NAL began using computers to process data in the 1960s. It began working to develop supercomputer and numerical simulation technologies in order to execute full-scale numerical simulations. The NAL, in collaboration with Fujitsu, developed the Numerical Wind Tunnel parallel supercomputer system, which went into operation in 1993. From 1993 to 1995 it was the most powerful supercomputer in the world, and it remained among the top three in the world until 1997. It stayed in use for nine years after it began operations.
On October 1, 2003, NAL, which had focused on research and development of next-generation aviation, merged with the Institute of Space and Astronautical Science (ISAS), and the National Space Development Agency (NASDA) of Japan into one Independent Administrative Institution: the Japan Aerospace Exploration Agency (JAXA).
References
1955 establishments in Japan
JAXA
Aeronautics organizations
Aviation research institutes
Aerospace research institutes | National Aerospace Laboratory of Japan | Engineering | 315 |
17,972,479 | https://en.wikipedia.org/wiki/Luxemburg%E2%80%93Gorky%20effect | In radiophysics, the Luxemburg–Gorky effect (named after Radio Luxemburg and the city of Gorky (Nizhny Novgorod)) is a phenomenon of cross modulation between two radio waves, one of which is strong, passing through the same part of a medium, especially a conductive region of atmosphere or a plasma.
Current theory seems to be that the conductivity of the ionosphere is affected by the presence of strong radio waves. The strength of a radio wave returning from the ionosphere to a distant point is dependent on this conductivity level. Therefore, if station "A" is radiating a strong amplitude modulated radio signal all around, some of it will modulate the conductivity of the ionosphere above the station. Then if station "B" is also sending an amplitude modulated signal from another location, the part of station "B's" signal that passes through the ionosphere disturbed by station "A" to a receiver in line with both stations may have its strength modulated by the station "A" signal, even though the two are widely apart in frequency.
In other words, the ionosphere passes the station "B" signal with a strength that varies in step with the modulation (voice, etc.) of station "A." This re-modulation level of the station "B" signal is usually only a few percent, but is enough to make both stations audible. The interference (both stations simultaneously received) goes away as the receiver is tuned slightly away from the frequency of "B."
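As a rough numerical sketch of the effect, assuming a simple model in which a few percent of station A's audio modulation is transferred onto station B's carrier by the ionosphere (all parameter values below are illustrative):

```python
import math

def received_b(t, m_transfer=0.03, f_b=1.2e6, f_mod_a=1000.0):
    """Station B's carrier as seen by a receiver in line with both stations:
    its amplitude varies by a few percent (m_transfer) in step with station
    A's audio tone at f_mod_a, on top of B's own carrier at f_b."""
    envelope = 1.0 + m_transfer * math.sin(2 * math.pi * f_mod_a * t)
    return envelope * math.sin(2 * math.pi * f_b * t)

# The received amplitude swings between 0.97 and 1.03 of the nominal carrier,
# enough for station A's audio to be faintly audible on top of station B.
peak = max(abs(received_b(n / 4.8e6)) for n in range(4800))
print(round(peak, 2))  # 1.03
```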
See also
Distortion
Radio propagation
Plasma physics
Notes
References
In the paper "An hereditary theory of the Luxemburg effect" (English translation of the title), written only a few years after the discovery of the effect itself, Dario Graffi proposes a theory of the Luxemburg effect based on Volterra's theory of hereditary phenomena.
Radio spectrum | Luxemburg–Gorky effect | Physics | 385 |
9,246,889 | https://en.wikipedia.org/wiki/Netgraph | netgraph is the graph-based kernel networking subsystem of FreeBSD since 3.4 and of DragonFly BSD since its fork from FreeBSD. Netgraph provides support for L2TP, PPTP, ATM, and Bluetooth using a modular set of nodes that form the graph.
Netgraph has also been ported to other operating systems:
NetBSD kernel 1.5V (not integrated into mainline kernel)
Linux kernel 2.4 and 2.6 by 6WIND (Commercial closed source port)
Linux kernel 3.0 by LANA
History
Netgraph was originally designed and implemented at Whistle Communications by Julian Elischer and Archie Cobbs for the Whistle InterJet small office router product. The purpose of the project was to create a flexible framework for implementing new networking protocols. Key requirements included the ability to prototype with user-space programs while still retaining the ability to interact with data flows normally hidden within the kernel.
References
External links
netgraph(4) man page
Netgraph article
BSD_software
Free network-related software | Netgraph | Technology | 211 |
36,967,101 | https://en.wikipedia.org/wiki/Iota%20Coronae%20Borealis | Iota Coronae Borealis, Latinized from ι Coronae Borealis, is a binary star system in the constellation Corona Borealis. It is visible to the naked eye with a combined apparent visual magnitude of 4.96. Based upon an annual parallax shift of 10.46 mas as seen from the Earth, it is located about 312 light years from the Sun.
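The quoted distance follows directly from the parallax (d in parsecs = 1000 / p in milliarcseconds, with 1 pc ≈ 3.2616 light years); a quick check, using an illustrative helper function:

```python
def parallax_to_light_years(parallax_mas):
    """Distance from annual parallax: d [pc] = 1000 / p [mas]; 1 pc ≈ 3.2616 ly."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * 3.2616

# 10.46 mas corresponds to ~95.6 pc, i.e. about 312 light years.
print(round(parallax_to_light_years(10.46)))  # 312
```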
This is a single-lined spectroscopic binary with an orbital period of 35.5 days and an eccentricity of 0.56. The visible member, component A, has a stellar classification of , indicating it is a chemically peculiar mercury-manganese star with narrow absorption lines. The secondary member, component B, appears to be an A-type star.
References
A-type giants
Mercury-manganese stars
Spectroscopic binaries
Corona Borealis
Coronae Borealis, Iota
Durchmusterung objects
Coronae Borealis, 14
143807
078493
5971 | Iota Coronae Borealis | Astronomy | 200 |
32,192,223 | https://en.wikipedia.org/wiki/Symphytum%20%C3%97%20uplandicum | Russian comfrey or Quaker comfrey (Symphytum × uplandicum, syn. S. peregrinum auct.) is a common hybrid between Symphytum officinale and S. asperum. It represents the economically most important kind of comfrey.
It occurs naturally in the Caucasus region, where it grows in waste areas and disturbed soils. It has been introduced as a crop in many places around the world, and is widespread in the British Isles, where it interbreeds with S. officinale.
Description
It is a perennial herb growing to heights of up to 2 meters (6'). Above ground the plant is hairy, but not spiny. Its root system has a pronounced, deep-reaching taproot.
Along the erect, branched stems grow large simple, mostly stalked leaves. On the lower stem they are arranged in an alternate pattern. In the upper parts they may be opposite and are stalkless, shortly decurrent, or more or less fused around the stem. The leaf blade is up to 25 centimeters (10") long and never cordate. There are no stipules.
The flowering period is from May to August. The inflorescences are forked cymes. It does not have bracts.
The hermaphrodite flowers are radially symmetrical with five petals and a dichlamydeous perianth. The five sepals are fused into a 5 to 7 millimetres (around ¼") long calyx with usually pointed calyx lobes. The five petals are either initially pink and later blue or permanently purple. The corolla measures 12 to 18 millimetres (¼" to ¾") in diameter. The filaments of the five stamens are narrower than their anthers.
The fruits are segmented into four egg-shaped nutlets whose surfaces are brown, dull, and finely granular. They measure 3 to 4 by 2 to 2.5 millimeters.
The chromosome count is 2n = 36.
Similar species
Symphytum officinale has decurrent leaf bases, winged stem internodes, and seeds with shiny black surfaces.
Taxonomy
Symphytum uplandicum is an interspecific hybrid between Symphytum asperum and Symphytum officinale. It is itself parent to the multiple-cross hybrids Symphytum × hidcotense P. D. Sell: (together with Symphytum grandiflorum DC.) and Symphytum × perringianum P. H. Oswald & P. D. Sell (together with Symphytum orientale L.).
The epithet uplandicum refers to the Swedish province of Uppland, where the observation for the official first formal scientific species description was made by Carl Frederik Nyman and published in 1855 in Sylloge Florae Europaeae.
Symphytum peregrinum auct. non Lepech. is regarded as a synonym.
Several forms have been described, initially as independent hybrids.
Uses
Its hybrid vigour, and hence yield potential, makes Russian comfrey the preferred Symphytum crop. After two years of establishment, the robust and undemanding perennial crop enables very high protein yields. In addition to medicinal, horticultural and ornamental use, it is also known as animal feed and even for human consumption. However, concerns about possible liver damage due to prolonged uptake of the pyrrolizidine alkaloids contained in the plant have for some time been a cause for restraint in its use, especially as food (the "comfrey crisis"). Since around the year 2000, there have even been international bans on products containing comfrey. Since 2008, an alkaloid-free variety has been known.
Notable cultivated varieties are "Bocking No. 4" and "Bocking No. 14" from the English Henry Doubleday Research Association (HDRA), as well as "Harras" as the first alkaloid-free cultivar. Bocking No. 14 is an early-season variety that is high in allantoin and potassium and resistant to comfrey rust. Lower in allantoin and higher in protein, Bocking No. 4 is recommended for human consumption and for feeding poultry.
The plant is also a good nectar source.
Several ornamental varieties exist, e.g. with variegated leaves or different flower colours.
Medicinal uses
The plant parts of Russian comfrey are used for medicinal purposes (mainly because of the allantoin content). They are made into a salve that accelerates wound healing and relieves muscle and joint pain, among other things.
Garden uses
The plants are soil-tolerant heavy feeders with high biomass production. The protein- and therefore nitrogen-rich leaves are valued as a high-quality fertiliser. They are used, for example, for making a fermented liquid fertiliser or as mulch. As the plant can accept rather aggressive raw manure in larger amounts, it can be used to convert it into a more amenable fertiliser. The deep root system loosens the soil and accesses nutrients from greater depths, transporting them to higher soil layers via decaying plant matter.
Cultivation history
Catherine II traditionally employed garden masters from England or Scotland at her palace in St. Petersburg. In this capacity, Joseph Busch had, since the late 18th century, planted beds of prickly and common comfrey, flowering side by side for an interesting contrast of colour, and had already sent various comfrey plants to his business successor back in London. Having experimented with prickly comfrey for agricultural use since 1810, gardener and inventor Henry Doubleday heard of comfrey's sticky properties when he was searching for a substitute for the unreliable supply of gum arabic, hoping to be able to develop a new adhesive for postage stamps. In the early 1870s he used this connection to order comfrey plants from Busch's successor. The imperial gardener did not touch his predecessor's well-established plantings, but instead he sent chance seedlings that had grown between the rows: F1 hybrids of prickly and common comfrey. From 1877, Thomas Christy's book Forage Crops made this "Russian" comfrey known as a crop.
Sources
Bogumil Pawłowski: Symphytum. In:
uplandicum
Hybrid plants
Plants described in 1855 | Symphytum × uplandicum | Biology | 1,301 |
1,589,135 | https://en.wikipedia.org/wiki/Hyperbolic%20manifold | In mathematics, a hyperbolic manifold is a space where every point looks locally like hyperbolic space of some dimension. They are especially studied in dimensions 2 and 3, where they are called hyperbolic surfaces and hyperbolic 3-manifolds, respectively. In these dimensions, they are important because most manifolds can be made into a hyperbolic manifold by a homeomorphism. This is a consequence of the uniformization theorem for surfaces and the geometrization theorem for 3-manifolds proved by Perelman.
Rigorous definition
A hyperbolic n-manifold is a complete Riemannian n-manifold of constant sectional curvature −1.
Every complete, connected, simply-connected manifold of constant negative curvature −1 is isometric to the real hyperbolic space H^n. As a result, the universal cover of any closed manifold M of constant negative curvature −1 is H^n. Thus, every such M can be written as H^n/Γ, where Γ is a torsion-free discrete group of isometries on H^n. That is, Γ is a discrete subgroup of Isom(H^n). The manifold M has finite volume if and only if Γ is a lattice.
Its thick–thin decomposition has a thin part consisting of tubular neighborhoods of closed geodesics and ends which are the product of a Euclidean (n−1)-manifold and the closed half-ray. The manifold is of finite volume if and only if its thick part is compact.
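The quotient description above (writing H^n for real hyperbolic n-space and Γ for the discrete, torsion-free group of isometries) can be summarized in a display:

```latex
M \;\cong\; \mathbb{H}^{n} / \Gamma ,
\qquad \Gamma \le \operatorname{Isom}(\mathbb{H}^{n})
\ \text{discrete and torsion-free},
\qquad \operatorname{vol}(M) < \infty \iff \Gamma \ \text{is a lattice}.
```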
Examples
The simplest example of a hyperbolic manifold is hyperbolic space, as each point in hyperbolic space has a neighborhood isometric to hyperbolic space.
A simple non-trivial example, however, is the once-punctured torus. This is an example of an (Isom(H^2), H^2)-manifold. It can be formed by taking an ideal rectangle in H^2 – that is, a rectangle whose vertices lie on the boundary at infinity, and thus do not exist in the resulting manifold – and identifying opposite sides.
In a similar fashion, we can construct the thrice-punctured sphere, shown below, by gluing two ideal triangles together. This also shows how to draw curves on the surface – the black line in the diagram becomes the closed curve when the green edges are glued together. As we are working with a punctured sphere, the colored circles in the surface – including their boundaries – are not part of the surface, and hence are represented in the diagram as ideal vertices.
Many knots and links, including some of the simpler knots such as the figure eight knot and the Borromean rings, are hyperbolic, and so the complement of the knot or link in the 3-sphere S^3 is a hyperbolic 3-manifold of finite volume.
Important results
For n > 2, the hyperbolic structure on a finite-volume hyperbolic n-manifold is unique by Mostow rigidity, and so geometric invariants are in fact topological invariants. One of these geometric invariants used as a topological invariant is the hyperbolic volume of a knot or link complement, which can allow us to distinguish two knots from each other by studying the geometry of their respective manifolds.
See also
Hyperbolic 3-manifold
Hyperbolic space
Hyperbolization theorem
Margulis lemma
Normally hyperbolic invariant manifold
References
Hyperbolic geometry
Manifolds
Riemannian manifolds | Hyperbolic manifold | Mathematics | 635 |
11,558,054 | https://en.wikipedia.org/wiki/Microelectrode | A microelectrode is an electrode used in electrophysiology either for recording neural signals or for the electrical stimulation of nervous tissue (they were first developed by Ida Hyde in 1921). Pulled glass pipettes with tip diameters of 0.5 μm or less are usually filled with 3 molar potassium chloride solution as the electrical conductor. When the tip penetrates a cell membrane, the lipids in the membrane seal onto the glass, providing an excellent electrical connection between the tip and the interior of the cell, which is apparent because the microelectrode becomes electrically negative compared to the extracellular solution. There are also microelectrodes made with insulated metal wires, made from inert metals with a high Young's modulus such as tungsten, stainless steel, or platinum-iridium alloy and coated with a glass or polymer insulator with exposed conductive tips. These are mostly used for recording from the external side of the cell membrane. More recent advances in lithography have produced silicon-based microelectrodes.
See also
Single-unit recording
Microelectrode array
References
Neurophysiology
Physiology
Electrophysiology
Laboratory techniques | Microelectrode | Chemistry,Biology | 238 |
18,486,546 | https://en.wikipedia.org/wiki/Guanoxan | Guanoxan is a sympatholytic drug that was marketed as Envacar by Pfizer in the UK to treat high blood pressure. It was not widely used and was eventually withdrawn from the market due to liver toxicity.
References
Adrenergic release inhibitors
Benzodioxans
Guanidines
Hepatotoxins
Withdrawn drugs | Guanoxan | Chemistry | 75 |
56,835,885 | https://en.wikipedia.org/wiki/Fairfield%20Experiment | The Fairfield experiment was an experiment in industrial relations carried out at the Fairfield Shipbuilding and Engineering Company, Glasgow, during the 1960s. The experiment was initiated by Sir Iain Maxwell Stewart, industrialist, chairman of Thermotank Ltd, and signatory to the Marlow Declaration of the early 1960s, and supported by George Brown, the First Secretary in Harold Wilson's cabinet, in 1966. The company was facing closure, and Brown agreed to provide £1 million (about £13 million in 2021 terms) to enable the trade unions, the management, and the shareholders to try out new ways of industrial management.
The Bowler and the Bunnet
The Bowler and the Bunnet was a film directed by Sean Connery and written by Cliff Hanley about the Fairfield Experiment.
References
Operations research
Govan
Shipbuilding
Industry in Scotland | Fairfield Experiment | Mathematics,Engineering | 167 |
89,231 | https://en.wikipedia.org/wiki/Deamination | Deamination is the removal of an amino group from a molecule. Enzymes that catalyse this reaction are called deaminases.
In the human body, deamination takes place primarily in the liver; however, it can also occur in the kidney. In situations of excess protein intake, deamination is used to break down amino acids for energy. The amino group is removed from the amino acid and converted to ammonia. The rest of the amino acid is made up of mostly carbon and hydrogen, and is recycled or oxidized for energy. Ammonia is toxic to the human system, and enzymes convert it to urea or uric acid by addition of carbon dioxide molecules (which is not considered a deamination process) in the urea cycle, which also takes place in the liver. Urea and uric acid can safely diffuse into the blood and then be excreted in urine.
Deamination reactions in DNA
Cytosine
Spontaneous deamination is the hydrolysis reaction of cytosine into uracil, releasing ammonia in the process. This can occur in vitro through the use of bisulfite, which deaminates cytosine, but not 5-methylcytosine. This property has allowed researchers to sequence methylated DNA to distinguish non-methylated cytosine (shown up as uracil) and methylated cytosine (unaltered).
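The bisulfite-based distinction described above can be illustrated with a toy Python model; the function name and the convention of passing methylated positions as a set are invented for illustration, not taken from any bioinformatics library:

```python
def bisulfite_convert(sequence, methylated_positions):
    """Toy model of bisulfite treatment: unmethylated C deaminates to U
    (read as T after sequencing and PCR), while 5-methylcytosine is protected
    and still reads as C."""
    out = []
    for i, base in enumerate(sequence):
        if base == "C" and i not in methylated_positions:
            out.append("T")  # C -> U, sequenced as T
        else:
            out.append(base)
    return "".join(out)

# Comparing treated and untreated reads reveals which cytosines were methylated:
print(bisulfite_convert("ACGCGT", methylated_positions={3}))  # "ATGCGT"
```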
In DNA, this spontaneous deamination is corrected for by the removal of uracil (product of cytosine deamination and not part of DNA) by uracil-DNA glycosylase, generating an abasic (AP) site. The resulting abasic site is then recognised by enzymes (AP endonucleases) that break a phosphodiester bond in the DNA, permitting the repair of the resulting lesion by replacement with another cytosine. A DNA polymerase may perform this replacement via nick translation, a terminal excision reaction by its 5'⟶3' exonuclease activity, followed by a fill-in reaction by its polymerase activity. DNA ligase then forms a phosphodiester bond to seal the resulting nicked duplex product, which now includes a new, correct cytosine (Base excision repair).
5-methylcytosine
Spontaneous deamination of 5-methylcytosine results in thymine and ammonia. This is the most common single nucleotide mutation. In DNA, this reaction, if detected prior to passage of the replication fork, can be corrected by the enzyme thymine-DNA glycosylase, which removes the thymine base in a G/T mismatch. This leaves an abasic site that is repaired by AP endonucleases and polymerase, as with uracil-DNA glycosylase.
Cytosine deamination increases C-To-T mutations
A known result of cytosine methylation is the increase of C-to-T transition mutations through the process of deamination. Cytosine deamination can alter many of the genome's regulatory functions; previously silenced transposable elements (TEs) may become transcriptionally active due to the loss of CpG sites. TEs have been proposed to accelerate the mechanism of enhancer creation by providing extra DNA that is compatible with host transcription factors, which ultimately influences C-to-T mutation rates.
Guanine
Deamination of guanine results in the formation of xanthine. Xanthine, however, still pairs with cytosine.
Adenine
Deamination of adenine results in the formation of hypoxanthine. Hypoxanthine, in a manner analogous to the imine tautomer of adenine, selectively base pairs with cytosine instead of thymine. This results in a post-replicative transition mutation, where the original A-T base pair transforms into a G-C base pair.
Additional proteins performing this function
APOBEC1
APOBEC3A-H, APOBEC3G - affects HIV
Activation-induced cytidine deaminase (AICDA)
Cytidine deaminase (CDA)
dCMP deaminase (DCTD)
AMP deaminase (AMPD1)
Adenosine Deaminase acting on tRNA (ADAT)
Adenosine Deaminase acting on dsRNA (ADAR)
Double-stranded RNA-specific editase 1 (ADARB1)
Adenosine Deaminase acting on mononucleotides (ADA)
Guanine Deaminase (GDA)
See also
Adenosine monophosphate deaminase deficiency type 1
Hofmann elimination
References
Biochemical reactions
Metabolism
Substitution reactions | Deamination | Chemistry,Biology | 978 |
1,414,876 | https://en.wikipedia.org/wiki/Substitute%20natural%20gas | Substitute natural gas (SNG), or synthetic natural gas, is a fuel gas (predominantly methane, CH4) that can be produced from fossil fuels such as lignite coal, oil shale, or from biofuels (when it is named bio-SNG) or using electricity with power-to-gas systems.
SNG in the form of LNG or CNG can be used in road, rail, air and marine transport vehicles as a substitute for costly diesel, petrol, etc. The carbon footprint of SNG derived from coal is comparable to that of petroleum products, while bio-SNG has a much smaller carbon footprint. LPG can also be produced by synthesising SNG with partial reverse hydrogenation at high pressure and low temperature. LPG is more easily transportable than SNG, more suitable as a fuel in two-wheelers or smaller-horsepower vehicles and engines, and also fetches a higher price on the international market due to short supply.
Renewable electrical energy can also be used to create SNG (methane): hydrogen is produced, for example by electrolysis of water or by running a PEM fuel cell in reverse, and is then reacted with CO2 (captured, for example, through carbon capture and storage/utilisation) in the Sabatier reaction.
CO2 + 4H2 → CH4 + 2H2O
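As an illustration of the stoichiometry of the Sabatier reaction, a simple ideal mass balance can be computed (assuming complete conversion and rounded molar masses; the function is a sketch, not part of any process-design tool):

```python
# Molar masses in g/mol (rounded); values are standard but approximate.
M_CO2, M_H2, M_CH4, M_H2O = 44.01, 2.016, 16.04, 18.015

def sabatier_products(co2_kg):
    """Ideal (100%-conversion) yields for CO2 + 4 H2 -> CH4 + 2 H2O."""
    mol_co2 = co2_kg * 1000 / M_CO2
    h2_kg = 4 * mol_co2 * M_H2 / 1000
    ch4_kg = mol_co2 * M_CH4 / 1000
    h2o_kg = 2 * mol_co2 * M_H2O / 1000
    return h2_kg, ch4_kg, h2o_kg

h2, ch4, h2o = sabatier_products(1.0)
# Roughly: 1 kg CO2 + 0.183 kg H2 -> 0.364 kg CH4 + 0.819 kg H2O
print(f"{h2:.3f} kg H2 -> {ch4:.3f} kg CH4 + {h2o:.3f} kg H2O")
```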
Distribution
It is advantageous to distribute SNG and bio-SNG together with natural gas in a gas grid. In this way, the production of renewable gas can be phased in at the same rate as production capacity is increased. The market and infrastructure that natural gas has established are a precondition for the large-scale introduction of renewable biomethane produced through anaerobic digestion (biogas) or through gasification and methanation (bio-SNG).
Projects
The Great Plains Synfuels Plant injects approximately 4.1 million m3/day of SNG from lignite coal into the United States national gas grid. The production process of SNG at the Great Plains plant involves gasification, gas cleaning, shift, and methanation. China is constructing nearly 30 large SNG production plants fed by coal and lignite, with an aggregate annual capacity of 120 billion standard cubic metres of SNG.
See also
Landfill gas
Renewable natural gas
Oil shale gas
Power to gas
References
External links
SGC Rapport 187 Substitute natural gas from biomass gasification
SGC-rapport on gasification and methanation
Natural gas
Synthetic fuel technologies | Substitute natural gas | Chemistry | 495 |